thinking like a trader in a polarized world
a month or so ago i was at a SIG mixer talking to a couple of quant traders, doing the usual “so what should i do if i want your job?” thing, when i asked them for book recommendations and what reading list they were assigned when they got their initial offers.
i expected a list of quant finance bibles. options pricing. stochastic calculus and stochastic processes. maybe even a 900-page tombstone of greek letters.
instead:
- not a single finance book
- one trader told me SIG explicitly told him not to learn anything from his university derivatives class before his internship
they were essentially told “we’ll teach you what you need, just don’t come in pre-loaded with the wrong mental models.”
and then their actual recommendation?
superforecasting by philip tetlock and dan gardner.
no pricing PDEs. just a book about how to think.
that conversation pulled into focus something that had been floating around in the background of how i see the world: the real bottleneck in high-signal environments isn’t domain knowledge, it’s mindset. how you learn, how you update, how you handle uncertainty, etc.
finance is just one example but this exact problem is everywhere right now, especially in tech.
let's unpack further.
why trading firms don’t give a sh*t about a university derivatives course
firms like SIG already assume a bunch of things about you:
- you can learn fast
- you can do math under pressure
- you won’t completely melt like a child the first time your PnL swings against you
given that, teaching you some mechanical finance knowledge is easy. there are textbooks, internal wikis, training programs. if you’re curious and not an idiot, they can easily teach you the inner workings of a black-scholes model.
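to be concrete about how teachable that part is, here’s a minimal sketch of the textbook black-scholes price for a european call. the function and parameter names are mine (not anything SIG actually hands out), and it’s obviously not how a desk prices anything, but it’s the kind of mechanical knowledge a motivated intern can absorb in an afternoon:

```python
# minimal sketch of the textbook black-scholes formula for a european call.
# names and example numbers are mine, purely for illustration.
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(spot, strike, t, r, sigma):
    """call price given spot, strike, time to expiry in years,
    annualized risk-free rate r, and annualized volatility sigma."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = NormalDist().cdf
    return spot * n(d1) - strike * exp(-r * t) * n(d2)

print(bs_call(spot=100, strike=105, t=0.5, r=0.03, sigma=0.25))  # a 6-month call
```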
what’s hard to teach is:
- how you break ambiguous problems into smaller ones
- whether you notice base rates before your feelings
- whether you’re willing to say “73% likely” instead of “definitely”
- if you can change your mind in public without ego death
that’s the stuff superforecasting is about.
tetlock's research starts from a brutal finding:
most "experts" are barely better than chance at predicting political and economic events, and often worse than simple models because they're overconfident and under-calibrated. (SoBrief)
then he identifies a small group of people ("superforecasters") who consistently beat everyone else in large forecasting tournaments, including intelligence analysts who literally had classified information. (Stanford University)
they’re not all geniuses. they’re just relentlessly good at thinking in probabilities. they’re perpetual learners with a bayesian intuition baked into how they see the world.
these are the key traits tetlock highlights, the ones you see over and over in this group:
- they think like foxes, not hedgehogs: lots of small models instead of one big theory of everything.
- they’re comfortable living in the gray area
- they keep score (brier scores, calibration curves, etc.) and let feedback hurt their ego enough to improve, but not so much they quit (more on what keeping score looks like right after this list)
- they update constantly instead of doubling down on a vibes-based approach
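“keeping score” sounds abstract, so here’s a minimal sketch of the simplest version: write each forecast down as a probability, record whether the thing happened, and compute a brier score (mean squared error between your probabilities and the 0/1 outcomes; lower is better). the forecasts below are made up for illustration:

```python
# minimal "keep score" sketch. each entry is (stated probability, what actually happened),
# with outcome 1 if it happened and 0 if it didn't. numbers are made up.
forecasts = [
    (0.73, 1),  # said 73% likely, it happened
    (0.20, 0),  # said 20% likely, it didn't
    (0.90, 0),  # confident miss: this is where the score hurts
    (0.55, 1),
]

# brier score: 0.0 is perfect, 0.25 is what always saying "50%" earns you, 1.0 is maximally wrong.
brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"brier score: {brier:.3f}")
```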
so when SIG assigns their traders the book, what they're saying is:
if your brain works like this, we can make you dangerous. if it doesn’t, we’d rather know before we hand you the keys to manage the risk of a portfolio worth a few hundred million.
that clicked for me because i already kind of live like that. i’m obsessive about semantics. i care about the exact hex code of gray, not just “light” vs “dark”, and superforecasting basically grabbed that impulse by the collar and handed it a playbook on how to weaponize it.
foxes, hedgehogs, and late-stage capitalism
tetlock borrows this metaphor from isaiah berlin:
- hedgehogs: ideological, overwhelmingly confident, emotionally and mentally attached to one big idea, everything is a nail for their favorite hammer. tetlock calls them “big-idea people”
- foxes: many small ideas (the dragonfly eye), willing to mix and match, update, and generally able to admit when reality doesn’t match their initial expectations
this maps cleanly onto how people talk about... basically everything now.
i was in berkeley visiting a friend recently, and we got to talking about rising polarization. he mentioned he hears a lot of people chalk it all up to “late-stage capitalism”.
now, it’s important to note that (imho) a lot of people tend to use it as a catch-all explanation:
housing crisis? late-stage capitalism.
weird dating dynamics? late-stage capitalism.
nobody reads long-form anymore? late-stage capitalism.
and ofc, there are real structural problems:
- shrinking middle class
- odd, bimodal labor markets (either low-skill, low-pay or insanely “elite” roles with zero middle ground)
- people postponing or skipping kids because life is too expensive
- everything mediated by algorithms that reward outrage and spectacle
but the way we talk about this stuff is almost always hedgehog-brained:
- pick a single story (capitalism, patriarchy, immigration, ai tech bros, whatever)
- crank the narrative dial to 11
- ignore any data that doesn’t fit
- double down on the initial take and aggressively dismiss any counter POVs however possible.
the more chaotic and complex the world gets, the more people cling to big simple stories. they want black and white. they want good guys and bad guys. they want to feel validated and as though their perspective is the right one.
and if you’re the type who genuinely enjoys arguing over the exact hex code of gray something is, you tend to come across to this group as:
- obnoxious
- boring
- overly serious
- pedantic
- maniacal, even
the wild part, which tetlock shows with actual data, is that those “semantics sickos” (the people obsessed with nuance, calibration, and conditional statements) are the ones who end up consistently less wrong about the world over long time horizons.
the gap between:
the world is collapsing because late-stage capitalism!!!
and
here are 9 overlapping forces, their base rates, and a 10-year probability distribution over outcomes
...is the same gap between hedgehogs and foxes.
and that brings us to everyone's favorite topic, the state of the job market in tech.
tech as a case study in polarization
if you want a live demo of late-stage capitalism + hedgehog thinking + miscalibrated forecasting, look at the tech job market right now.
on one side:
- largest number of CS majors and bootcamp grads in history
- endless chatgpt-generated linkedin posts about “breaking into tech in 90 days”
- viral advice threads from people who’ve written like 3 scripts total
on the other:
- a tiny fraction of people who are actually more cracked than ever
- deep understanding of fundamentals, systems, and their own personal toolbox
- can build nontrivial things that solve a problem end-to-end
- can reason from first principles instead of just parroting frameworks
- capable of learning things faster thanks to the technology and sheer volume of information at our disposal today
and there’s basically no middle. it’s all polarized:
- tons of people who can talk about tech but can’t ship
- a small group who are terrifyingly competent
- and a hiring landscape that’s somehow both oversaturated and starved of talent at the same time
AI pours gasoline on this:
- makes it easier than ever to sound competent
- makes it harder to distinguish between “ctrl-c + ctrl-v straight from chatgpt” and “fundamentally understood and derived”
- punishes anyone whose value is “i can manually do something a model can now do faster”
thus, you end up with:
- more noise
- more people selling hedgehog stories (“AI will kill all dev jobs” vs “AI is just autocomplete lol”)
- hiring managers who overcorrect in dumb ways because they don’t have good mental models for the new landscape
and underneath all of this, almost none of the discussion happens in tetlock-style probabilities.
no one ever says:
i’m around 60-70% confident junior SWE roles in big tech will be structurally rarer over the next 5 years, 10-20% confident they’ll partially rebound, and 10-20% that we’re wrong and this is a short-term overcorrection. here’s why and how i came to that conclusion, and here’s what could change my mind.
it’s always:
tech is dead for juniors
or
no, tech is fine you just need to grind leetcode bro
hedgehog brain vs hedgehog brain, yelling to see whose voice is louder and measuring their accuracy by the volume of attention their opinion receives.
why this made me want to trade
one of the big reasons i’m drawn to trading is that the daily incentives line up with what i'm looking for in a long-term career:
- you are forced to put numbers on your beliefs
- you get marked to market daily
- being confidently wrong actually costs money, fast
- updating is mandatory, not optional
it’s basically superforecasting with higher stakes and better feedback loops.
also, selfishly, the traits the industry optimizes for are the same ones i've valued and coveted since i was in kindergarten trying to solve the equation for becoming the fastest kid on the playground:
- curiosity over credentials
- careful thinking over hot takes
- loving the messy details instead of worshipping one big story
- willingness to ask “what probability would make this a good bet?” instead of “am i right?”
when those SIG traders were told “don’t learn about derivatives, read superforecasting,” they were effectively told:
we care more about how you see the world than whether you know the closed-form solution for an option price.
i find that reassuring. and also a little damning for how the overwhelming majority of the world operates.
trying to be slightly ~~better~~ less wrong
so what do you actually do with all of this if you’re a student / early career person staring at the polarized chaos?
here’s what i’m trying, albeit very much still a work in progress:
1. log everything
it's hard to argue with a spreadsheet. if you want to achieve something:
- set out a plan of action by breaking things down into smaller chunks that you can tackle on a day-to-day basis. the goal is to eliminate as much daily friction as possible.
- establish metrics that you can look back on to objectively assess progress and performance.
- log your daily & weekly progress and take notes on how you did each day/week like:
- when did you bite off more than you could chew?
- when did it feel like you were just going through the motions?
- or when was the work just too easy?
- frequently reassess not only your progress but also your metrics and success criteria, so you optimize for what truly drives your progress and throw out things that don’t actually move the needle.
this approach sucks because, again, you can’t argue with what the numbers say about your laziness, your gaps, etc., and that’s the whole point.
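for what it’s worth, the log doesn’t have to be fancy. a spreadsheet works, and so does a dumb csv you append to every day. a minimal sketch (the file name and columns are just my placeholders, swap in whatever metrics you actually care about):

```python
# minimal daily-log sketch: one row per day, appended to a csv you can't argue with later.
# the path and column choices are placeholders, not a prescription.
import csv
from datetime import date

def log_day(planned, done, notes, path="progress_log.csv"):
    """append one row: today's date, what you planned, what you actually did, honest notes."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), planned, done, notes])

log_day(
    planned="finish chapter 3 problem set",
    done="2 of 5 problems",
    notes="bit off more than i could chew, underestimated problem 2",
)
```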
2. force yourself out of hedgehog mode
whenever i catch myself saying “it’s because of X”:
- i pause and force out at least two other plausible underlying drivers, if not more
- then i ask: “if X were false, how much would that really change the outcome?”
- if the answer is “not much,” then X wasn’t the core model, and it was likely just my own personal narrative
this is especially important when talking about “late-stage capitalism” or “AI will...” type topics because they’re magnets for lazy explanations.
3. treat skills like a distribution, not a badge
in a polarized market:
- the middle gets hollowed out
- the bottom is crowded
- the top is small but extremely rewarded
if you’re serious, your only real option is to sprint as far up the right tail as you can.
concretely, for tech:
- attack fundamentals (systems, math, statistics) instead of memorizing interview patterns and obsessing over what you think a non-technical HR recruiter will find impressive on a resume
- build things that hurt a little. stuff that interacts with the real world, has users, hits limits
- ask “could GPT-7 do this better than me?” and if the answer is “yes, easily,” it's probably time to move upstack
4. keep your identity small
superforecasters generally don’t wrap their identity around a single ideology or prediction.
this is the opposite of twitter/linkedin-brain, where every take is tied to “my brand.”
the smaller your identity, the easier it is to say:
yeah, i was wrong, new data, updated.
because the faster you update, the less damage you do to your future self.
a bayesian mindset is probably one of the most important habits for being successful in anything you do in life.
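if “bayesian mindset” sounds hand-wavy, the mechanical version is just this: start from a prior, see some evidence, and update in proportion to how much more likely that evidence is if you’re right than if you’re wrong. a minimal sketch with made-up numbers:

```python
# minimal bayes-update sketch with made-up numbers.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """posterior probability of the hypothesis after seeing the evidence (bayes' rule)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# e.g. i'm 30% confident a claim is true, then i see evidence that's 80% likely
# if it's true but only 20% likely if it's false. how confident should i be now?
print(f"{update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.20):.0%}")  # ~63%
```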
closing thoughts
i honestly did not expect a casual question to someone whose shoes i'd like to be in to send me into a spiral about late-stage capitalism, polarization, and the death of the competent middle.
but it did force a pretty clean separation in my head:
- there’s the part of the world that rewards being loud, simple, and certain
- and there’s the part that rewards being calibrated, nuanced, and willing to say “i don’t know, but here’s a 65% guess”
right now, the first part is winning the attention war. it’s why presidential debates feel like reality TV, why everyone speaks in absolutes (lol), and why every problem is viewed through the lens of big-idea people.
the second part quietly runs trading desks, research labs, and the few teams in tech still doing non-clown work.
and i'm confident which side i'd like to end up on.
if you’re a fellow logic-obsessed semantics goblin who enjoys arguing about the exact hex code of gray… good news: the world is polarizing, and that makes your nerd brain more valuable, not less, so long as you lean into the fox side, keep score, and stay just a little bit less wrong each year.