Three political positions that I think are severely underrated given the development of AGI:
1. @nathancofnas’ “hereditarian revolution” - the idea that the intellectual dominance of left-wing egalitarianism relies on group cognitive differences being taboo - is already very important.
But existing group cognitive differences pale in comparison to the ones that will emerge between baseline humans and:
- humans who leverage AI most effectively
- humans with brain-computer interfaces
- genetically engineered humans
- AIs themselves
Current cognitive differences already break politics; these will break it far more. So we need to be preparing for a future in which egalitarianism as an empirical thesis is (even more) obviously false.
I don’t yet have a concise summary of the implications of this position. But at the very least I want a name for it. Awkwardly, we don’t actually have a good word for “anti-egalitarian”. Hereditarian is too narrow (as is hierarchist) and elitist has bad connotations.
My candidate is “asymmetrist”. Egalitarianism tries to enforce a type of symmetry across the entirety of society. But our job will increasingly be to design societies where the absence of such symmetries is a feature not a bug.
2. Protectionism. Protectionism gets a bad rap, because global markets are very efficient. But they are very much not adversarially robust. If you are a small country and you open your borders to the currency, products and companies of a much larger country, then you will get short-term wealthier but also have an extremely hard time preventing that other country from gaining a lot of power over you in the long term. (As a historical example, trade was often an important precursor to colonial expansion. See also Amy Chua’s excellent book World on Fire, about how free markets enable some minorities to gain disproportionate power.)
When you’re poor enough, or the larger power is benevolent enough, this may well be a good deal! But we’re heading towards a future in which a) most people become far wealthier in absolute terms due to AI-driven innovation, and b) AIs will end up wielding a lot of power in not-very-benevolent ways (e.g. automated companies that have been given the goal of profit-maximization).
Given this, protectionism starts to look like a much better idea. The fact that it slows growth is not a problem, because society will already be reeling from the pace of change. And it lets you have much more control over the entities that are operating within your borders - e.g. you can monitor the use of AI decision-making within companies much more closely.
To put it another way, in the future the entire human economy will be the "smaller country" that faces incursions by currency, products and companies under the control of AIs (or humans who have delegated power to AIs). Insofar as we want to retain control, we shouldn't let people base those AIs in regulatory havens while those AIs still gain significant influence over Western countries.
Okay, but won’t protectionist countries just get outcompeted? Not if they start off with enough power to deter other countries from deploying power-seeking AIs. And right now, the world’s greatest manufacturing power is already fairly protectionist. So if the US moves in that direction too, it seems likely that the combined influence of the US and China will be sufficient to prevent anyone else from “defecting”. The bottleneck is going to be trust between the two superpowers.
3. National conservatism
All of the above is premised on the goal of preserving human interests in a world of much more powerful agents. This is inherently a kind of conservatism, and one which we shouldn’t take for granted. The tech right often uses the language of “winning”, but as I’ve observed before there will increasingly be a large difference between a *country* winning and its *citizens* winning. In the limit, a fully-automated country could flourish economically and politically without actually benefiting any of the humans within it.
National conservatism draws a boundary around a group of people and says “here are the people whose interests we’re primarily looking out for”. As Vance put it, America is a group of people with a shared history and a common future. Lose sight of that, and arguments about efficiency and productivity will end up turning it instead into a staging-ground for the singularity. (Nor can you run a country for the benefit of “all humans”, because then you’re in an adversarial relationship with your own citizens, who rightly want their leaders to prioritize them.)
China’s government has many flaws, but it does get this part right. They are a nation-state run by their own people for their own people. As part of that, they’re not just economically protectionist but also culturally protectionist - blocking foreign ideas from gaining traction on their internet. I don’t think this is a good approach for the West, but I think we should try to develop a non-coercive equivalent: mechanisms by which a nation can have a conversation with itself about what it should value and where it should go, with ideas upweighted when their proponents have “skin in the game”. Otherwise the most eloquent and persuasive influencers will end up just being AIs.
All of these ideas are very high-level, but they give an outline of why I think right-wing politics is best-equipped to deal with the rise of AI. There’s a lot more work to do to flesh them out, though.
Here’s a talk I gave on ideas related to those in point 1:

27.3.2025
We're heading towards a world where, in terms of skills and power, AIs are as far above humans as humans are above animals.
Obviously this has gone very badly for animals. So in a recent talk I ask: what political philosophy could help such a future go well?