https://applieddivinitystudies.com/2020/09/05/rationality-winning (a)

Excerpt:

So where are all the winners?
The people that jump to mind are Nick Bostrom (Oxford Professor of Philosophy, author), Holden Karnofsky and Elie Hassenfeld (run OpenPhil and GiveWell, directing ~300M in annual donations) and Will MacAskill (Oxford Professor of Philosophy, author).
But somehow that feels like cheating. We know rationalism is a good meme, so it doesn’t seem fair to cite people whose accomplishments are largely built off of convincing someone else that rationalism is important. They’re successful, but at a meta-level, only in the same way Steve Bannon is successful, and to a much lesser extent.

And this, from near the end:

The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse. Talking to non-rationalists, I am perpetually holding myself back from saying "oh yes, that’s just the thing where no one has coherent meta-principles" or "that’s the thing where facts are purpose-dependent". Talking to rationalists is not much better, since it feels less like a free exchange of ideas, and more like an exchange of "have you read post?"
There are some specific areas where rationality might help, like using Yudkowsky’s Inadequate Equilibria to know when it’s plausible to think I have an original insight that is not already "priced into the market", but even here, I’m not convinced these beat out specific knowledge. If you want to start a defensible monopoly, reading about business strategy or startup-specific strategy will probably be more useful than trying to reason about "efficiency" in a totally abstract sense.
And yet, I will continue reading these blogs, and if Slate Star Codex ever releases a new post, I will likely drop whatever I am doing to read it. This has nothing to do with self-improvement or "systematized winning".
It’s solely because weird blogs on the internet make me feel less alone.

Comments

The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren't yet at the top of their fields but that's unsurprising as most are 25-35.

The rationality community, inasmuch as it doesn't overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be to become thinkers and writers who offer the world fresh ideas and a unique perspective. That does seem to be the comparative advantage of that group, so it's not so surprising that we don't see lots of people e.g. getting rich. They mostly aren't trying to. 🤷‍♂️

[I only read the excerpts quoted here, so apologies if this remark is addressed in the full post.]

I think there's likely something to the author's observation, and I appreciate their frankness about why they think they engage with rationalist content. (I'd also guess they're far from alone in acting partly on this motivation.)

However, if we believe (as I think we should) that there is a non-negligible existential risk from AI this century, then the excerpt sounds too negative to me. 

  • While the general idea of AI risk didn't originate with them, my impression is that Yudkowsky and other early rationalists had a significant counterfactual impact on the state of the AI alignment field. And not just by convincing others of "rationalism" or AI risk worries specifically (though I also don't understand why the author discounts this type of 'winning'), but also by contributing object-level ideas. Even people who today have high-level disagreements with MIRI on AI alignment often engaged with MIRI's ideas, and may have developed their own thoughts partly in reaction against them. While it's far from clear how large or valuable this impact was, it seems at least plausible to me that without the work of early rationalists, the AI alignment field today wouldn't just be smaller but also worse in terms of the quality of its content.
  • There also arguably are additional 'rationalist winners' beyond the "people that jump to mind". To give just one example: in his document Some Key Ways In Which I've Changed My Mind, Holden Karnofsky (whom the author names) credits Carl Shulman in particular (arguably an early rationalist, though I don't know whether he identifies as such), as well as various other parts of the rationalist community and rationalist thought more broadly. This change of mind was arguably worth billions on certain views, and was significantly caused by people the author fails to mention.
  • Lastly, even from a very crude perspective that's agnostic about AI issues, going from 'a self-taught blogger' to 'senior researcher at a multi-million dollar research institute significantly inspired by their original ideas' arguably looks pretty impressive.

(Actually, maybe you don't need to believe in AI risk, as similar remarks apply to EA in general: while the momentum from GiveWell and the Oxford community may well have sufficed to get some sort of EA movement off the ground, it seems clear to me that the rationality community had a significant impact on EA's trajectory. Again, it's not obvious, but at least plausible, that there are some big wins hidden in that story.)

Are these 'winners' rare? Yes, but big wins are rare in general. Are 'rationalist winners' rarer than we'd predict based on some prior distribution of success for some reference population? I don't know. Are there various ways the rationality community could improve to increase its chances of producing winners? Very likely yes, but again I think that's the answer you should expect in general. My intuitive guess is that the rationality community tends to be worse than typical at some winning-relevant things (e.g. perhaps modeling and engaging in 'political'/power dynamics) and better at others (e.g. perhaps anticipating low-probability catastrophes), and I feel fairly unsure how this comes out on net.

(For disclosure, I say all of this as someone who, I suspect, tends to be more skeptical of and negative about the rationality community than most EAs, and who certainly is personally somewhat alienated and sometimes annoyed by parts of it.)

I like this comment. To respond to just a small part of it:

And not just by convincing others of "rationalism" or AI risk worries specifically (though I also don't understand why the author discounts this type of 'winning')

I've also only read the excerpt, not the full post. There, the author seems to exclude/discount as 'winning' only convincing others of rationalism, not convincing them of AI risk worries.

I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more likely to 'win' was by making one better at convincing others of rationalism, then that 'win' wouldn't provide any real value to the world; it could make the convincers rich and high-status, but by profiting off of something like a pyramid scheme. 

This would seem similar to a person writing a book or teaching a course on something like how to get rich quick, but with that person seeming to have gotten rich quick only via those books or courses.

(I think the same thing would maybe be relevant with regards to convincing people of AI risk worries, if those worries were unfounded. But my view is that the worries are well-founded enough to warrant attention.)

But I think that, if rationalism makes people systematically more likely to 'win' in other ways as well, then convincing others of rationalism: 

  • should also be counted as a 'proper win'
  • would be more like someone being genuinely good at running businesses as well as being good at getting money for writing about their good approaches to running businesses, rather than like a pyramid scheme

This might not count as winning in the sense of being extremely rich and successful by conventional standards, but I think people outside the forecasting space underestimate the degree to which superforecasters are disproportionately likely to be rationalist or rationalist-adjacent.

Registering that I think the poll here is likely (~60%?) to end up being >25% for P(interacts with rationality | is a superforecaster), which is way above base rates.
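
(For concreteness, a rough sketch of the arithmetic behind "way above base rates", where the 0.1% population base rate is purely an illustrative assumption rather than a measured figure:

$$\frac{P(\text{rationality} \mid \text{superforecaster})}{P(\text{rationality})} \approx \frac{0.25}{0.001} = 250$$

By Bayes' rule this ratio equals $P(\text{superforecaster} \mid \text{rationality}) \,/\, P(\text{superforecaster})$, so on these assumed numbers, someone who interacts with the rationality community would be roughly 250x likelier than baseline to be a superforecaster.)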

Update: as an empirical matter, I most likely did not predict the poll correctly.

Here are the poll results so far.

This post seems to fail to ask the fundamental question "winning at what?". If you don't want to become a leading politician or entrepreneur, then applying rationality skills obviously won't help you get there.

The EA community (which is distinct from the rationality community, a distinction the author fails to note) clearly has a goal, however: doing a lot of good. The amount of money GiveWell has been able to move to AMF has clearly grown a lot over the past ten years, but as the author says, that only proves they have convinced others of rationality. We still need to check whether deaths from malaria have actually gone down by a corresponding amount as a result of AMF doing more distributions. I am not aware of any investigations of this question.

Some people in the rationalist community likely only have 'understand the world really well' as their goal, whose success is hard to measure, though better forecasts are one observable example. I think the rationality community stocking up on food in February, before it was sold out everywhere, is a good example of a success, but probably not the sort of shining example the author might be looking for.

If your goal is to have a community where a specific rationalist-ish cluster of people shares ideas, it seems like the rationalist community has done pretty well.

[Edit: redacted for being quickly written, and in retrospect failing to engage with the author's perspective and the rationality community's stated goals]

[This comment is no longer endorsed by its author]

I found Roko's Twitter thread in response interesting, arguing that

  • being very successful requires very high conscientiousness, which is very rare, so it's no surprise that a small group hasn't seen much of it
  • the rationalist community makes people focus less on what their social peer groups consider appropriate/desirable, which is key to being supported by them

Personally, what comes to mind here is that I've always felt uneasy about not having a semi-solid grasp of *everything* from the bottom up, and the rationalist project has been great for helping me in that regard.

The question from the title reminds me of Sarah Constantin's 2017 blog post The Craft is not the Community, which I thought had some interesting related observations, analysis, and suggestions. (Though as an outsider of the Bay Area rationalist community I often can't independently assess its accuracy.)

I'm reminded of Romeo's comment about rationality attracting "the walking wounded" on a similar post from a couple years back.

I actually think rationality is doing pretty well, all things considered, though I definitely resonate with Applied Divinity Studies' viewpoint. Tsuyoku Naritai!
