All of L Rudolf L's Comments + Replies

(A) Call this "Request For Researchers" (RFR). OpenPhil has tried a more general version of this in the form of the Century Fellowship, but has since discontinued it. The Century Fellowship was in turn a Thiel Fellowship clone, like several other programs (e.g. Magnificent Grants). The early years of the Thiel Fellowship show that this can work, but I think it's hard to do well, and it does not seem like OpenPhil wants to keep trying.

(B) I think it would be great for some people to get support for multiple years. PhDs work like this, and good research can be hard to do over a s... (read more)

Yes, letting them specifically set a distribution would have been better, especially as this was implicitly done anyway in the data analysis. We'd want to normalise this somehow, either by trusting and/or checking that it's a plausible distribution (i.e. it sums to 1), or by just letting them rate things on a scale of 1-10 and then deriving an implied "distribution" from that.
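As a minimal sketch of that last option (the category names and ratings below are hypothetical, not taken from the actual survey), the implied "distribution" is just each rating divided by the sum of all ratings:

```python
# Hypothetical 1-10 ratings for how comfortable a fellow feels with each type of work
# (illustrative numbers only, not real survey data).
ratings = {"research": 8, "entrepreneurial projects": 6, "operations": 4}

# Normalise so the values sum to 1, giving the implied "distribution".
total = sum(ratings.values())
implied_distribution = {k: v / total for k, v in ratings.items()}

print(implied_distribution)
# {'research': 0.44..., 'entrepreneurial projects': 0.33..., 'operations': 0.22...}
```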

I agree that this is confusing. Also note:

 Interestingly, the increase in perceived comfort with entrepreneurial projects is larger for every org than that for research. Perhaps the (mostly young) fellows generally just get slightly more comfortable with every type of thing as they gain experience.

However, this is additional evidence that ERI programs are not increasing fellows' self-perceived comfort with research any more than they increase fellows' comfort with anything. It would be interesting to see if mentors of fellows think they have improved... (read more)
Reply from Sam Clarke (2y):
Cool, makes sense. Agreed. Asking mentors seems like the easiest thing to do here, in the first instance.

For "virtual/intellectual hub", the central example in my mind was the EA Forum, and more generally the way in which there's a web of links (both literal hyperlinks and vaguer things) between the Forum, EA-relevant blogs, work put out by EA orgs, etc. Specifically in the sense that if you stumble across and properly engage with one bit of it, e.g. an EA blog post on wild animal suffering, then there's a high (I'd guess?)  chance you'll soon see a lot of other stuff too, like being aware of centralised infrastructure like the Forum and 80k advising, an... (read more)

I mentioned the danger of bringing in people mostly driven by personal gain (though very briefly). I think your point about niche weirdo groups finding some types of coordination and trust very easy is underrated. As other posts point out, the transition to positive personal incentives to do EA stuff is a new thing that will cause some problems, and it's unclear what to do about it (though as that post also says, "EA purity" tests are probably a bad idea).

I think the maximally-ambitious view of the EA Schelling point is one that attracts anyone ... (read more)

I agree that in practice x-risk involves different types of work and people than e.g. global poverty or animal welfare. I also agree that there is a danger of x-risk / long-termism cannibalizing the rest of the movement, and this might easily lead to bad-on-net things like effectively trading large amounts of non-x-risk work for very little x-risk / long-termist work (because the x-risk people would have found their work anyway had x-risk been a smaller fraction of the movement, but as a consequence of x-risk preeminence a lot of other people are not... (read more)

There is currently an active cofounder matching process going on for an organisation to do this, expected to finish in mid-to-late June, with work starting a month or two later at the latest. Feel free to DM me or Marc-Everin Carauleanu (who independently submitted this idea to the FTX FF idea competition) if you want to know more.


Anything concrete about the exact nature of what service alignment researchers most need, how much this problem is estimated to block progress on alignment, pros and cons of existing orgs each having their own internal service fo... (read more)

I spoke with Yonatan at EAGx Oxford. Yonatan was very good at drilling down to the key uncertainties and decision points.

The most valuable thing was that he really understood the core "make something that people really want" lesson for startups. I thought I understood this (and at least on some abstract level did), but after talking with Yonatan I now have a much stronger model of what it actually takes to make sure you're doing this in the real world, and a much better idea of what the key steps should be in a plan that goes from finding a problem to starting a company around it.

New academic publishing system

Research that will help us improve, Epistemic Institutions, Empowering Exceptional People

It is well-known that the incentive structure for academic publishing is messed up. Changing publish-or-perish incentives is hard. However, one particular broken thing is that some journals operate on a model where they rent out their prestige to both authors (who pay to have their works accepted) and readers (who pay to read), extracting money from both while providing little value except their brand. This seems like a situation that coul... (read more)

Regular prizes/awards for EA art

Effective Altruism

Works of art (e.g. stories, music, visual art) can be a major force inspiring people to do something or care about something. Prizes can directly lead to work (see for example the creative writing contest), but might also have an even bigger role in defining and promoting some type of work or some quality in works. Creating a (for example) annual prize/award scheme might go a long way towards defining and promoting an EA-aligned genre (consider how the existence of Hugo and Nebula awards helps define and pr... (read more)

Prosocial social platforms

Epistemic institutions, movement-building, economic growth

The existing set of social media platforms is not particularly diverse, and the platforms that do exist often create negative externalities: reducing productive work hours, plausibly lowering epistemic standards, and increasing signalling/credentialism (by making easily legible credentials more important, and in some cases reducing the dimensionality of competition, e.g. LinkedIn reducing people to their most recent jobs and place of study, again making the competition for cred... (read more)

Contests like this seem to generate great content!

Meta note: is there some systematic way to discover and hear about EA Forum contests/prizes? My experience is that despite checking the forum front page fairly often, I usually only hear about a contest when the winning entries show up on the front page. Some page on the forum listing all prizes would be useful – does this exist?

Reply from Gavin (2y):
There's the "prize" tag. Any user can tag posts (or suggest new tags, actually).

I broadly agree with some of the other commenters. The goals of the EA Forum are different from those of Twitter, Reddit, and Facebook. There may well be a case for more audiovisual, engagement-optimized EA content, but moving the Forum in the direction of engagement-optimized, visually flashy internet platforms seems like a mistake (especially because such content can be hosted on platforms optimized for it, as RyanCarey suggests in his comment, while maintaining the EA Forum as one of the rare sites based on long-form text).

In terms of specific things... (read more)