riceissa

I am Issa Rice. https://issarice.com/

Comments

What are your main reservations about identifying as an effective altruist?

For me, I don't think there is a single dominant reason. Some factors that seem relevant are:

  • Moral uncertainty, both at the object level and about metaethics, which makes me uncertain about how altruistic I should be. Forming a community around "let's all be altruists" seems like an epistemic error to me, even though I am interested in figuring out how to do good in the world.
  • On a personal level, not having any close friends who identify as effective altruists. It feels natural and good to me that a community of people interested in the same things will also tend to develop close personal bonds. The fact that I haven't been able to do this with anyone in the EA community (despite having done so with people outside the community) is an indication that EA isn't "my people".
  • An insufficiently high number of people who I feel truly "get it" or who are actually thinking. I think of most people in the movement as followers or promoters, and not ones doing an especially good job at that.
  • A generic dislike of labels and of having identities. This doesn't explain everything, though, because I feel less repulsed by some labels than others (e.g. I feel less upset about calling myself a "rationalist" than about calling myself an "effective altruist").
Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?

Running an AMA on the EA Forum

Another idea is to set up conditional AMAs, e.g. "I will commit to doing an AMA if at least n people commit to asking questions." This has the benefit of giving each AMA its own time slot (without competing for attention with other AMAs) while minimizing the chance of wasted time and embarrassment.

Why "cause area" as the unit of analysis?

That one is linked from Owen's post.

Long-Term Future Fund: Ask Us Anything!

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

Long-Term Future Fund: Ask Us Anything!

I am wondering how the fund managers are thinking, longer-term, about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting a second or third time to grantees who have already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so, what do they tend to do?

I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.

Any thoughts on this topic?

Tiny Probabilities of Vast Utilities: Concluding Arguments

Ok, I see, thanks for the clarification! I hadn't noticed the phrase "the MIRI method", which does seem like an odd way to put it (if MIRI was in fact not involved in coming up with the model).

Tiny Probabilities of Vast Utilities: Concluding Arguments

MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .

The wording here makes it seem like MIRI and FHI created the models, but the link in the footnote indicates that the models were created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model, but it looks like MIRI wasn't involved in creating it (although the post author seems to have sent it to MIRI before publishing the post). I wonder if I'm missing something, though, or misinterpreting what you wrote.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment, but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points, even if you don't have time to write the full post.

EA considerations regarding increasing political polarization

I think the forum software hides comments from new users by default. You can go here (and click the "play" button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are only visible via their user pages, and not yet visible on this post.

Edit: The comments mentioned above are now visible on this post.
