David Mears

253 karma · Joined Aug 2021

Comments (40)
Here’s another one, which also lets you search volunteer profiles: https://www.impactcolabs.com/

Not sure if you’ve already submitted this to the volunteering opportunities board: https://ea-internships.pory.app/

That page has a link to an Airtable form where you can submit this opportunity.

In general, you can become aware of these projects by joining the relevant Facebook and Discord groups. Please DM me for links.

Cross-posting a top-level post: AGI Safety Fundamentals programme is contracting a low-code engineer

TL;DR: Help the AGI Safety Fundamentals, Alternative Protein Fundamentals, and other programmes by automating our manual work, so we can support larger cohorts of course participants more frequently.

Register interest here [5 mins, CV not required if you don’t have one].

If I were to guess what the 'disagreement' downvotes were picking up on, it would be this:

I see that as a definition driven by self-interest

Whereas to me, all of the adjectives 'proactive, ambitious, deliberate, goal-directed' are goal-agnostic, such that whether they end up being selfish or selfless depends entirely on what goal 'cartridge' you load into the slot (if you'll forgive the overly florid metaphor).

When I read the post that this OP is a response to, I'm "reading in" some context or subtext based on the fact that I know the author/blogger is an EA; something like "when giving life advice, I'm doing it to help you with your altruistic goals". As a result of that assumption, I take writing that looks like 'tips on how to get more of what you want' to be mainly justified by being about altruistic things you want.

As NinaR said, 'round these parts the word "agentic" doesn't imply self-interest. My own gloss of it would be "doesn't assume someone else is going to take responsibility for a problem, and therefore is more likely to do something about it". For example, if the kitchen at your workplace has no bin ('trashcan'), an agentic person might ask the office manager to get one, or even just order a cheap one themselves. Or if you see that the world is neglecting to consider the problem of insect welfare, instead of passively hoping that 'society will get its act together', you might think about what kinds of actions individuals would need to take for society to get its act together, and consider doing some of those actions.

Thanks for all you do.

I feel that changing the nature of the Maximum Impact Fund in this way should come with a renaming of the fund: it is no longer going all-out on expected value, so it is no longer "maximizing" expected "impact" as the name claims. And many donors have come to expect the MIF to be the go-to for high-EV donations, and will not notice this change.

Something like the 'Top Charities Fund' or 'High Impact Fund' would flag the fundamental change, and would be a bit less misleading.

You’re really sure that developing AGI is impossible

I don’t need to think this in order to think AI is not the top priority; I just need to think it’s hard enough that other risks dominate it. E.g. I might think biorisk has a 10% chance of ending everything each century, while risks from AI are at 5% this century and 10% every century after that. Then, if all else (such as tractability) is equal, I should work on biorisk.
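To make the arithmetic concrete, here's a minimal sketch comparing cumulative risk over time under those toy numbers (the per-century probabilities are the illustrative ones above, not real estimates):

```python
# Toy comparison of cumulative extinction risk, using the illustrative
# per-century probabilities from the comment above (not real estimates).

def cumulative_risk(per_century_probs):
    """P(catastrophe by the end of the horizon), treating centuries as independent."""
    survive = 1.0
    for p in per_century_probs:
        survive *= 1.0 - p
    return 1.0 - survive

for n in (1, 5, 10):
    bio = cumulative_risk([0.10] * n)                # 10% every century
    ai = cumulative_risk([0.05] + [0.10] * (n - 1))  # 5% now, 10% afterwards
    print(f"by century {n}: biorisk {bio:.1%} vs AI {ai:.1%}")
```

Biorisk comes out ahead at every horizon, because its head start this century is never cancelled out under these numbers.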
