predictable agent

7 karma · Joined Sep 2021

Comments (4)

Two more ideas:

  1. Create an extremely good video as an introduction to effective altruism. (It should be convincing and lead to action.)
    1. Maybe also create a very good video or article discussing objections to effective altruism (and why they may be questionable, if they are questionable).
  2. Create well-designed T-shirts with (funny) EA, longtermist or AI safety prints that I would love to walk around with in public, hoping that someone asks me about them. (I would prefer not to have "effective altruism" printed directly on the T-shirt. Maybe something in this direction, though I don't like the robot that much, because many people associate AGI with robots far too strongly, but it is still kind of good.)

Thanks, I think I will suggest some grants sometime in the next few days. :)

I agree that it is probably hard to create such a database in a way that makes it really useful and continuously used, and that it should perhaps be implemented by CEA.

(If CEA decides not to create something like that, it would still be interesting for people like me to see the suggestions, even if it is not for the purpose of task-people-matching. ^^)

And thanks for sharing the draft! I think it is helpful input because I had some similar ideas; I will look into it more thoroughly later.

(Note: Some of the projects I suggest below may already have been done. I haven't thoroughly researched that. If you know that something like what I suggest has already been done, please comment!)

Some ideas for bounties (or grants) for projects or tasks:

  1. An extremely good introduction to "Why should you care about AI safety?" for people who are not stupid but have no background in AI. (In my opinion preferably as a video, though a good article would also be nice.) (I'm thinking of a rather short introduction, like 10-20 minutes.)
  2. An extremely good explanation of "Why is AI safety so hard?" for people who have just read or watched the (future) extremely good introduction to "Why should you care about AI safety?". (For people who have little idea of what AI safety is actually about.) (It should be an easily understandable introduction to the main problems in AI safety.) (I was thinking of something like a 15-30 minute read or video, though a longer and more detailed version would probably be useful as well.)
  3. A project that tries to understand the main reasons why people reject EA after hearing about it (through a survey and explicit questioning of people).
  4. (A bit related to 3) A project that examines the question "To what degree is effective altruism innate?" and the related question "How many potential (highly, moderately or slightly) EA-engaged people are there in the world?". And perhaps also the related question "What life circumstances or other causes lead people to become effective altruists?"
  5. A study that examines what the best way to introduce EA is. (E.g. Is it better not to use the term "effective altruism"? Is The Drowning Child and the Expanding Circle a good introduction to EA ideas, or is it rather off-putting? For spreading longtermism, should one first recommend The Precipice or HPMoR (to spread rationality first)?) (Maybe make it something like a long-term study to which many people throughout the world can contribute.)
  6. Make a good estimate of the likelihood that an individual or a small group can significantly (though often indirectly) raise x-risk (for example by creating a bioengineered pandemic, triggering a nuclear war, triggering an economic crisis (e.g. through hacking attacks), triggering an AI weapons arms race, triggering a bad political movement, etc.).

I would also love to see funding for people who are just thinking about how the EA community could coordinate better, how the efficiency of research in EA-aligned causes can be increased, how EA should develop in the future, what possible good (mega-)projects are, and how to address EA's bottlenecks (e.g. how to integrate good leadership into EA).

About 1 & 2: I don't think this should be done by just anyone, but by one or more AI researchers who have an extremely good overview of the whole field of AI safety and are able to explain it well to people without prior knowledge. I think it should be by far the best introduction, something which is clearly the thing you would recommend to someone who is wondering what AI safety is about.

Those are all quite big and important tasks. I think it would be better to advertise those tasks, reach out to qualified people, and then fund interested people to do them, rather than creating bounties, though bounties could work as well.

Of course, you could also create a bounty like "for every $100k you raise for EA, you get $5k" or so. (Yes, just totally convince Elon Musk of EA and become a billionaire xD.) But I'm not sure if that would be a good idea, because there could be some downside risk to the image of EA fundraisers.

Thanks for pointing that out!

I think a disadvantage of bounties is that multiple people may end up doing the same thing, and depending on the task that could be quite good or a lot of wasted effort, so in most cases I would prefer grants.

Can you see the suggestions that were made to EA Funds somewhere?

It seems to me as if you still kind of need to specify who should get the funding when you suggest a grant. I think it would be very good if you could just submit a suggestion for a task that someone should do, and we had an overview of suggested tasks or projects that people could see and then quickly apply for funding to do. (Maybe you could fund a bounty or grant for someone who creates such a matching-people-and-funding-to-tasks system, or perhaps EA Funds should just integrate it.)

(Thinking even bigger, it would probably be nice to have a good overview of which ideas in AI safety have already been tried and which haven't been tried yet but seem promising. Though I'm not sure if that would really be helpful; you should probably ask AI safety researchers how to best improve coordination in AI safety research.)