All of Risto Uuk's Comments + Replies

Thank you for the questions. Regarding emotions-based advertisement, you might find our recent EURACTIV (a top EU policy media network) op-ed about AI manipulation relevant and interesting: The EU needs to protect (more) against AI manipulation. In it, we invite EU policymakers to expand the definition of manipulation and also consider societal harms from manipulation in addition to individual psychological and physical harms. And here's a bit longer version of that same op-ed. 

1
brb243
2y
Thank you! Yes, it would be so great if all manipulative techniques were banned. Beyond targeting people in moments of vulnerability, I would also recognize the following as techniques that should be banned, or regulated and explained alongside the ad:
  1. using negative, often fear- and/or shame-based, biases and imagery to assume authority,
  2. presenting unapproachable images that should (thus) portray authority,[1]
  3. physical and body shaming,
  4. using sexual appeal in non-sexual contexts, especially when it can be assumed that the viewer is not interested in such appeal,
  5. allusions to physical/personal space intrusion, especially when the advertisement assumes or motivates the assumption of the viewer's vulnerability,
  6. hierarchies in entitlement to the attention of persons who are not looking to share it, based on the reflection of commercial models,
  7. manipulative use of statistics and graphs,
  8. use of contradictory images and text that motivate decreased enjoyment of close ones,
  9. generally demanding attention when viewers are not interested in giving it,
  10. evoking other negative emotions, such as despair, guilt, fear, shame, and hatred (including self-hatred), decreasing confidence in one's own worthiness of enjoyment and respect, and motivating the feeling of limited enjoyment of one's situation,
  11. shopping processes that can be understood as betrayal or abuse,
  12. normalization or glorification of throwing up, and allusions to such in unrelated contexts,
  13. onomatopoeic expressions that appeal to impulsive behavior, and other negatively manipulative techniques.
From the AI Act, it may be apparent that the EU seeks to do the minimum in commercial regulations so as not to lose competitiveness in this area (or perhaps because resolving this issue seems somewhat challenging and would require decision-makers to admit being subject to potentially suboptimal advertisements), and to focus instead on updating existing systems, such as those related to administration in vario

Thank you, these are some really big questions! Most of them are beyond what we work on, so I'm happy to leave these to other people in this community and have them guide our own work. For example, the Centre for Long-Term Resilience published the Future Proof report in which they refer to a survey where the median prediction of scientists is that general human-level intelligence will be reached around 35 years from now. 

I'll try to answer the last question about where our opinions might differ. Many academics and policymakers in the EU probably still... (read more)

3
aogara
2y
Thank you for the quick reply! Totally understand the preference to focus on FLI's work and areas of specialty. I've been a bit concerned about too much deference to a perceived consensus of experts on AI timelines among EAs, and have been trying to form my own inside view of these arguments. If anybody has thoughts on the questions above, I'd love to hear them! Right, this sounds like a very important viewpoint for FLI to bring to the table. Policymaking seems like it's often biased towards short term goals at the expense of bigger long run trends.  Have you found enthusiasm for collaboration from people focused on bias, discrimination, fairness, and other alignment problems in currently deployed AI systems? That community seems like a natural ally for the longtermist AI safety community, and I'd be very interested to learn about any work on bridging the gap between the two agendas. 

Thank you, a lot of great questions. In response to question (3), some of our work focuses on EU member states as well. Because we are a small team, our ability to cover many member states is limited, but hopefully, with the new hire we can do a lot more on this front as well. If you know anybody suitable, please let us know. For example, we have engaged with Sweden, Estonia, Belgium, Netherlands, France, and a few other countries. Right now, the Presidency of the Council of the EU is held by France, next up are Czechia and Sweden, so work at the member state level in these countries is definitely important. 

Regarding your 2nd question, I think it is an important argument and it's good that some people are thinking through both the arguments in favor and against working on EU AI governance. That said, there are so many ways for EU AI governance to play a major role regardless of whether it is an AI superpower or not. Some of these are mentioned in the post that you referred to, like the Brussels Effect as well as excellent opportunities for policy work right now. Some other ideas are mentioned in the comments under the post about EU not being an AI superpower,... (read more)

Thank you for the questions. I think that the biggest bottleneck right now is that very few people work on the issues we are interested in (listed here). We are trying to contribute to this by hiring a new person, but the problems are vast and there's a lot more room for additional people. Another issue is lack of policy research that would consider the longer-term implications but would at the same time be very practical. We are happy that in addition to the Future of Life Institute, a few other organizations such as Centre for the Governance of AI, Centr... (read more)

8
Risto Uuk
2y
Regarding your 2nd question, I think it is an important argument and it's good that some people are thinking through both the arguments in favor and against working on EU AI governance. That said, there are so many ways for EU AI governance to play a major role regardless of whether it is an AI superpower or not. Some of these are mentioned in the post that you referred to, like the Brussels Effect as well as excellent opportunities for policy work right now. Some other ideas are mentioned in the comments under the post about EU not being an AI superpower, like the importance of experimenting in the EU as well as its role in the semiconductor supply chain. For me personally, I am very well-placed to work on EU AI governance compared to this type of work in the US, China, or elsewhere in the world. Even if in absolute terms other regions were more important, considering how neglected this space is, I think EU matters a lot. And many other Europeans would be much better placed to work on this rather than, say, try to become Americans.  

If anyone reading this post thinks that the arguments in favor outweigh the arguments against working on EU AI governance, then consider applying for the EU Policy Analyst role that we are hiring for at the Future of Life Institute: https://futureoflife.org/2022/01/31/eu-policy-analyst/. If you have any questions about the role, you can participate in the AMA we are running: https://forum.effectivealtruism.org/posts/j5xhPbj7ywdv6aEJc/ama-future-of-life-institute-s-eu-team.

Thank you for writing this summary!

I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public but also for people interested in the regulation more closely. 

Yeah, I feel that too. My daughter is just 1 year and 9 months. We are constantly high-fiving and fist-pumping.

Answer by Risto Uuk
Mar 15, 2021

Because (i) my wife wanted to have a child and I thought it would strengthen our relationship, (ii) I assumed my child was likely to become a happy person and possibly an EA, and (iii) I'd potentially have a very close friend for life.

My little dude is only 2 but one of my best mates. Have never had more laughs than as a dad. But, never had more tears either. It's turbulent, but the highs are high.

Existential risks are not something they have worked on before, so my project is a new addition to their portfolio. I didn't mention this but I intend to have a section for other risks depending on space. The reason climate change gets prioritized in the project is that arguably the EU has more of a role to play in climate change initiatives compared to, say, nuclear risks. 

3
MichaelA
3y
Makes sense!  I imagine having climate change feature prominently could also help get buy-in for the whole project, since lots of people already care about climate change and it could be highlighted to them that the same basic reasoning arguably suggests they should care about other existential risks as well. FWIW, I'd guess that the EU could do quite a bit on nuclear risks. But I haven't thought about that question specifically very much yet, and I'd agree that they can tackle a larger fraction of the issue of climate change. 

Thanks for this database! I'm currently working on a project for the Foresight Centre (a think-tank at the Estonian parliament) about existential risks and the EU's role in reducing them. I cover risks from AI, engineered pandemics, and climate change. For each risk, I discuss possible scenarios, probabilities, and the EU's role. I've found a couple of sources from your database on some of these risks that I hadn't seen before. 

3
MichaelA
3y
That sounds like a really valuable project! Glad to hear this database has been helpful in that. Do you mean that those are the only existential risks the Foresight Centre's/Estonian Parliament's work will cover (rather than just that those are the ones you're covering personally, or the ones that are being covered at the moment)? If so, I'd be interested to hear a bit about why? (Only if you're at liberty to discuss that publicly, of course!) I ask because, while I do think that AI and engineered pandemics are probably the two biggest existential risks, I also think that other risks are noteworthy and probably roughly on par with climate change. I have in mind in particular "unforeseen" risks/technologies, nuclear weapons, nanotechnology, stable and global authoritarianism, and natural pandemics. I'd guess that it'd be hard to get buy-in for discussing nanotechnology, authoritarianism, and maybe unforeseen risks/technologies in the sort of report you're writing, but not too hard to get buy-in for discussing nuclear weapons and natural pandemics.

The same is the case with the effective altruism course at the LSE titled Effective Philanthropy: Ethics and Evidence. The reason for that was that the teacher Luc Bovens moved to work for another institution. I don't know about UCL.

It would also be more informative to assess risks of death from COVID-19. 'Micromorts' normally stand for a one-in-a-million chance of death, because the word combines micro and mortality. If 1000 μCoV were a thousand-in-a-million chance of death, then engaging in activities with such a risk would be quite reckless indeed. That would be roughly comparable to climbing quite high mountains and doing a couple of base jumps.

I have calculated COVID-19 risks for myself in the context of Estonia where I am currently. My numbers right now are abo... (read more)

4
Misha_Yagudin
4y
I believe it is "borderline reckless" because 1000 μCoV per event = 0.1% CoV per event, and their default risk tolerance is 1% per year [another available option is 0.1% per year]. So you can do such events about once per month [or once per year] before exhausting your tolerance. Another question is whether a 1% or 0.1% risk tolerance is reasonable. It might be for some age/health cohorts, or for someone really worried/confused about long-term effects [such as chronic fatigue from SARS or some unknown unknowns]. On the other hand, while being cautious, one shouldn't neglect gradual negative effects on mental health and so on.
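The budget arithmetic in that reply can be checked in a few lines of Python. This is just a sketch of the comment's numbers (1000 microCOVIDs per event; an annual risk tolerance of 1% or 0.1%); the helper function is hypothetical, not part of any microCOVID tool:

```python
# Back-of-the-envelope check of the risk-budget arithmetic above.
# Assumed inputs (from the comment): 1000 microCOVIDs (uCoV) per event,
# and an annual risk tolerance of either 1% or 0.1%.

MICRO = 1e-6  # one microCOVID = a one-in-a-million chance of infection

def events_per_year(annual_tolerance: float,
                    microcovids_per_event: float = 1000) -> float:
    """Number of such events that fit within the annual risk budget."""
    per_event_risk = microcovids_per_event * MICRO  # 1000 uCoV = 0.1%
    return annual_tolerance / per_event_risk

print(events_per_year(0.01))   # 1% budget: ~10 events/year, roughly monthly
print(events_per_year(0.001))  # 0.1% budget: ~1 event/year
```

This matches the reply's conclusion: roughly one such event per month on a 1% budget, or one per year on a 0.1% budget.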

"You should not do a PhD just so you can do something else later. Only do a PhD if this is something you would like to do, in itself."

Why do you think this is the case? For example, I have noticed based on my search that nearly 60% of people in research roles at think-tanks in Europe have PhDs, and that proportion is greater for senior research roles and more academic think-tanks. This does not account for the unmeasurable benefits of a PhD, such as being taken more seriously in policy discussions. Isn't it possible that 4-6 years of PhD work gives you more impressive career capital than the same amount of experience progressing from more junior roles to slightly more senior ones?

7
Linda Linsefors
4y
So almost half of them don't. If you want a job at one of those think tanks, I would strongly recommend that you just go straight for that. If you want to do research, then do the research you want to do. If the research you want to do mainly happens at a company or think tank, but not really in academia, go for the company or think tank. There are other ways of getting a PhD degree that do not involve enrolling in a PhD program. In many countries, the only thing that actually matters for getting the degree is to write and defend a PhD thesis, which should contain original research done by you. For example, if you just keep publishing in academic journals until your body of work is about the same as can be expected during a PhD (or maybe some more, to be on the safe side), you can just put it all in a book, approach a university, and ask to defend your work. This may differ between countries, but universities mostly accept foreign students, so if you can't defend an independent thesis at home, go somewhere else.

This post was actually published in 2018 for the first time, but for some reason I wasn't able to share the link with some people as it showed up as a draft. I resubmitted it and it has received some interest from the community again.

I think that the longer term evidence right now indicates that the impact of this was lower than the short-term evidence made me anticipate. I expected to have several highly engaged new members in the EA community longer term, but currently it appears that these people are only weakly involved with effective altruism. H... (read more)

Why did you decide to move from Global Priorities Institute to 80,000 Hours?

A number of factors, but the biggest was suitedness to the role. I tend to get a lot of energy from talking to other people. My role at GPI was very independent - at the time I was the only operations person there, and academics tend to work fairly individually on their research. By comparison, my current role involves not just talking to people I advise, but the team is also more collaborative in general (for example, it makes sense for the research team and advising team to collaborate quite a bit because advising calls are both how we get out quite a bi... (read more)

Estonia actually has two local groups, one in Tallinn and the other in Tartu.

8
Neil_Dullaghan
4y
Thanks Risto_Uuk, The survey results only reflect people who answered the survey, and there was only 1 relevant entry for Estonia. The privacy policy covering the survey means we cannot share the names of which local groups responded.

Do you think there's more useful research to be done on this topic? Are there any specific questions you think researchers haven't yet answered sufficiently? What are the gaps in the EA literature on this?

4
MichaelPlant
4y
Hello Risto, thanks for this. That's a good question. I think it partially depends on whether you agree with the above analysis. If you think it's correct that, when we drill down into it, evaluating problems (aka 'causes') by S, N, and T is just equivalent to evaluating the cost-effectiveness of particular solutions (aka 'interventions') to those problems, then that settles the mystery of what the difference really is between 'cause prioritisation' and 'intervention evaluation' - in short, they are the same thing and we were confused if we thought otherwise. However, if someone thought there was a difference, it would be useful to hear what it is.

The further question, if cause prioritisation is just the business of assessing particular solutions to problems, is: what are the best ways to go about picking which particular solutions to assess first? Do we just pick them at random? Is there some systematic approach we can use instead? If so, what is it? Previously, we thought we had a two-step method: 1) do cause prioritisation, 2) do intervention evaluation. If they are the same, then we don't seem to have much of a method to use, which feels pretty dissatisfying.

FWIW, I feel inclined towards what I call the 'no shortcuts' approach to cause prioritisation: if you want to know how to do the most good, there isn't a 'quick and dirty' way to tell what the biggest problems are, as it were, from 30,000 ft. You've just got to get stuck in and (intuitively) estimate particular different things you could do. I'm not confident that we can really assess things 'at the problem level' without looking at solutions, or that we can appeal to e.g. scale or neglectedness by themselves and expect that to work. A problem can be large and neglected because it's intractable, so we can only make progress on cost-effectiveness by getting 'into the weeds', looking at particular things we can do, and evaluating them.

It actually might be more complicated than what you say here, alexherwix. If a research analyst role at the Open Philanthropy Project receives 800+ job applications, then you might reasonably think that it's better for you to continue building a local community even if you were a great candidate for that option.

In addition, for the reasons that you mention, every possible local community builder might be constantly looking for new job options in the EA community making someone who doesn't do that a highly promising candidate. Furthermore, being ... (read more)

1
alexherwix
5y
yeah, you could make the argument that your counterfactual impact in local community building might be higher than working at EA org X... I didn't (mean to) propose anything to contradict that assessment, and I agree given the right circumstances. I just meant to mention that people who could reasonably expect to work at EA org X will likely do so, as it IS a more prestigious thing to do than community building at the moment and will likely continue to be in the near future. I don't necessarily like this situation; I am just calling out how I see it. I very much agree that community building is a worthwhile opportunity (that's why I am engaging in it myself) and I never said it's easy... it is just less specialized than some other things one would consider to be high-value. I think that's what you allude to in your third paragraph. To argue a little bit more FOR community building, I would propose that it's a very useful general skill set to have for any job. It's a lot about project and people management, which is quite useful regardless of the specific field you want to get into. Thus, I would be quite happy to see a more systematic approach to and support of community building than we generally see at the moment (although that might just be bias from my personal experiences in Germany so far).

This is slightly relevant, in a recent 80,000 Hours' blog post they suggest the following for people applying for EA jobs:

We generally encourage people to take an optimistic attitude to their job search and apply for roles they don’t expect to get. Four reasons for this are that, i) the upside of getting hired is typically many times larger than the cost of a job application process itself, ii) many people systematically underestimate themselves, iii) there’s a lot of randomness in these processes, which gives you a chance even if you’re not truly th
... (read more)

You can decide it by asking who wants to be the leader of a particular activity (the way that your group did) as well as inquire what resources and capital people have available to successfully lead that activity. Sometimes people have the motivation to lead activities, but they don't actually have the necessary resources to do it successfully yet.

Agreed on the failure-mode thinking. I guess if you only take the best-case scenario into consideration, then you forget to assess the risks involved. On the other hand, I'm not sure it should be included in this initial brainstorming session or later when a possible activity is selected as a top candidate.

1
SebastianSchmidt
5y
One shouldn't include failure-mode thinking in the brainstorming part. However, while defining the project (prior to voting), it can be useful to talk about failure modes. E.g., prior to voting on our project on how to offset one's climate impact, we specified that we should be careful about letting it develop into a project which also focused on how people can offset the animal suffering they induce.

So here are some of the main takeaways from this for me:

  • Involve the main volunteers/group members in the strategy development process.
  • Use the strategy template made available by CEA.
  • Share EA Denmark's list of project ideas with other community builders.

We recently had a several-hour strategy meeting. I can attest that when community members participate in the task of developing a strategy, they understand better what's going on, and they feel more motivated as they are actually responsible for the vision now. And they can come up with wonder... (read more)

-1
nataleynisso
5y
you forget to assess the risks involved. On the other hand, I'm not sure it should be included in this initial brainstorming session or later when a possible activity is selected as a top candidate.
1
SebastianSchmidt
5y
That seems like some good takeaways. However, I'd expect that other groups (with more resources) can come up with more impactful projects than those you'll find in the project ideas. As for your three-dimensional tool: How do you determine who the leader of a given activity should be? Also, I think it could be useful to include worst-case scenario/failure-mode thinking.

Great overview as always. I think Open Philanthropy Project's Funding for Study and Training Related to AI Policy Careers should be up here as well:

This program aims to provide flexible support for individuals who want to pursue or explore careers in AI policy (in industry, government, think tanks, or academia) for the purpose of positively impacting eventual societal outcomes from “transformative AI,” by which we mean potential future AI that precipitates a transition at least as significant as the industrial revolution ...

I think this accusation is uncalled for. There are more statistics in the report, which I linked to, including things like citation impact. But a comprehensive overview of European AI research is, of course, very welcome.

2
stefan.torges
5y
Maybe I misunderstood. What's the point of highlighting only this statistic? It does not seem very representative of the report you're linking to or the overall claim this statistic might support if looked at in isolation. EDIT: I didn't mean to imply intent on your part. Apologies for the unclear language. Edited original comment as well.

For what it's worth, according to Artificial Intelligence Index published in 2018:

Europe has consistently been the largest publisher of AI papers — 28% of AI papers on Scopus in 2017 originated in Europe. Meanwhile, the number of papers published in China increased 150% between 2007 and 2017. This is despite the spike and drop in Chinese papers around 2008.

(I'd post the graphs here, but I don't think images can be inserted into comments.)

My lived experience is that most of the papers I care about (even excluding safety-related papers) come from the US / UK. There are lots of reasons that both of these could be true, but for the sake of improving AGI-related governance, I think my lived experience is a much better measure of the thing we actually care about (which is something like "which region does good AGI-related thinking").

This strikes me as an isolated example of Europe leading on one metric. I plan to write something more comprehensive, but I think just seeing this statistic could create a wrong impression for some people.

(edited to remove accusatory tone)

You can post an image using standard markdown syntax:

![](link to the image)

For example, to insert the above image, I wrote:
![](https://nunosempere.github.io/ea/AI-Europe.png)
1
David_Moss
5y
I think they can (or, at least, it used to be possible to do so). I've done so here and here for example.

Here's an article by 80,000 Hours literally titled "Advice for undergraduates". It does not answer all of your questions, but hopefully it helps a little bit.

2
Jemma
5y
On a related note, Cal Newport's 'How to Win at College' is great, though the advice might be quite similar to that of the 80k guide. I read Newport's book prior to transferring universities and I found it to be very useful.
1
ZacharyRudolph
5y
Thank you, I've actually read that article before. I asked here because there seem to be all kinds of factors which would confound the usefulness of the advice there, e.g. it might be tailored to the average reader/their ideal reader, limitations on what they want to publically advise. I figured responses here might be less fit to the curve and thus more useful since I'm not confident of being on that curve.

William MacAskill says the following in a chapter in The Palgrave Handbook of Philosophy and Public Policy:

As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good.

But then he cont... (read more)

4
kbog
5y
Normative commitments aren't sufficient to show that something is an ideology. See my comment. Arguably 'science-aligned' is methodological instead but it's very vague and personally I would not include it as part of the definition of EA.

Open Philanthropy Project's link doesn't work.

1
arikr
5y
Thanks, fixed. @AaronGertler it seems there's a bug when pasting a link in between parentheses ().

Thank you for writing this! This is a useful overview of active groups for me, because I intend to move to London in September to study at LSE and now need to think about ways to engage with the community there.

In addition, what do you think should be updated in Doing Good Better?

2
Evan_Gaensbauer
5y
I was thinking more that a new guidebook to EA that is more satisfying to more people in the EA community would be better, since it seems like a lot of people were dissatisfied with the EA Handbook 2.0 release from last year.
1
Evan_Gaensbauer
5y
Well, after reading the article on climate change, I think that's something that should be updated in the book. It also didn't mention AI alignment much, so there could be more on that, though it doesn't have to be a lot more.

Your link referring to bdixon and climate change leads to Joey's post "Problems with EA representativeness and how to solve it". Can you share the post that discusses how Doing Good Better appears to underrate the degree of warming of climate change?

3
Evan_Gaensbauer
5y
Sorry, here is the link. I will also update the OP. https://forum.effectivealtruism.org/posts/BwDAN9pGbmCYZGbgf/does-climate-change-deserve-more-attention-within-ea

I found the part about philosophers being well-suited to many aspects of EA research especially interesting. You said this:

Contrary to popular stereotypes, philosophers often excel at quantitative thinking. Many philosophy PhDs have an undergraduate background in math or science. For subfields of philosophy like formal epistemology, population ethics, experimental philosophy, decision theory, philosophy of science, and, of course, logic, a strong command of quantitative skills is essential. Even beyond these subfields, quantitative acumen is prized. In an
... (read more)
2
Jason Schukraft
5y
Unfortunately, I don't have any hard data to back up that claim, just extensive anecdotal evidence from roughly seven years interacting with professional philosophers and philosophy graduate students. And my anecdotal evidence skews heavily toward the US, so I'm not in a position to even hazard a guess about the prevalence of philosophy PhDs with STEM backgrounds in continental Europe. Sorry!

Can you expand on 3a and 3b? I guess 3b justifies 3a, but is that all? Watching and discussing a video with your local group appears to me to be more valuable than asking one question at a talk, but I may be missing some important benefits that you are aware of. I would also add that these are not mutually exclusive. I have heard that some people struggle to set aside time to watch talks on their own; that is also something to consider.

4
beth​
5y
3b justifies 3a, as well as that I have a much easier time paying attention to the talk. In video, there is too much temptation to play at 1.5x speed and aim for an approximate understanding. Though I guess watching the video together with other people also helps. As for 3b, in my experience asking questions adds a lot of value, both for yourself as well as for other audience members. The fact that you have a question is a strong indication that the question is good and that other people are wondering the same thing.

You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications?

Hmm, I don't think I am super sure what a good answer to this would look like. Here are some common reasons for why I think a grant was not a good idea to recommend:

  • The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
  • The mainline outcome of the grant was good, but there were potential negative consequences that the applicant did not consider or pro
... (read more)
Realising that attendance and events are just part of a community, and potentially not the most important part

Agreed. Research and study groups, for example, seem to be a lot more useful than events. First and foremost, participants commit to longer-term attendance in advance, so you don't need to try to persuade them to participate every time. I dislike having to personally invite people to come to events. I assume that they don't care about EA enough if they don't come at a mere FB invitation.

Regarding attendance, we just recently organize... (read more)

1
DavidNash
5y
Online content is generally measured by the number of people who open or click on an email (but bearing in mind that, long term, getting more clicks relies on your community trusting you to have content they want to click on rather than clickbait). Occasionally people also send replies saying they value newsletters, and when I ask people in person what they value, that sometimes gets mentioned.

If you're a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

What do you think about similar type of work within the European Union? Could it potentially be a high-impact career path for those who are not Americans?

2
Niel_Bowerman
5y
I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world's very top AI companies are based in the EU (except DeepMind, which will soon be outside the EU after Brexit). Nonetheless, I think it would be very helpful to have more AI policy expertise within an EU context, and if you can contribute to that it could be very valuable. It's worth mentioning that for UK citizens it might be better to focus on British AI policy.

This post increased my interest in visiting the Boston area. Unfortunately, I cannot come to the EAGx this year, but perhaps another time. I'm quite surprised that you'd have the issue of brain drain as the area seems to be a very impressive place with top universities, lots of people interested in EA, and even a few great EA-aligned organizations. Do you have other ideas besides a full-time paid community builder to improve that?

Nice idea. I wrote my bio in the third person like you did, even though on my website I have it in the first person: https://ristouuk.com. Usually I feel weird about the third-person narrative when I'm the one talking about myself, but it feels right for the forum.

As an application of this model, the Global Priorities Project estimates that research into the neglected tropical diseases with the highest global DALY burden (diarrheal diseases) could be 6x more cost-effective, in terms of DALYs per dollar, than the 80,000 Hours recommended top charities.

What are 80,000 Hours' recommended top charities? I think you mean some other organization here.

It would be nice if someone updated it regularly and added a note at the top of the page saying when it was last updated. For example, according to Julia Wise there were 3,855 Giving What We Can members at the beginning of 2019, whereas the figure here is an outdated 1,800+.

Let’s face it. Long-termism is not very intuitively compelling to most people when they first hear of it. Not only do you have to think in very consequentialist terms, you also have to be extremely committed to acting and prioritizing on the basis of fairly abstract philosophical arguments. In my view, that’s just not very appealing - sometimes even off-putting - if you’ve never even thought in terms of cost-effectiveness or total-view consequentialism before.

I agree. Because of this, the 2nd edition of the EA handbook doesn't seem appealing at all as a

... (read more)
4
jtm
5y
Hey! Obviously, the list you got is a great place to start and I'm sure your project will be awesome. One thing the list somewhat lacks is focused discussion of one cause area at a time, which we had for existential risks, animal welfare, and global health and development. If you want to make room for deeper dives into each of these topics, it might be a great idea to do a workshop at the beginning of the stipend where you cover a bunch of the essentials (expected value theory, neglectedness, counterfactual thinking), so you don't have to spend whole sessions on them. I would perhaps also recommend picking a different topic than the chapter on conscious consumerism. While I think that MacAskill has a really great point, I think there are more important topics to cover, and you risk turning off people who already care deeply about conscious consumerism. Let me know if you have other questions :)

Thank you for writing this summary!

  • Altruism: Passionate about helping others
  • Effectiveness: Ambitious in their altruism, with a drive to do as much good as they can. Potential to be aligned with the central tenets of EA.
  • Potential: Excited to dedicate their career to doing good or to donate a significant portion of their income to charity
  • Open-mindedness: Open-minded and flexible, eager to update their beliefs in response to persuasive evidence
  • Enthusiasm: Willing and able to commit ~3-4 hours per week
  • Fit: How good a fit are the
... (read more)
3
jtm
5y
Thanks so much, Risto_Uuk, I really appreciate it. I agree that admissions are quite difficult and that ultimately we relied on intuition to some extent as well, but I do believe that putting the criteria in explicit terms helps structure the process a bit. Another thing that helps is having multiple people go through the list of candidates together. :)

I subscribe to CCC's newsletter and these are the latest stories in the newsletters:

  • The climate debate needs less hyperbole and more rationality
  • The media got it wrong on the new US climate report
  • Don't panic over U.N. climate change report
  • Don't blame global warming for hurricane damages
  • The Paris climate treaty fails to fight global warming

I just wanted to provide more context on what they are focusing on.

If you were to organize an effective altruism course around William MacAskill's book Doing Good Better, what additional readings would you give to students to fill in the holes of the book?

This might be slightly off-topic, but you may have some insight into it. If a donor donates money to, for example, global health, they can find pretty concrete numbers about impact based on GiveWell's estimates or on information from specific organizations such as AMF. How can someone donating money to Meta justify those donations quantitatively and via concrete indicators?

1. I prefer "we".

2. I'm not sure what kind of references you are supposed to add here. Should they be accessible to everyone, or can books, etc. be included as well? If the latter, then I'd add Daniel Kahneman's book Thinking, Fast and Slow to the list. There are good sections on these concepts in the book (e.g., Kindle version location 4220).

3. To me, it seems that the definitions of "inside view" and "outside view" are not clear enough, whereas the examples are very good. https://www.hybridforecasting.com/ had n... (read more)

1
Aaron Gertler
5y
Thanks for this feedback, especially the suggested definitions. Your thoughts will definitely be incorporated into the final version. (Also, we should probably be linking to Thinking, Fast and Slow all over the Concepts site -- that was a useful reminder!)

You didn't mention anything about (a) the risk of becoming less altruistic in the future, (b) increasing your motivation to learn more about effective giving by giving now, or (c) supporting the development of a culture of effective giving. How much the giver learns over time isn't the only consideration. I'm referring to this forum post, which lists these other considerations: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/.

2
trammell
5y
That's right: I agree that there are many other considerations one must weigh in deciding when to give. In this post, I only meant to discuss the RPTP consideration, which I hadn't seen spelled out explicitly elsewhere. But thanks for pointing out that this was unclear. I've weakened the last sentence to emphasize the limited scope of this post.

I feel that the book contains too much fluff, and even these commandments, despite appearing useful, lack the specificity needed to be actionable. Does anyone have other book recommendations or guidelines for improving one's forecasting and probabilistic thinking? At the end of the day, it's important to actually practice forecasting and thinking probabilistically, but specific guidance on how to do that would be useful. E.g., how do you actually distinguish between 40/60 and 45/55, or even 43/57, probabilities?
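One concrete way to practice, in the spirit of Tetlock's book, is to log your probability estimates and score them against outcomes over time; a standard metric for this is the Brier score (mean squared error between your stated probability and what actually happened, where lower is better). A minimal sketch, with made-up forecasts for illustration:

```python
# Hypothetical forecast log: (probability you assigned, outcome: 1 = happened, 0 = didn't)
forecasts = [
    (0.60, 1),
    (0.45, 0),
    (0.43, 1),
]

def brier_score(forecasts):
    """Mean squared error between stated probabilities and actual outcomes.

    0.0 is perfect; always guessing 0.5 yields 0.25, so scores below that
    indicate your probabilities carry real information.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

print(round(brier_score(forecasts), 3))  # prints 0.229
```

Tracked over many forecasts, this is what lets you tell whether distinguishing 43/57 from 40/60 actually improves your accuracy, or whether the extra precision is noise.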

3
Evan_Gaensbauer
6y
There are a couple more sections of Superforecasting I think are particularly important, which I intend to reproduce on the EA Forum. Someone told me that publicly reproducing large chunks of the book on a blog might not go over well with Tetlock or his publishers. If you send me a PM with questions about the parts of the book you're most curious about, I can fill you in privately.

Thanks for putting it on EA Groups Resource Map! I think it'd be better if the link was to the Google Docs document rather than to this forum post, because we might edit it in the future.

If someone can't apply right now due to other commitments, do you expect there to be new roles for generalist research analysts next year as well? What are the best ways one could make oneself a better candidate meanwhile?

4
Holden Karnofsky
6y
There will probably be similar roles in the future, though I can't guarantee that. To become a better candidate, one can accomplish objectively impressive things (especially if they're relevant to effective altruism); create public content that gives a sense for how they think (e.g., a blog); or get to know people in the effective altruism community to increase the odds that one gets a positive & meaningful referral.
Load more