703 karma · Joined Jul 2019 · Working (0-5 years) · Oxford, UK · charlottesiegmann.com/



I am an MIT Ph.D. student working on AI safety and AI governance.

I was a SERI MATS Fellow this summer with Evan Hubinger. I was a Predoctoral Research Fellow in Economics at the Global Priorities Institute and a Grants Consultant at Longview. I read Philosophy, Politics, and Economics at the University of Warwick, led the local EA group, and co-moderated our first fellowship. Previously, I interned at the European Parliament and the Future of Life Institute, working on the EU AI White Paper consultation. 


Topic Contributions

Thanks for writing this. It resonated with some of my feelings.

Thanks for writing this. It feels useful to have these diffusion numbers. However, in practice, I am much less interested in "how much longer did it take for these other models to be built?" than in "by how much did the release of GPT-3 move these projects forward in time?". An answer to the latter question seems to tell us more about what we can do about diffusion and whether publications have a counterfactual effect on timelines. What is your current best guess on the latter question? [Sorry if I missed this somewhere.]

Here are some reasons to downvote this poll and all the other polls (I have not yet done this). Some people feel very frightened, stressed, etc., and for them it is not yet the time to engage with these polls.

In addition, it feels wrong that justice is decided by a strange anonymous voting procedure, one that involves no real deliberation and does not include more people who are specialised in this problem. EA is not a democratic organisation (for good or bad reasons); ultimately, this is not how EVF makes decisions. Suppose someone did X (where X is clearly morally wrong), a community votes, and more than 50% think it is acceptable; that does not mean the behaviour is acceptable (nor even if 80% agree).

The EVF board hired an external group to look into this issue (see their announcement post). I don't understand why you don't first trust them (you can still disagree once they reach a conclusion).

I feel most confused about why the therapist is important here. The therapist's recommendations might be a necessary condition, but they are clearly not sufficient. Therapists are not trained in this (it might be that his therapist is an expert in dealing with such situations, but then I would expect he would have mentioned this).

Hi Robi, thanks for your response. Are you referring to endogenous growth models, where additional people have growth effects (they increase the future growth rate) rather than level effects as in the semi-endogenous growth model? Or are you referring to historical trends? I am personally not very convinced by either.

"they're another person all eight billion previous people can bounce ideas off of"

This seems to depend on whether people actually have more connections. Even if they do have more connections AND you think that research is driven by bouncing ideas off one another, you might still think that this positive effect is smaller than the negative effect of research duplication as the population becomes bigger. But I agree it is plausible that the relevant parameter in the semi-endogenous growth model, lambda, is greater than 1.
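For context, the idea production function in standard semi-endogenous growth models (Jones-style) is typically written roughly as follows; this is a common textbook formulation, not necessarily the exact model the comment refers to:

$$\dot{A} = \delta A^{\phi} L^{\lambda}$$

Here $A$ is the stock of ideas, $L$ is the number of researchers, $\phi$ captures how the existing stock of ideas affects current research productivity ("standing on shoulders" if $\phi > 0$), and $\lambda$ captures returns to scale in the number of researchers. A value $\lambda < 1$ reflects duplication of research effort ("stepping on toes"), while $\lambda > 1$ would mean each additional researcher raises idea output more than proportionally, for example via the ideas-bouncing channel discussed above.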

I also feel appalled. Thanks for sharing this.

I think scope insensitivity could be a form of risk aversion over the difference you make in the world ("difference-making"), or is at least related to it. I explain here why I think that risk aversion over the difference you make is irrational, even though risk aversion over states of the world is not.

Hi, thanks for writing this. As others have pointed out, I am a bit confused about how the conclusion (more diversification in EA careers, etc.) follows from the assumption (high uncertainty about cause prioritisation).

  1. You might think that we should be risk averse with respect to our difference-making, i.e. that the EA community should do some good in many possible worlds. See here for a summary post of mine collecting the arguments against the "risk averse difference-making" view. One might still justify increased diversification for instrumental reasons (e.g. the welcomingness of the community), but I don't think that's what you explicitly argue for.
  2. You might think that becoming more uncertain means we are more likely to change our minds about causes in the future. If we change our minds about priorities in, e.g., 2 or 10 years, it is really advantageous if X members of the community have already worked in the relevant cause area. Hence, we should spread out.
    1. However, I don't think this argument works. First, more uncertainty now might also mean more uncertainty later; hence it is unclear that I should conclude we are more likely to change our minds.
    2. Secondly, if you think that we can resolve that uncertainty and update in the future, then I think this is a reason for people to work as cause prioritisation researchers and not a reason to spread out among more cause areas.

Thanks for your comment. I believe the things are fixed now.
