I am an MIT Ph.D. student working on AI safety and AI governance.
I was a SERI MATS Fellow this summer with Evan Hubinger. I was a Predoctoral Research Fellow in Economics at the Global Priorities Institute and a Grants Consultant at Longview. I read Philosophy, Politics, and Economics at the University of Warwick, led the local EA group, and co-moderated our first fellowship. Previously, I interned at the European Parliament and the Future of Life Institute, working on the EU AI White Paper consultation.
Thanks for writing this. It feels useful to have these diffusion numbers. However, in practice, I am much less interested in "how much longer did it take for these other models to be built?" than in "by how much did the release of GPT-3 move these projects forward in time?". An answer to the latter question tells us more about what we can do about diffusion and whether publications have a counterfactual effect on timelines. What is your current best guess on the latter question? (Sorry if I missed this somewhere.)
Here are some reasons to downvote this poll and all the other polls (I have not yet done this). Some people feel very frightened, stressed, etc., and for them it is not yet the time to engage with these polls.
In addition, it feels wrong for a question of justice to be settled by an anonymous voting procedure that involves little deliberation and does not include more people who are specialised in this problem. EA is not a democratic organisation (for good or bad reasons); ultimately, this is not how EVF makes decisions. Suppose someone did X, where X is clearly morally wrong, the community votes, and more than 50% (or even 80%) think it is acceptable; that would not make the behaviour acceptable.
The EVF board hired an external group to look into this issue (see their announcement post). I don't understand why you don't first trust that process (you can still disagree with their conclusions).
I feel most confused about why the therapist is important here. The therapist's recommendation might be a necessary condition, but it is clearly not sufficient. Therapists are not trained in this (it might be that his therapist is an expert in dealing with such situations, but then I would expect he would have mentioned it).
Hi Robi, thanks for your response. Are you referring to endogenous growth models, where additional people have growth effects (they increase the future growth rate) rather than level effects as in the semi-endogenous growth model? Or are you referring to historical trends? I am personally not very convinced by either.
"they're another person all eight billion previous people can bounce ideas off of"
This seems to depend on whether people actually have more connections. Even if they do have more connections AND you think that research is driven by bouncing ideas off other people, you might still think that this positive effect is smaller than the negative effect of research duplication as the population grows. But I agree it is plausible that the relevant parameter in the semi-endogenous growth model, lambda, is greater than 1.
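For concreteness, the kind of idea production function I have in mind is the standard Jones-style semi-endogenous one (my notation; the model you have in mind may be parametrised differently):

$$\dot{A} = \delta \, L_A^{\lambda} \, A^{\phi},$$

where $A$ is the stock of ideas, $L_A$ the number of researchers, $\phi$ the "standing on shoulders" effect, and $\lambda$ the returns to adding researchers at a point in time. The usual calibration has $\lambda \le 1$ (duplication / stepping on toes); $\lambda > 1$ would correspond to the "more people to bounce ideas off" effect dominating duplication.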
I think scope insensitivity could be a form of risk aversion over the difference you make in the world (i.e., difference-making); at the least, the two are related. I explain here why I think that risk aversion over the difference you make is irrational, even though risk aversion over states of the world is not.
Hi, thanks for writing this. As others have pointed out, I am a bit confused about how the conclusion (more diversification in EA careers, etc.) follows from the assumption (high uncertainty about cause prioritisation).
Thanks for writing this. It resonated with some of my feelings.