Lizka

Content Specialist @ Centre for Effective Altruism
Working (0-5 years experience)
5838 karma · Joined Nov 2019

Bio

I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorite posts of my own:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences
5

Forum Digest Classics
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review
How to use the Forum

Comments
187

Topic Contributions
131

Thanks for the pushback. I agree that a linear model will be importantly wrong, although if you approximate the impact from the conference using the number of connections people report and assume that stays roughly the same, it doesn't seem wild as a first pass. (Please let me know if you disagree!) 

[Half-formed thoughts below.]

On the other hand, "10-20% more valuable" seems very off to me, especially in this case, given that we were not "lowering the bar" for the second group of attendees. Setting this case aside, I can imagine a world in which someone is very confident in their ability to admit the people who will benefit the most from a conference (and the people who would be most useful for them to meet with), and in this world, you might be able to get 90% of the value with 50% of the size — but I don't really think we're in this world (especially in terms of identifying people who will benefit most from the event). 

I'm not really sure how well people self-sort at conferences, which was a big uncertainty for me when I was thinking about these things more. I do think people will often identify (often with help) some of the people with whom it would be most useful to meet. If people are good at self-sorting (e.g. searching through Swapcard and finding the most promising 10-15 meetings), and if the most useful meetings across the whole conference aren't somehow concentrated on a small number of nodes, then admitting double the people will likely lead to more than double the impact.[1] If people are not good at self-sorting, though, it seems more likely that we'd get closer to a straightforward doubling. (I'm fairly confident that people are better than random, though.)

  1. ^

     It does seem possible that there are some "nodes" in the network — at a very bad first pass, you could imagine that everyone's most valuable meetings are with the speakers. The speakers each meet with lots of people (say, they have lots of time and don't get tired) and would be at the conference in any world (doubling or not). Then the addition of 500 extra people doesn't significantly improve the set of possible meetings for the 500 first attendees, although 500 extra people get to meet with the speakers (which is nearly all that matters in this model). 

     I'm really unsure about the extent to which the "nodes" thing is true (and if it's true I don't really think that "speakers" are the right group), but there's something here that seems like it could be right given what we hear. There's also the added nuance that some nodes are probably in the second group of 500, and also that the size and capacity for meetings of the "nodes" group would matter.
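The two scaling stories above can be sketched numerically. This is my own toy illustration with made-up numbers (500 vs. 1000 attendees, and a hypothetical fixed group of 20 "node" speakers), not anything from the actual event data:

```python
# Toy model: how does the pool of valuable meetings scale when a
# 500-person conference doubles to 1000 attendees?

def pairwise_meetings(n: int) -> int:
    """Peer-to-peer story: if value comes from meetings between any two
    attendees, the pool of possible meetings is n choose 2 (quadratic)."""
    return n * (n - 1) // 2

def node_meetings(n: int, speakers: int = 20) -> int:
    """Node story: if nearly all value comes from meeting a fixed set of
    'nodes' (say, speakers), the pool grows only linearly in n."""
    return n * speakers

for n in (500, 1000):
    print(n, pairwise_meetings(n), node_meetings(n))

# Peer-to-peer: doubling attendees roughly quadruples the possible
# meetings (499,500 vs 124,750), so impact could more than double.
# Node story: the pool simply doubles (20,000 vs 10,000).
```

Of course, nobody takes all possible meetings, so this is only about the *option space*; the self-sorting question is about how well people pick from it.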

Thank you! 

I also really like the phrase bureaucrat's curse. Here's the relevant passage (in this post): 

As well as the unilateralist’s curse (where the most optimistic decision-maker determines what happens), there’s a risk of falling into what we could call the bureaucrat’s curse,[10] where everyone has a veto over the actions of others; in such a situation, if everyone follows their own best-guesses, then the most pessimistic decision-maker determines what happens. I’ve certainly seen something closer to the bureaucrat’s curse in play: if you’re getting feedback on your plans, and one person voices strong objections, it feels irresponsible to go ahead anyway, even in cases where you should. At its worst, I’ve seen the idea of unilateralism taken as a reason against competition within the EA ecosystem, as if all EA organisations should be monopolies. 

(In a comment, Linch points out that this is a special case of the unilateralist's curse.) I also really like the suggestions below the cited passage — on what we need to do or keep doing to manage risks properly: 

  • Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
  • Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s right (rather than the most optimistic or most pessimistic view)
  • Are highly willing to course-correct in response to feedback

(In writing, I think there's something somewhat related to the bureaucrat's curse, which is writing-by-committee, or what Stephen Clare called "death by feedback".)

For the purposes of trying out this thread, it would be nice if people posted questions as "Answers" to this post.[1] That said, you're welcome to post a question on the Forum if you think that's better: you can see a selection of those here.

Not a stupid question! 

  1. ^

    The post is formatted as a "Question" post, which might have been a mistake on my part, as it means that I'm asking people to post questions in the form of "Answers" to the Question-post, and the terminology is super confusing as a result.

I think there was a tag, but it might have gotten deleted. I made a new one — you should be able to use it now. 

Answer by Lizka · Oct 05, 2022 · 20

Our World in Data charts and YouTube videos (you can see this in this user manual, which is a bit outdated on this front — it doesn't have flashcards and Manifold markets, but is a useful reference!)

I also think that the first person to post a question will be performing a public service by breaking the ice!

Thanks so much for writing this post! I agree with everything Vaidehi said.

There do seem to be bugs around this post; I'm not really sure what's going on, but I'm flagging it to the rest of the team. I marked this post as "Personal", though — I hope that works as it should! 

(Thanks for trying this!) 

I'm curating this post — thank you so much for writing it. 

I agree with other commenters that replication is extremely precious, and I think this post chooses an excellent work to replicate — something that is quite influential for discussions about whether we should prioritize economic growth or more direct types of global health and wellbeing interventions. (Here's a pretty recent related piece by Lant Pritchett.) I also really appreciate that the conclusion about economic growth seems to rely on three very different but independently strong arguments (straightforward estimation of impact given Easterlin's values, noting that the conclusions are very sensitive to small tweaks in the methodology, and suggesting that GDP interventions might be a better approach to improving wellbeing even if Easterlin's interpretations are accurate). 

Re: the discussion on tractability, I want to note that most problems [seem to] fall within a 100x tractability range (assuming that effort on the problems has ~logarithmic returns, which seems very roughly reasonable for, say, research on economic growth or better global health interventions). ("For a problem to be 10x less tractable than the baseline, it would have to take 10 more doublings (1000x the resources) to solve an expected 10% of the problem. Most problems that can be solved in theory are at least as tractable as this; I think with 1000x the resources, humanity could have way better than 10% chance of starting a Mars colony, solving the Riemann hypothesis, and doing other really difficult things.") If I'm interpreting things correctly, I think this means a more plausible reason other interventions might be more impactful is if they're much more neglected (rather than much more tractable). Alternatively, we should simply not expect them to be more impactful. (Disclaimer: I read the tractability post quite a while back, didn't follow the links in this post, and didn't try very hard to understand the parts that I didn't understand after a first read. I also don't have any proper expertise in economics, so I might be getting things significantly wrong. I'm also writing quickly while tired.)
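The arithmetic behind the quoted tractability claim can be checked directly. This is my own sketch; the "10% of the problem per order-of-magnitude baseline" framing is an assumption taken from the quote, and "logarithmic returns" here just means each doubling of resources buys a fixed increment of progress:

```python
# Check: "10 more doublings" really is about 1000x the resources.
def resources_multiplier(extra_doublings: int) -> int:
    """Under repeated doubling, resources scale as 2**k."""
    return 2 ** extra_doublings

print(resources_multiplier(10))  # 1024, i.e. roughly 1000x

# Under logarithmic returns, progress per doubling is constant. If the
# baseline problem yields 10 percentage points of progress per doubling,
# a problem 10x less tractable yields 1 point per doubling, so reaching
# an expected 10% of the problem takes 10 doublings (~1000x resources)
# instead of 1.
baseline_points_per_doubling = 10
less_tractable_points = baseline_points_per_doubling / 10  # 1.0
doublings_needed = 10 / less_tractable_points              # 10.0
print(doublings_needed)
```

This is why a 100x tractability range corresponds to only a modest spread in doublings, which is the intuition behind the claim that neglectedness is the more plausible source of large impact differences.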

Finally, for those who like Our World in Data charts (and for those who'd appreciate a reference on what we should expect in terms of the relationship between GDP and measures of happiness) — here's a chart showing self-reported life satisfaction vs GDP per capita in different countries (note that this is different from Easterlin's approach for the paradox, which looks at differences in GDP and happiness within countries over time): 

Here are slides from my "Writing on the Forum" workshop at EAGxBerlin. 
