
OGTutzauer🔸

Engineering Physics Student @ Lund University
172 karma · Joined · Pursuing a graduate degree (e.g. Master's) · Lund, Sweden

Bio


I lead Effective Altruism Lund in southern Sweden, while wrapping up my M.Sc. in Engineering Physics specializing in machine learning. I'm a social team player who likes high ceilings and big picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare. 

My interests outside of EA, in hieroglyphs: 🎸🧙🏼‍♂️🌐💪🏼🎉📚👾🎮✍🏼🛹

Comments (14)

I have a post about this sitting in my drafts. I think I'll just delete it and tell people to read this quick take instead. Strong upvote. 

As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same.

Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.

Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet, don't underestimate the power and efficiency of utilizing your intimate knowledge of your group's immediate requirements.

Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.

I'm sad to hear that you'd feel manipulated by my reply to the QALY-doubting response, but I'm very happy and thankful to get the feedback! We do want to show that EA has some useful tools and conclusions, while also being honest and open about what's still being worked on. I'll take this to heart. 

I feel the need to clarify that none of these responses are meant to be "sales-y" or to trick people into joining a movement that doesn't align with their values. My reply was based more on the idea that we need more skeptics. If they have epistemic (as opposed to ethical) objections, I think it's particularly important to signal that they're invited. My condolences for having gotten such awful advice from whatever organization it was, but that's not how we do things at EA Lund.

For a more realistic example, I talked to one person who said that they'd focus significantly on homelessness in their own city as well as homelessness in Rwanda, because it's unfair not to divide the resources. They're not doing the most good, because they find it more ethical to divide their resources.

So I think your professor's description is good, but I'm not sure it helps when discussing egalitarianism/prioritarianism with laypeople on their own terms. When I say I'd give everything to Rwanda, I'm answering "what does the most good?" and not "what's the most fair/just?" Nonetheless, I'll consider raising this response next time the objection comes up.

That's a mistake, thanks for pointing it out! That final sentence wasn't meant to stay in. That is, I think institutional trust is part of the trunk and not the branches.

I agree with your side point that there are some ideas & tools within EA that many would find useful even while rejecting all of the EA institutions. 

I'm sorry if the title was misleading; that was not my intention. I think you and I have different views on the average forum user's population ethics. Had I believed that more of the people reading this held a totalist (or similar) view, I would have been much more up front about my take not being valid for them. Believing the opposite, I put the conclusion you'd get from non-person-affecting views as a caveat instead.

That aside, I'd be happy to see the general discourse spell out more clearly that population ethics is a crux for x-risks. I've only gotten - and probably at some points given - the impression that x-risks are similarly important to other cause areas under all population ethics. This runs the risk of baiting people into working on things they logically shouldn't believe to be the most pressing problem.

On a personal note, I concede that extinction is much worse than 10 billion humans dying. This is, however, for non-quantitative reasons. Tegmark has said something along the lines of a universe without sapience being terribly boring, and that weighs quite heavily into my judgement of the disutility of extinction.

Thanks for such an in-depth reply! I have two takes on your points, but before that I want to give the disclaimer that I'm a mathematician, not a philosopher, by training.

First, we're not saying that the lightcone solution implies we should always save Jones. Indeed, there could still be a large enough number of viewers. What we are saying is this: previously, you could say that for any suffering S Jones is experiencing, there is some number of viewers X whose mild annoyance A would in aggregate be greater than S. What's new here is the upper bound on X, so A*X > S could still be true (and we let Jones suffer), but it can't necessarily be made true for any S by picking a sufficiently large X.
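To spell out the structure (a sketch in my own notation: S is Jones's suffering, A the per-viewer annoyance, X the number of viewers, and X_max my label for the lightcone-imposed cap on X):

$$\text{Without a bound: } \forall S \;\exists X \text{ such that } A \cdot X > S.$$

$$\text{With the bound } X \le X_{\max}\text{: } A \cdot X > S \text{ can hold only if } S < A \cdot X_{\max}.$$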

As to your point about there being different numbers of viewers X in different worlds, yep, I buy that! I even think it's morally intuitive that if more suffering A*X is caused by saving Jones, then we have less reason to do so. This for me isn't a case of moral rules not holding across worlds: the situations are different, but we're still making the same comparison (A*X vs S). I'll caveat this by saying that I've never thought too hard about moral consistency across worlds.

I'm not sure I follow. Are you saying that accepting that there is a finite amount of potential suffering in our future would imply that x-risk reduction is problematic?

I buy that. One way of putting it would be to say that if you use a parliamentary method of resolving moral uncertainty, the "non-totalist population ethics rep" and the "non-longtermist rep" should both rank farmed animal welfare as greater in scale than biorisk. Does that seem more useful?

To throw some numbers in here, point 2 would need a lot of countries to all decide it's not worth it to fill the funding gap even a little. Let's say there are 50 countries that could (I'd estimate half of them to be in Europe), and each decides not to fund with probability 1-p.

The probability that they all decide not to fund is then (1-p)^50. If p is something like half a percent, there's a 78% risk of no country filling the funding gap. If all three steps have 78% probability, then yeah, we do approach 50% of them all happening.
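A minimal sanity check of that arithmetic, assuming (as a simplification) that the 50 countries decide independently with the same funding probability p:

```python
# Chance that none of n countries fills the funding gap, assuming each
# decides independently and funds with probability p (a simplification).
def prob_no_country_funds(p: float, n: int = 50) -> float:
    return (1 - p) ** n

print(prob_no_country_funds(p=0.005))       # ~0.78, i.e. roughly a 78% risk nobody funds
print(prob_no_country_funds(p=0.005) ** 3)  # ~0.47, if all three steps sit at ~78%
```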
