All of Alex Mallen's Comments + Replies

It is unclear in the first figure whether to compare the circles by area or diameter. I believe the default impression is to compare area, which I think is not what was intended and so is misleading.

2
Ariel Simnegar
4mo
Comparing area was intended :) If it's unclear, I can add a note which says the circles should be compared by area.

I'm guessing it would be a good idea to talk to people who are more skeptical about this project, so that you can avoid the unilateralist's curse. It's not clear how much you've done that already (apart from posting on the forum!).

How long do you expect students to participate?

Too much focus on existing top EA focus areas can lead to community stickiness. If this is just meant as a somewhat quick pipeline to introduce people to the ideas of EA once they've already settled into a field, this might be okay. Also, most EAs have historically been convinced at a younger age (<30), when they are more flexible.

0
Benjamin Eidam
2y
Thanks for your comment, Alex! I think if you have people for 3-5 weeks at least and literally give them a new perspective on the world, a lot of them can at least realistically consider EA, because they simply know how to place it as a "meta" mental model. Which brings me to the answer to your question, "How long do you expect students to participate?": based on my experience, at least 4 weeks, and up to 16 weeks. As I wrote, I borrow concepts that are already working in a real-life context, so there is no need to experiment there. Cheers, Ben

This post by Rohin attempts to address it. If you hold the asymmetry view, then you would allocate more resources to [1] causing a new neutral life to come into existence (-1 cent) and then, once they exist, improving that neutral life (many dollars) than you would to [2] causing a new happy life to come into existence (-1 cent). They both result in the same world.

In general, you can make a Dutch book argument like this whenever your resource allocation doesn't correspond to the gradient of a value function (i.e., the resources should be aimed at improving the state of the world).
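As a rough illustration of the money pump, here is a minimal sketch with made-up numbers (the dollar amounts and variable names are hypothetical, not from the comment): both action sequences end in the same world, one person living a happy life, yet the asymmetry-style willingness to pay spends far more on the first path.

```python
# Minimal money-pump sketch; all numbers are hypothetical.
# Path 1: get paid 1 cent to create a neutral life, then pay many dollars to improve it.
# Path 2: get paid 1 cent to create a happy life directly.
wtp_create_neutral = -0.01    # creating a (neutral or happy) life is valued at ~0, so you only do it if paid
wtp_improve_existing = 10.00  # improving an existing neutral life is valued highly
wtp_create_happy = -0.01

path_1_spend = wtp_create_neutral + wtp_improve_existing  # 9.99
path_2_spend = wtp_create_happy                           # -0.01

# Same final world-state, very different total spend; that gap is what a Dutch book exploits.
# An allocation derived from a value function on world-states would price both paths identically.
print(path_1_spend, path_2_spend)
```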

9
Anthony DiGiovanni
2y
This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a "neutral" life (without relieving any suffering by doing so). If the reason you don't consider it good to create new lives with more happiness than suffering is that you don't think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can't get Dutch booked this way. See this comment.

Thank you for pointing me to that and getting me to think critically about it. I think I agree with all the axioms.

a rational agent should act so as to maximize the expected value of their value function

I think this is misleading. The VNM theorem only says that there exists a function u such that a rational agent's actions maximize the expected value of u. But u does not have to be "their value function."

Consider a scenario in which there are three possible outcomes: enormous suffering, neutral, and mild joy. Let's say m... (read more)
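A hypothetical numerical illustration of the point above (the outcome values, probabilities, and the transform u are all made up, since the original example is truncated): two agents can both satisfy the VNM axioms while ranking the same lotteries differently, because VNM only guarantees that some function u is being expectation-maximized, not that u equals the agent's value function v.

```python
import math

# Hypothetical value function v over the three outcomes (numbers are illustrative).
v = {"enormous_suffering": -1000.0, "neutral": 0.0, "mild_joy": 1.0}

# Lottery A: neutral for sure. Lottery B: mild joy almost surely, tiny chance of enormous suffering.
lottery_a = {"neutral": 1.0}
lottery_b = {"mild_joy": 0.999, "enormous_suffering": 0.001}

def expected(util, lottery):
    """Expected utility of a lottery under the utility function `util`."""
    return sum(p * util[outcome] for outcome, p in lottery.items())

# An agent maximizing expected v prefers A (0.0 > -0.001).
print(expected(v, lottery_a), expected(v, lottery_b))

# A VNM-rational agent may instead maximize expected u, where u is an increasing
# but non-affine transform of v; its choices still satisfy the VNM axioms,
# yet it prefers B (~0.0986 > 0.0), so u is not "their value function" v.
u = {outcome: math.tanh(value / 10.0) for outcome, value in v.items()}
print(expected(u, lottery_a), expected(u, lottery_b))
```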

I'm concerned about getting involved in politics under an explicitly EA framework when currently only 6.7% of Americans have heard of EA (https://forum.effectivealtruism.org/posts/qQMLGqe4z95i6kJPE/how-many-people-have-heard-of-effective-altruism). There is a serious risk of many people's first impressions of EA being bad/politicized, with bad consequences for the long-term potential of the movement, because political opponents will be incentivized to attack EA directly when attacking a candidate running on an EA platform. If people are e... (read more)

Agreed. By the way, the survey is not representative, and people often say they've heard of things that they have not. I think the true number is an order of magnitude lower than the survey suggests.

To me it seems the main concern is with using expected value maximization, not with longtermism. Rather than being rationally required to take the action with the highest expected value, I think you are probably only rationally required not to take any action resulting in a world that is worse than an alternative at every percentile of the probability distribution. So in this case you would not have to take the bet, because at the 0.1st percentile of the probability distribution taking the bet has a lower value than the status quo, while at the 99th percentile i... (read more)

2
Kei
2y
Relevant: The von Neumann-Morgenstern utility theorem shows that under certain reasonable-seeming axioms, a rational agent should act so as to maximize the expected value of their value function: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem There have of course been arguments raised against some of the axioms; I think people most commonly argue against axioms 3 and 4 from the link.
3
MichaelStJules
2y
FWIW, stochastic dominance is a bit stronger than you write here, since you can allow A to strictly beat B at only some quantiles, with equality at the rest, and A still dominates B.
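As a minimal sketch of this dominance check (the function name and the use of sampled value distributions compared at matched quantiles are assumptions for illustration): B first-order stochastically dominates A when B is at least as good at every quantile and strictly better at some.

```python
import numpy as np

def dominates(b_samples, a_samples, quantiles=np.linspace(0, 100, 101)):
    """Return True if B (weakly) beats A at every quantile of the value
    distribution and strictly beats it at at least one, i.e. B dominates A."""
    a_q = np.percentile(a_samples, quantiles)
    b_q = np.percentile(b_samples, quantiles)
    return bool(np.all(b_q >= a_q) and np.any(b_q > a_q))

# Hypothetical example: "take the bet" shifts the whole value distribution upward,
# so it dominates the status quo; the reverse comparison fails.
rng = np.random.default_rng(0)
status_quo = rng.normal(loc=0.0, scale=1.0, size=10_000)
take_bet = status_quo + 5.0

print(dominates(take_bet, status_quo))  # True
print(dominates(status_quo, take_bet))  # False
```

On this picture, an action is ruled out only when some alternative dominates it in this sense; when neither dominates, as in the bet example above, taking the bet is permitted but not required.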
2
NunoSempere
2y
I think this is probably wrong, and I view stochastic dominance as a backup decision rule, not as a total replacement for expected value. Some thoughts here.
5
MichaelStJules
2y
(Note: I've made several important additions to this comment within the first ~30 minutes of posting it, plus some more minor edits after.)

I think this is an important point, so I've given you a strong upvote. Still, I think total utilitarians aren't rationally required to endorse EV maximization or longtermism, even approximately, except under certain other assumptions.

Tarsney has also written that stochastic dominance doesn't lead to EV maximization or longtermism under total utilitarianism if the probabilities (probability differences) are low enough, and has said it's plausible the probabilities are in fact that low (not that he said it's his best guess they're that low). See "The epistemic challenge to longtermism", and especially footnote 41.

It's also not clear to me that we shouldn't just ignore background noise that's unaffected by our actions, or generally balance other concerns against stochastic dominance, like risk aversion or ambiguity aversion, particularly with respect to the difference one makes, as discussed in "The case for strong longtermism" by Greaves and MacAskill in section 7.5. Greaves and MacAskill do argue that ambiguity aversion with respect to the outcomes doesn't point against existential risk reduction, and, if I recall correctly from following citations, that ambiguity aversion with respect to the difference one makes is too agent-relative.

On the other hand, using your own precise subjective probabilities to define rational requirement seems pretty agent-relative to me, too. Surely, if the correct ethics is fully agent-neutral, you should be required to do what actually maximizes value among available options, regardless of your own particular beliefs about what's best. Or, at least, precise subjective probabilities seem hard to defend as agent-neutral, when different rational agents could have different beliefs even with access to the same information, due to different priors or because they weigh evidence differently. Plus, wi

Yes, I have a group going now!

2
Chris Leong
2y
That's great!

To what extent are there already similarly dangerous pathogen genomes on the internet? I'm guessing that things like smallpox are less of a worry because we already have a vaccine for them, but if many novel, certified pandemic-grade pathogen genomes are already available then adding more seems significantly less harmful.

2
Likith Govindaiah
2y
Kevin claims there are none at the moment that he's particularly concerned about (in large part because we have already developed vaccines/antivirals for most such viruses).

I was wondering how Rohin tried starting the group. If he was doing it remotely, then it seems like that may have been a factor in why it failed the second time (because it would be hard to form a community). Thanks for suggesting messaging the people who most recently joined the UW EA Facebook group--I didn't think there were any new people, but there are a few!

2
riceissa
2y
He was at UW in person (he was a grad student at UW before he switched his PhD to AI safety and moved back to Berkeley).

I did get in contact with Jessica McCurdy from CEA, who I plan to talk to soon about getting started and the fall accelerator program, and I just filled out the interest form. I'll keep looking for EAs on Reddit--thanks for the suggestion! I think it would be valuable to have a chat with you as well.

Hey everyone, I'm also new to the forum and to EA as of summer 2021. I found EA mostly through Lex Fridman's old podcast with Will MacAskill, which I watched after being reminded of EA by a friend. Then I read some articles on 80,000 hours and was pretty convinced.

I'm a sophomore computer science student at the University of Washington. I'm currently doing research with UW Applied Math on machine learning for science and engineering. It seems like my most likely career is in research in AI or brain-computer interfacing, but I'm still deciding and have an a... (read more)

2
Chris Leong
2y
Did you reach out to groups@centreforeffectivealtruism.org?

I wonder if the probability L = 90% is an overestimate of the likelihood of the regime lasting indefinitely. It seems reasonable that the regime could end because of a global catastrophe that is not existential, reverting us to a preindustrial society. For example, nuclear war could end regimes if/while there are multiple states, or climate change could cause massive famine. On the other hand, is it reasonable to think that BCI would create so much stability that even the death of a significant proportion of its populace would not be able to end it?