All of ZacharyRudolph's Comments + Replies

Hey, Zack from XLab here. I'd be happy to provide a couple sentences of feedback on your application if you send me an email.

The most common reasons for rejection before an interview were things like no indication of US citizenship or a student visa, ChatGPT-seeming responses, responses to the exercise that didn't clearly and compellingly indicate how it was relevant to global catastrophic risk mitigation, or a lack of clarity about how mission-aligned the applicant was.

We appreciate the feedback, though.      

1
Ávila Carmesí
3d
This is even more puzzling to me now, because I think I clearly satisfied all of these (I looked back over my responses, which I saved in a GDoc). But thank you for the offer! If I get over my strong desire for anonymity I'll be sure to reach out. Edit: I say "clearly" not to add emphasis to my response (I didn't mean for it to sound contrarian), but because these particular criteria seem easy to judge: they're mostly not "how good are you at X" but rather "have you done X."

Strong downvoted for hostile tone, e.g. "In a similar vein to my previous comment, I'd be curious where you're getting your data from, and would love if you could publicise the survey you must undoubtedly have done to come to these conclusions, since you are not yourself at Atlas fellow!"

Strongly agree here. Simply engaging with the community seems far better than silence. I think the object level details of FTX are less important than making the community not feel like it has been thrown to the wolves. 

I appreciate this post, but pretty strongly disagree. The EA I've experienced seems to be at most a loose but mutually supportive coalition motivated by trying to most effectively do good in the world. It seems pretty far from being a monolith or from having unaccountable leaders setting some agenda. 

While there are certainly things I don't love, such as treating EAGs as mostly opportunities to hang out, and some things like MacAskill's seemingly very expensive and opaque book press tour, your recommendations seem like they would mostly hinder efforts to ad... (read more)

Writing since I haven't seen this mentioned elsewhere, but it seems like it might be a good idea to do (and announce that you are doing) a rapid evaluation of grantee organizations that received a majority of their funding from FF, in order to provide emergency funding to the most promising and avoid the loss of institutions. If this is something OP plans on doing, it should do so quickly and unambiguously.

I'm imagining something like a potentially important org has lost its funding and employees will soon begin looking for and accept... (read more)

I ran the UChicago x-risk fellowship this summer (we'd already started by the time I learned there was a joint ERI survey so decided to stick with our original survey form). 

I just wanted to note that, for the fellows who weren't previously aware of x-risk, we observed a dramatic increase in how important they thought x-risk work was and in their reported familiarity with x-risk. Many also indicated in their written responses an intention to work on x-risk-related topics in the future, where they previously hadn't when responding to the same question. We exclusively advertised to UChicago students for this iteration, and about 2/3 of our fellows were new to EA/x-risk.

3
Nandini Shiralkar
2y
Thanks for this comment - if you do run the UChicago fellowship again, we should definitely coordinate on joint impact assessment surveys. Your finding about less x-risk-aware fellows becoming dramatically more so is very promising. It is also somewhat different from what the surveys show; I imagine multiple factors would affect reported familiarity with x-risk (e.g., types of events you ran, nature of the projects, etc.), and I would be keen to discuss this further with you over a call at some point. To put this in context for CERI, we received 650+ applications for ~24 places, so we might have filtered for prior x-risk engagement to a greater extent than you did. We probably also had different theories of change, and it will be interesting to compare our approaches.

A few questions mostly not relevant to me: 

i) If I imagine I'm still leading a student group, a few things come to mind: 

  • What does a full-time equivalent mean? For instance, I'm skeptical that most undergrads are capable of putting in a full 40 hours/week, but (a) the part-time organizer option isn't emphasized, other than in passing to say compensation would be pro-rated, and (b) the part-time hours they do put in carry a higher-than-normal opportunity cost for an organizer actively taking courses.
  • How should I know if I'm a good fit
... (read more)
3
abergal
2y
i)
1. “Full-time-equivalent” is intended to mean “if you were working full-time, this is how much funding you would receive”. The fellowship is intended for people working significantly less than full-time, and most of our grants have been for 15 hours per week of organizer time or less. I definitely don’t expect undergraduates to be organizing for 40 hours per week. I think our page doesn’t make this clear enough early on, thanks for flagging it– I’ll make some changes to try and make this clearer.
2. I think anyone who’s doing student organizing for more than 5 hours per semester should strongly consider applying. I’m sympathetic to people feeling weird about this, but want to emphasize that I think people should consider applying even if they would have volunteered to do the same activities, for two reasons: (1) I think giving people funding generally causes them to do higher-quality work. (2) I think receiving funding as an organizer makes it clearer to others that we value this work and that you don’t have to make huge sacrifices to do it, which makes it more likely that other people consider student organizing work.
3. We’re up for funding any number of organizers per group– in the case you described, I would encourage all the organizers to apply. (We also let group leaders ask for funding for organizers working less than 10 hours per week in their own applications. If two of the organizers were working 10 hours per week or less, it might be faster for one organizer to just include them on their application.)

ii)
1. (Let me know if I’m answering your question here, it’s possible I’ve misunderstood it.) I think it’s ultimately up to the person on what they want to do– I think the fellowship will generally allow more freedom than funding for a specific project, come with more benefits (see our program page), and would probably pay a higher rate in terms of personal compensation than many other funding opportuni
3
Minh Nguyen
2y
Yeah, agree with this as a student. I'm very keen on applying, but I have no reference for what the expectations are. I get that the Fellowship is flexible by design, but a point of reference (such as a hypothetical or real-life example) would really help.

WRT your second point, I actually don't think paying is as big of a deal as people are implying. I worked with very competent national-level student organisers who were willing to put in wayyyyyyyyyy more than 40 hours a week for their causes, and even get arrested or forgo prestigious awards/school for them. However, the types of people who are active in this way are also mindful of academic and career expectations. It's slightly taboo to treat social causes as sources of money, but I find that young people are forced to make this decision eventually. Society tells them any level of societal impact is subordinate to any level of career progression, so it's practically inevitable to lose competent organisers who take the first part-time job they're way overqualified for. I've seen it happen plenty.

This of course changes the way you approach recruitment, but IMO that's a good (and not entirely different) problem from the one you had relying solely on volunteers.

Funding private versions of Longtermist Political Institutions to lay groundwork for government versions

Some of the seemingly most promising and tractable ways to reduce short-termist incentives for legislators are Posterity Impact Assessments (PIAs) and Futures Assemblies (see Tyler John's work). But it isn't clear just how PIAs would actually work, e.g. what would qualify as an appropriate triggering mechanism, what evaluative approaches would be employed to judge policies, or how far into the future policies can be evaluated. It seems like it would b... (read more)

Strong upvoted. I made a graph with it for a paper I intend to use for my summer research project, and quickly found other papers I was unaware of, which I expect will be helpful.

4
MaxRa
3y
Same here, thanks a lot for the post! Would be really cool if this leads to new connections in the growing field of longtermist academia.
4
MichaelPlant
3y
Yes, there is some overlap here, certainly. OPP has, as I understand it, worked on drug decriminalisation, cannabis legalisation, and prison reform, all within the US. What we might call 'global drug legalisation' goes further with respect to drug policy reform (legal, regulated markets for all drugs, plus global scope, rather than just the US), but it also wouldn't cover non-drug-related prison reforms.

That 11,000 children died yesterday, will die today and are going to die tomorrow from preventable causes. (I'm not sure if that number is correct, but it's the one that comes to mind most readily.)

TLDR: Very helpful post. Do you have any rough thoughts on how someone would pursue moral weighting research?

Wanted to say, first of all, that I found this post really helpful in crystallizing some thoughts I've had for a while. I've spent about a year researching population axiologies (admittedly at the undergrad level) and have concluded that something like a critical-level utilitarian view is close enough to a correct view that there's not much left to say. So, in trying to figure out where to go from there (and especially whether to pursue a ... (read more)

3
Joe_Carlsmith
3y
Glad to hear you found it helpful. Unfortunately, I don't think I have a lot to add at the moment re: how to actually pursue moral weighting research, beyond what I gestured at in the post (e.g., trying to solicit lots of your own/other people's intuitions across lots of cases, trying to make them consistent, that kind of thing). Re: articles/papers/posts, you could also take a look at GiveWell's process here, and the moral weight post from Luke Muelhauser I mentioned has a few references at the end that might be helpful (though most of them I haven't engaged with myself). I'll also add, FWIW, that I actually think the central point in the post is more applicable outside of the EA community than inside it, as I think of EA as fairly "basic-set oriented" (though there are definitely some questions in EA where weightings matter).

I'm mostly using "person" to be a stand in for that thing in virtue of which something has rights or whatever. So if preference satisfaction turns out to be the person-making feature, then having the ability to have preferences satisfied is just what it is to be a person. In which case, not appropriately considering such a trait in non-humans would be prima facie wrong (and possibly arbitrary).

1
MichaelStJules
5y
I agree, but I think it goes a bit further: if preference satisfaction and subjective wellbeing (including suffering and happiness/pleasure) don't matter in themselves for a particular nonhuman animal with the capacity for either, how can they matter in themselves for anyone at all, including any human? I think a theory that does not promote preference satisfaction or subjective wellbeing as an end in itself for the individual is far too implausible. I suppose this is a statement of a special case of the equal consideration of equal interests.

I'm familiar with the general argument, but I find it persuasive in the other direction. That is, I find it plausible that there are human animals for whom personhood fails to pertain, so ~(2). [Disclaimer: I'm not making any further claim to know what sort of humans those might be nor even that coming to know the fact of the matter in a given case is within our powers.] I don't know if consciousness is the right feature, but I worry that my intuitive judgements on these sorts of features are ad hoc (and will just pick out whatever group I a... (read more)

2
MichaelStJules
5y
I think if you decide what we should promote in a human for its own sake (and there could be multiple such values), then you'd need to explain why it isn't worth promoting in nonhumans. For example, if preference satisfaction matters in itself for a human, then why does the presence or absence of a given property in another animal imply that it does not matter for that animal? For example, why would the absence of personhood, however you want to define it, mean the preferences of an animal don't matter, if they still have preferences? In what way is personhood relevant and nonarbitrary where say skin colour is not? Like "preferences matter, but only if X". The "but only if X" needs to be justified, or else it's arbitrary, and anyone can put anything there.

I see personhood as binary, but also graded. You can be a person or not, and if you are one, you may have the qualities that define personhood to a greater or lesser degree.

If you're interested in some more reading defending the case for the consideration of the interests of animals along similar lines, here are a few papers:
https://philpapers.org/rec/HORWTC-3
https://stijnbruers.wordpress.com/2018/12/13/speciesism-arbitrariness-and-moral-illusions/amp/

Yes! It's much more conducive to conversation now, and I've changed my vote accordingly.

To actually engage with your question: I personally find (1) to be the most motivating reason to adopt a more vegetarian diet, since I'm more compelled by the idea that my actions might be harming other persons. Regardless, (1) and (2) are both grounded in empirical observations (and both are seriously questionable in how much of a difference they make in the individual case: see this and the number of confounding factors in veg diets causin... (read more)

2
MichaelStJules
5y
I think the best explanation for the moral significance of humans is consciousness. Conscious individuals (and those who have been and can again be conscious) matter because what happens to them matters to them. They have preferences and positive and negative experiences. On the other hand, (1) something that is intelligent (or has any other property) but could never be conscious doesn't matter in itself, while (2) a human who is conscious but not intelligent (or any other property) would still matter in themself. I think most would agree with (2) here (but probably not (1)), and we can use it to defend the moral significance of nonhuman animals, because the category "human" is not in itself morally relevant. Are you familiar with the argument from species overlap? https://www.animal-ethics.org/argument-species-overlap/

"(3) The ethical argument: killing or abusing an animal for culinary enjoyment is morally unsound"

I'm understanding abuse as being wrong by definition, a la how murder is by definition a wrongful killing. (3) seems to transparently be a case of arguing that something that is wrong is thus wrong. But, I agree, this by itself wouldn't warrant downvoting so much as how the generally dismissive tone of the writing came off as assuming some moral high ground, e.g. "to accept that this being with no identity, little conceivable intellect... (read more)

5
Tihitina
5y
I see what you mean, and I've made some significant changes (let me know if you don't think they are significant enough). But I want to make it clear that I am not claiming neutrality on the issue; I am trying to troubleshoot why one side of the argument is not being received. That being said, I don't want my position to distract or deter people from helping with troubleshooting, so I am grateful you said something.

Downvoted for question begging in the way you phrased the "ethical argument," and descriptions like "the mere desire of taste." [Edit: I changed my vote based on the changes made.]

4
MichaelStJules
5y
Unless the post has been edited, I don't see this as necessarily question begging, although I can also see why you might think that. My reading is that the claim is assumed to be true, and the post is about how to best convince people of it (or to become more empathetic) in practice, which need not be through a logical argument. It's not about proving the claim. It could be that making it easier for people to avoid animal products is a way to convince them (or the next generation) of the claim. Another way might be getting them to interact with or learn more about animals and their personalities.

In that case, it seems plausible that you (and your coworkers) will do more and better work if you're not just ascetically grinding away for decades (and if they aren't spending time around someone like that). Perhaps, a good next step is to shadow/intern with/talk to people currently doing these jobs to learn what they look like day to day?

I don't think I can give much specific advice, but it doesn't seem like you're putting much of a weight on what you want to do. For instance, it seems like you're somewhat disappointed that 80k advised against working in AI ethics. If so, I'd suggest maybe applying anyway or considering good programs not in the top 10 (most school rankings seem to be fairly arbitrary in my experience anyway) with the knowledge that you might have to be a little more self-motivated to do "top 10" quality work.

Alternatively, it might be the... (read more)

1
Nathan Young
5y
I suppose I'm not putting much weight on it, other than what is required to keep me working at a problem for the long term. The issue there is that I don't know what working at many of these jobs will be like... In terms of desires, I would like most of all to have a legitimate ethical system. I value that more than my own wellbeing and my own desires. So I don't really care what I want other than instrumentally. I do things I *want* on my own time, whereas for my career I'd like to maximise as much as I can. At least I think so - it's hard to know what you really want, right? Perhaps I'll end up justifying what I want to do anyway. I suppose this process at least stops me from making significantly non-maximal choices.

I'm not sure I understand your objection, but I feel like I should clarify that I'm not endorsing consequentialism as a sort of moral criterion (that is, the thing in virtue of which something is right or wrong) so much as I take the "effective" part of effective altruism to imply using some sort of nonmoral consequentialist reasoning. As far as I understand (which isn't far), a Catholic moral framework would still allow for some sort of moral quantification (that some acts are more good than others, or are good to a greater degree), e... (read more)

You're right. What I was trying to get at was that I presume Catholics would start with different answers to axiological questions like "what is the most basic good?" Where I might offer a welfarist answer, the Church might say "a closeness to God" (I'm not confident in that). Thus, if a Catholic altruist applies the "effective" element of EA reasoning, the way to do the most good in the world might end up looking like aggressive evangelism in order to save the most souls. And that if we're trying to convin... (read more)

Tl;dr the moral framework of most religions is different enough from EA to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral framework but there's potentially scope for religions to adopt EA tools

Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this man... (read more)

I've spent some time seriously trying to convince a devout Catholic friend of mine about EA. The problem, as far as I can tell, is that EA and the Church have value systems that are almost directly at odds. I mean that, if you take their value system seriously, the rational course of action isn't EA. At least, not in the manner meant here.

My understanding: Essentially, the Church already has an entrenched long-termist view. It's just that the hugely disvaluable outcome is a soul or souls spending eternity in hell (or however long in purgato... (read more)

5
Liam_Donovan
5y
I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

I started quantitatively "upskilling" almost exactly a year ago, after eschewing math classes for... a while. I spent this past academic year taking the calc series. Now working through MIT OpenCourseWare's multivariable course this summer to test out of it when I get to AC.

Contingent on testing out, it should only be two math classes/semester to meet the requirements.

Do you recall which Facebook group/page? I searched the "Effective Altruism" group for keywords like major/college but didn't find anything.

Thanks for the class suggestion. I'll look into what they offer on that.

2
DavidNash
5y
It is probably this career discussion one.
2
Kirsten
5y
It might have been the career advice group, but I'm not sure.

Thank you, I've actually read that article before. I asked here because there seem to be all kinds of factors that could confound the usefulness of the advice there, e.g. it might be tailored to the average reader/their ideal reader, or there may be limitations on what they want to publicly advise.

I figured responses here might be less fit to the curve and thus more useful since I'm not confident of being on that curve.