All of Robi Rahman🔸's Comments + Replies

3
Christoph Hartmann 🔸
Yes, https://donethat.ai/download

Why do you think global health is no longer neglected?

1
jenn
I don't think global health is no longer neglected. However, I'm no longer fully confident that donating to GiveWell is the most effective way to support human welfare, due to (very positive) infrastructure shifts where the most effective charities in this space get some sort of institutional backstop. While I acknowledge that it is not actually literally a 1:1 substitution, I think it's reasonable to model this as a bit of a handicap[1] on effectiveness when I donate to the EA endorsed charities. Further, GiveWell's current 8x baseline does not seem to me to be that high of a bar, and I suspect there are many more charities and interventions that are neglected by EAs and are possibly more useful for me to fund as they have no institutional backstops.

When I combine these facts, it seems to me like there's a reasonable chance that... the same way that EA treated the rest of the philanthropic landscape "adversarially" when thinking about what to fund and avoided the overcrowded areas, perhaps it might make sense for at least a small contingent of people to start treating EA "adversarially" in the same way. Does that make sense?

1. ^ I don't know what the size of this handicap is, I was roughly modelling it as 0.5xing my donation, but the other comments provide some evidence that it's much smaller than I think it is. But I'm still not entirely sure and there isn't good information on this. One thing I would like to do is to figure out what this actual number is.
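A minimal back-of-the-envelope version of the comparison being described here; the 0.5x handicap, the 8x GiveWell multiple, and the numbers for the hypothetical neglected charity are all illustrative placeholders, not estimates:

```python
# Back-of-the-envelope comparison of a "backstopped" top charity vs. a
# neglected one. All numbers are hypothetical placeholders.

def effective_value(cost_effectiveness_multiple, counterfactual_weight):
    """Value per dollar relative to a cash-transfer baseline, discounted by
    how much of the donation an institutional backstop would have replaced."""
    return cost_effectiveness_multiple * counterfactual_weight

# Top charity: ~8x the baseline, but with a 0.5x "handicap" because an
# institutional funder might have filled the gap anyway.
backstopped = effective_value(8.0, 0.5)   # -> 4.0

# Hypothetical neglected charity: only ~5x the baseline, but no backstop,
# so the donation is fully counterfactual.
neglected = effective_value(5.0, 1.0)     # -> 5.0

print(f"backstopped top charity: {backstopped:.1f}x baseline")
print(f"neglected charity:       {neglected:.1f}x baseline")
```

Under these made-up numbers the neglected charity wins, but the conclusion flips if the handicap is much smaller than 0.5x, which is exactly the number the footnote says is uncertain.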

I've just noticed that the OBBB Act contains a "no tax on overtime" provision, exempting extra overtime pay up to a deduction of $12,500, for tax years 2025-2028. If you, like me, are indifferent between 40-hour workweeks and alternating 32- and 48-hour workweeks, you can get a pretty good extra tax deduction. This can be as easy as working one weekend day every 2 weeks and taking a 3-day weekend the following week. (That's an upper bound on the difficulty! Depending on your schedule and preferences there are probably even easier ways.) Unfortunately this only works for hourly, not salaried, employees.
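A rough sketch of the arithmetic, assuming a 1.5x overtime rate and that the deduction covers only the premium (above-regular-rate) portion of overtime pay; the $60/hour wage is a made-up example and none of this is tax advice:

```python
# Rough sketch of the overtime-deduction arithmetic for alternating 32- and
# 48-hour weeks (same total hours as steady 40-hour weeks). Assumes a 1.5x
# overtime rate and that the deduction applies to the premium portion of
# overtime pay, capped at $12,500. The wage is a made-up example.

hourly_wage = 60.0            # hypothetical
overtime_multiplier = 1.5
deduction_cap = 12_500.0

cycles_per_year = 52 // 2     # one 48-hour week in each 2-week cycle
overtime_hours = cycles_per_year * 8

premium_pay = overtime_hours * hourly_wage * (overtime_multiplier - 1)
deduction = min(premium_pay, deduction_cap)

print(f"Overtime hours per year:   {overtime_hours}")     # 208
print(f"Deductible premium pay:    ${premium_pay:,.0f}")  # $6,240 at $60/hr
print(f"Deduction after the cap:   ${deduction:,.0f}")
```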

Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.

Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.

Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be ver... (read more)

  1. I'm definitely not assuming the my-favorite-theory rule.
  2. I agree that what I'm describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don't use it.
  3. Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity - I don't see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ vie
... (read more)
3
groundsloth
I don't know how philosophically sound they are, but the following rules, taken from the RP moral parliament tool, would end up splitting donations among multiple causes:

* Maximize Minimum: "Sometimes termed the 'Rawlsian Social Welfare Function', this method maximizes the payoff for the least-satisfied worldview. This method treats utilities for all worldviews as if they fall on the same scale, despite the fact that some worldviews see more avenues for value than others. The number of parliamentarians assigned to each worldview doesn't matter because the least satisfied parliamentarian is decisive."
* Moral Marketplace: "This method gives each parliamentarian a slice of the budget to allocate as they each see fit, then combines each's chosen allocation into one shared portfolio. This process is relatively insensitive to considerations of decreasing cost-effectiveness. For more formal details, see this paper."

There are a few other voting/bargaining-style views they have that can also lead to splitting. I don't really have anything intelligent to say about whether or not it makes sense to apply these rules for individual donations, or whether these rules make sense at all, but I thought they were worth mentioning.
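For concreteness, a minimal sketch of how those two rules could allocate a budget between two causes; everything here (the credences and the toy "satisfaction" functions) is made up for illustration, and the real RP tool handles diminishing returns and much more:

```python
# Toy sketch of these two allocation rules over two causes, A and B.
# Worldviews, credences, and "satisfaction" functions are made-up numbers.

budget = 1.0
worldviews = {
    # satisfaction as a function of (spend on A, spend on B)
    "animal-inclusive": {"credence": 0.3, "util": lambda a, b: 10 * a + 1 * b},
    "human-centric":    {"credence": 0.7, "util": lambda a, b: 1 * a + 4 * b},
}

# Moral Marketplace: each worldview spends its credence-weighted slice of
# the budget on whatever it likes best; the slices are then pooled.
marketplace = [0.0, 0.0]
for wv in worldviews.values():
    slice_ = budget * wv["credence"]
    options = [(slice_, 0.0), (0.0, slice_)]          # all-in on A or on B
    best = max(options, key=lambda s: wv["util"](*s))
    marketplace = [marketplace[0] + best[0], marketplace[1] + best[1]]

# Maximize Minimum: search for the split that maximizes the satisfaction
# of the worst-off worldview. Credences play no role at all.
splits = [(i / 100 * budget, budget - i / 100 * budget) for i in range(101)]
maximin = max(splits, key=lambda s: min(wv["util"](*s) for wv in worldviews.values()))

print("Moral Marketplace (A, B):", tuple(round(x, 2) for x in marketplace))
print("Maximize Minimum  (A, B):", tuple(round(x, 2) for x in maximin))
# Both rules split the budget here: (0.3, 0.7) and (0.25, 0.75).
```

Note that the Maximize Minimum allocation doesn't change no matter how the credences are set, which is the property Robi objects to above.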

Of course they might be uncertain of the moral status of animals and therefore uncertain whether a donation to an animal welfare charity or to a human welfare charity is more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but if an individual splits their donations in that way, they are reducing the impact of their donations relative to contributing only to one or the other.
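A minimal numerical illustration of why splitting loses expected value for an individual donor, assuming roughly linear returns at the scale of a personal donation (all figures are hypothetical):

```python
# Toy model: splitting a fixed donation between an animal charity and a
# human charity under moral uncertainty, assuming linear returns.
# All numbers are hypothetical.

donation = 1000.0
p_animals_matter = 0.5          # credence that animal welfare counts morally

value_per_dollar_animal = 10.0  # value per dollar if animals matter, else ~0
value_per_dollar_human = 3.0    # value per dollar either way

def expected_value(frac_to_animals):
    animal = donation * frac_to_animals * value_per_dollar_animal * p_animals_matter
    human = donation * (1 - frac_to_animals) * value_per_dollar_human
    return animal + human

for frac in (0.0, 0.5, 1.0):
    print(f"{frac:.0%} to animals -> expected value {expected_value(frac):,.0f}")
# 0% -> 3,000;  50% -> 4,000;  100% -> 5,000: the all-in option dominates.
```

Because expected value is linear in the split under this assumption, the optimum is always a corner solution; diversification only starts to matter at funding scales large enough to hit diminishing returns.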

4
groundsloth
You seem to be assuming a maximize-expected-choiceworthiness or a my-favorite-theory rule for dealing with moral uncertainty. There are other plausible rules, such as a moral parliament model, which could endorse splitting.

Moral uncertainty is completely irrelevant at the level of individual donors.

2
groundsloth
Why would this be? For example, could not an individual donor be uncertain of the moral status of animals and therefore morally uncertain about the relative value of donations to an animal welfare charity compared to a human welfare one?

Can you give examples of "adversarial" altruistic actions? Like protesting against ICE to help immigrants? Getting CEOs fired to improve what their corporations do?

1
RedCat
FTX
4
NickLaing
Animal welfare corporate campaigns are often adversarial to some extent, and they're heavily EA-funded. AI safety stuff sometimes is too.
5
Andrew Roxby
I think I was envisioning the debate as something like 1) Do these sets (the sets of altruistic and adversarial actions) occasionally intersect? 2) Does that have any implications for EA as a movement?  But to answer your question, I think a paradigmatic example for the purposes of debating the topic would be the military intervention in and defeat of an openly genocidal and expansionist nation-state; i.e., something requiring complex, sophisticated adversarial action, up to and including deadly force, assuming that the primary motivations for the defeat of said nation state were the prevention of catastrophic and unspeakable harm. Exploring what the set of altruistic adversarial actions might look like at various scales and in various instances could potentially be a generative part of the debate. 

By "greater threat to AI safety" you mean it's a bigger culprit in terms of amount of x-risk caused, right? As opposed to being a threat to AI safety itself, by e.g. trying to get safety researchers removed from the industry/government (like this).

3
Dylan Richardson
I mean all of the above. I don't want to restrict it to one typology of harm, just anything affecting the long-term future via AI, which includes not just x-risk but value lock-in, s-risks, and multi-agent scenarios as well. And I'm making extrapolations from Musk's willingness to directly impose his personal values, not just current harms. Side note: there is no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren't. So let's say just OpenAI. I would be very surprised if this doesn't split discussion at least 60/40.

What is positivism and what are some examples of non-positivist forms of knowledge?

This is probably a simplification but I'll try:

Positivism asks: What is true, measurable, and generalisable?
Within this frame, Effective Altruism privileges phenomena that can be quantified, compared, and optimised. What cannot be measured is not merely sidelined but often treated as epistemically inferior or irrelevant.

German theoretical physicist Werner Heisenberg, Nobel laureate for his foundational work in quantum mechanics, explicitly rejected positivism:

“The positivists have a simple solution: the world must be divided into that which we can say clea

... (read more)

IMO, merely 4x-ing the number of individual donors or the frequency of protests isn't near the threshold for "mass social change" in the animal welfare area.

3
Dylan Richardson
Yes, you are probably right. I just threw that out as a stand-in for what I'm looking for. Ending all factory farming is too high a bar (and might just happen due to paper clipping instead!). Maybe 10-20x-ing donor numbers is closer? I'd reference survey data instead, but public opinion is already way ahead of actual motivations. But maybe "cited among the top 10 moral problems of the day" would work. Could also be the number of vegans.

"Individual donors shouldn't diversify their donations"

Arguments in favor:

  • this is the strategy that maximizes the benefit to the recipients

Arguments against:

  • it's personally motivating to stay in touch with many causes
  • when each cause comes up in a conversation with non-EAs, you can mention you've donated to it
1
Benton 🔸
Another argument against: moral uncertainty

I'm not a lawyer but this sounds... questionably legal.

2
Jason
The most likely problem is that a donor must reduce the amount of their deduction by the amount of the personal benefit they received as a result of the donation. That the benefit is bestowed by a third party doesn't change the result. Here, the donor is receiving a very real and substantial personal benefit (release from their personal indebtedness). So after the reduction, their donation amount is $0.
2
NickLaing
Why would this be illegal?

Can I take you up on the offer to do a video call and see if we can install it on Chrome OS? Will DM you

1
Damin Curtis🔹
LMK what the outcome is (if it works smoothly on chromebook after you do this stuff)!
3
Christoph Hartmann 🔸
DMed you

In the same way that two human superpowers can't simply make a contract to guarantee world peace, two AI powers could not do so either.

That's not true. AI can see (and share) its own code.

Just want to note that I think this comment has basically been vindicated in the three years since FTX.

I love this idea, and I think you're on to something with

We don't notice how much of EA's "independent thinking" comes from people who can afford to do it.

(but I disagree-voted because I don't think "EA should" do this; I doubt it's cost-effective)

I got to the terminal but wasn't able to access the download and gave up at that step because for some reason I assumed it would only install the app for the linux development environment as opposed to the rest of Chrome OS. I'll try again, and email you if I can't get it working.

Is it possible to use it on Chrome OS somehow? It auto-detects that as Linux but I think it won't work if I use the Linux installer. I'm pretty sure it would be installable as a browser add-on but then not sure if it would work when you're using other programs.

1
Christoph Hartmann 🔸
I don't have one so we'd have to try this together but according to ChatGPT you can activate Linux in your Chrome OS. Open Settings → Developers. Under “Linux development environment (Beta)”, click Turn on. Then you should get a terminal. And from the terminal you should be able to execute the app (./path-to-file --no-sandbox). Might need to install dependencies before... if you want give it a try and DM me - we can do a video call and see if we get it working together

This isn't deontology, it's lexical-threshold negative utilitarianism.

https://reducing-suffering.org/three-types-of-negative-utilitarianism/

For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)

Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.

Strongly agree with this post. I think my session at EAG Boston 2024 (audience forecasting, which was fairly group-brainstormy) was suboptimal for exactly the reasons you mentioned.

Robi Rahman🔸: 20% disagree

I think most of us should get direct work jobs, and the E2G crowd should do high-EV careers (to the extent that they're personally sustainable), even if risky.

No, that wouldn't prove moral realism at all. That would merely show that you and a bunch of aliens happen to have the same opinions.

4
Owen Cotton-Barratt
See my response to Manuel -- I don't think this is "proving moral realism", but I do think it would be pointing at something deeper and closer-to-objective than "happen to have the same opinions".
Robi Rahman🔸: 100% disagree

Morality is Objective

There's no evidence of this, and the burden of proof is on people who think it's true. I've never even heard a coherent argument in favor of this proposition without assuming god exists.

This doesn't answer the question for people who live in high-income countries and don't feel envy. Should they abstain? Should they answer about whether they would envy someone in their own position if they were less advantaged?

4
Yi-Yang
I'm capturing "vibes" here so this might be confusing... If you generally feel a lot of happiness for other EAs' advantages, then disagree-vote.  If you feel neutral or conflicted, I would abstain.  If you feel generally more envious, then agree-vote. Was I able to clarify things? 

If you're someone with an impressive background, you can answer this by asking yourself if you feel that you would be valued even without that background. Using myself as an example, I...

  1. went to a not so well-known public college
  2. worked an unimpressive job
  3. started participating in EA
  4. quit the unimpressive job, studied at fancy university
  5. worked at high-status ingroup organizations
  6. posted on the forum and got upvotes

Was I warmly accepted into EA back when my resume was much weaker than it is now? Do I think I would have gotten the same upvotes if I had posted an... (read more)

EA Forum posts have been pretty effective in changing community direction in the past, so the downside risk seems low

But giving more voting power to people with lots of karma entrenches the position/influence of people who are already high in the community based on its current direction, so it would be an obstacle to the possibility of influencing the community through forum posts.

If you think it's important for forum posts to be able to change community direction, you should be against vote power scaling with karma.

This presupposes that the way something gets to change community direction is by having high karma, while I think it's actually about being well reasoned and persuasive AND being viewed. Being high karma helps it be viewed, but this is neutral to actively negative if the post is low quality/flawed - that just entrenches people in their positions more/makes them think less of the forum. So in order for this change to help, there must be valuable posts that are low karma that would be high karma if voting was more democratic. I personally think that the current system is better at selecting for quality, and that this outweighs any penalty to dissenting opinions, which I would guess is fairly minor in practice.

9
abrahamrowe
I think my view is that while I agree in principle it could be an issue, the voting has worked this way for long enough that I'd expect more evidence of entrenching to exist. Instead, I still see controversial ideas change people's minds on the forum pretty regularly and not be downvoted to oblivion, and see low quality or bad faith posts/comments get negative karma, and I think that's the upside of the system working well.

@Ben Kuhn has a great presentation on this topic. Relatedly, nonprofits have worse names: see org name bingo

Hey! You might be interested in applying to the CTO opening at my org:

https://careers.epoch.ai/en/postings/f5f583f5-3b93-4de2-bf59-c471a6869a81

(For what it's worth, I don't think you're irrational, you're just mistaken about Scott being racist and what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don't do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don't do anything about it, they made the right call and you'd be upset due to the mistaken beliefs, but conditional on those beliefs, it wasn't irrational to be upset.)

Thanks, that's a great reason to downvote my comment and I appreciate you explaining why you did it (though it has gotten some upvotes so I wouldn't have noticed anyone downvoted except that you mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.

However, you're incorrect that those factual errors aren't relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn't be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.

5
conflictaverse
Cool, I adjusted my vote, thanks for addressing.

I think there's something to what you're saying about factual errors, but not at the level of diagnosing the problem. Instead, I'd argue that whether or not my opinion is based on factual errors[1] is more relevant to the treatment than the diagnosis.

Let's say for argument's sake that I'm totally wrong: I got freaked out by an EA influencer, I approached EA leaders, they gave me a great response, and yet here I am complaining on the EA forum about it. My claim, though, isn't that EA leaders doing something wrong leads to EA-adjacency. It's that people feeling like EA leaders have done wrong leads to EA-adjacency.

Given that what I was trying to emphasize is the cause of the behavior, whether someone having a sense of being betrayed by leadership is based on reality or a hallucination is irrelevant - it's still the explanation for why they are not acknowledging their EA connections (I am positing).

However, you are definitely correct that when strategizing how to address EA adjacency/brand issues, if that's something you want to try to do, it helps to know whether the feelings people are having are based on facts or some kind of myth. In the case of the FTX trauma, @Mjreard is pointing out that there may be a myth of some sort at play in the minds of the people doing the denying. In the case of brand confusion, I think the root cause is something in lack of clarity around how EA factions relate to each other. In the case of leadership betrayal, I'd argue it's because the people I spoke with genuinely let me down, and you might argue it's because I'm totally irrational or something :) But nevertheless, identifying the feeling I'm having is still useful to begin the conversation.

1. ^ Obviously, I don't think my opinion is based on factual errors, but that's neither here nor there.

Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.

  1. Scott Alexander isn't in EA leadership
  2. This is also extremely factually inaccurate - every clause in the part of your comment I've italicized is at least half false.
7
conflictaverse
I downvoted this comment because it's not relevant to the purpose of this conversation. I shared my personal opinion to illustrate a psychological dynamic that can occur; the fact that you disagree with me about Scott does not invalidate the pattern I was trying to illustrate (and in fact, you missed the point that I was referring to CEA staff and others I spoke with afterwards as EA leadership, not Scott).  If you think for some reason our disagreement about Scott Alexander is relevant to potential explanations for people refusing to acknowledge their relationship to EA, please explain that and I will revise my comment here.  I will acknowledge that my description is at least a little glib, but I didn't take that much time to perfect how I was describing my feelings about Scott because it wasn't relevant to my point. 

This is actually disputed. While so-called "bird watchers" and other pro-bird factions may tell you there are many birds, the rival scientific theory contends that birds aren't real.

  • Birds are the only living animals with feathers.

That's not true, you forgot about the platypus.

EA Forum April Fools post not complete without incorrect gotcha nitpick comment!

When a reward or penalty is too small, it can be less effective than no incentive at all, because it sometimes crowds out an implicit social incentive.

In the study, the daycare had a problem with parents showing up late to pick up their kids, making the daycare staff stay late to watch them. They tried to fix this problem by implementing a small fine for late pickups, but it had the opposite of the intended effect, because parents decided they were okay with paying the fine.

In this case, if you believe recruiting people to EA does a huge amount of good, you might think that it's very valuable to refer people to EAG, and there should be a big referral bounty.
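A toy model of the crowding-out mechanism described above (the numbers and the "guilt cost" framing are illustrative, not taken from the study):

```python
# Toy model of a small penalty crowding out an implicit social incentive.
# The numbers are made up; they just illustrate the direction of the effect.

def is_late(fine, guilt_cost, benefit_of_being_late=10.0):
    """A parent picks up late iff the benefit exceeds the total perceived cost."""
    return benefit_of_being_late > fine + guilt_cost

# Before the fine: no monetary penalty, but a strong social-norm cost.
print(is_late(fine=0.0, guilt_cost=20.0))   # False -> on-time pickups

# After a small fine: lateness is reframed as a cheap paid service and the
# social-norm cost largely evaporates.
print(is_late(fine=3.0, guilt_cost=0.0))    # True -> more late pickups
```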

From an altruistic cause prioritization perspective, existential risk seems to require longtermism

No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.

When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. I

... (read more)

Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities look probably more cost-effective.

By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.

I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, the links in my comment I think make the case pretty persuasively that you need longtermism.

working on AI x-risk is mostly about increasing the value of the future, because, in his view, it isn't likely to lead to extinction

Ah yes I get it now. Thanks!

2
Toby Tremlett🔹
No worries!

What is maxevas? Couldn't find anything relevant by googling.

Hope I'm not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.

[This comment is no longer endorsed by its author]
4
Toby Tremlett🔹
I think Owen is voting correctly, Robi - he disagrees that there should be more work on extinction reduction before there is more work on improving the value of the future. (To complicate this, he understands working on AI x-risk as being mostly about increasing the value of the future, because, in his view, it isn't likely to lead to extinction.) Apologies if the "agree"/"disagree" labelling is unclear - we're thinking of ways to make it more parsable.
Robi Rahman🔸: 93% agree

On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.

(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)

stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID

I'm out of the loop, who's this allegedly EA person who works at DOGE?

6
AnonymousTurtle
Many people claim that Elon Musk is an EA person; @Cole Killian has an EA Forum account and mentioned effective altruism on his (now deleted) website; and Luke Farritor won the Vesuvius Challenge mentioned in this post (he also allegedly wrote or reposted a tweet mentioning effective altruism, but I can't find any proof and people are skeptical).

The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative.

Counterpoint: some people are more price-sensitive than typical consumers, and really can't afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
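A toy illustration of the value left on the table (the willingness-to-pay figures and the cost are made up):

```python
# Toy example: total surplus with a single posted price vs. letting the
# price-sensitive buyer haggle. Willingness-to-pay and cost are made up.

cost = 30.0
willingness_to_pay = {"typical": 100.0, "price-sensitive": 40.0}

# Single posted price at the typical buyer's willingness to pay:
# only that buyer purchases.
posted_price = 100.0
surplus_posted = sum(
    (wtp - posted_price) + (posted_price - cost)   # consumer + producer surplus
    for wtp in willingness_to_pay.values()
    if wtp >= posted_price
)

# With haggling, the price-sensitive buyer negotiates down to, say, $35:
# one extra trade happens and both sides gain a little.
haggled_price = 35.0
surplus_haggled = (
    surplus_posted
    + (willingness_to_pay["price-sensitive"] - haggled_price)  # buyer gains $5
    + (haggled_price - cost)                                   # seller gains $5
)

print(f"Surplus with posted price only: {surplus_posted:.0f}")   # 70
print(f"Surplus with haggling allowed:  {surplus_haggled:.0f}")  # 80
```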

I think EA's have the mental strength to handle diverse political views well.

No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don't. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as being racist or maybe just downvoting it because of some vague right-wing connotations they have of the authors.

If you don't aim to persuade anyone else to agree with your moral framework and take action along with you, you're not doing the most good within your framework.

(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don't care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)

4
Karthik Tadepalli
I'm not sure what you're looking for. I've made it clear that I'm not here to persuade you of my position, and I'm not going to be philosophically strongarmed into doing so. I was just trying to elaborate on a view that I suspect (and upvotes suggest) is common to other people who are not persuaded by Vasco's argument.
9
Erich_Grunewald 🔸
Karthik could also believe that any attempt to persuade someone to do what Karthik believes is best, would backfire, or that it is intrinsically wrong to persuade another person to do what Karthik believes is good, if they do not already believe the thing is good anyway. Though I agree with the general thrust of your comment.

embrace of the "Meat-Eater Problem" inbuilt into both the EA Community and its core ideas

Embrace of the meat-eater problem is not built into the EA community. I'm guessing a large majority of EAs, especially the less engaged ones who don't comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.

I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.

Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?

I actually found it more persuasive that buying broilers from a reformed scenario seems to get you both a reduction in pain and a more climate-positive outcome

How did you conclude that? How are the broilers reformed to not be painful?

4
huw
I am not very familiar with the terminology, but from context clues such as: That ‘conventional scenario’ is referring to conditions a la most factory farming, and ‘reformed scenario’ is referring to more humane conditions, including free range. But there’s a good chance I just misinterpreted this? Regardless, whatever you think the reformed scenario is, it sure seems like it would be advantageous to switch your chicken consumption to it!

Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.

 

Now that the election is over, I'd love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.
