I’d be very surprised if you can’t get a job that pays much more than the sub teacher role; the gap between that and ~any EA org job is massive, and inability to get the latter is only very weak evidence of inability to earn more.
Sorry if I missed this, but this does depend a lot on location/willingness to move. The above assumes you’re in the US and willing to move cities.
Also, living frugally to donate more is of course very virtuous if you take your salary to be a given, but from an altruistic perspective, insofar as they trade off, it’s probably much ...
Random sorta gimmicky AI safety community building idea: tabling at universities but with a couple of laptops signed into Claude Pro with different accounts. Encourage students (and profs) to try giving it some hard question from eg a problem set and see how it performs. Ideally have a big monitor for onlookers to easily see.
Most college students are probably still using ChatGPT-3.5, if they use LLMs at all. There’s a big delta now between that and the frontier.
I have a vague fear that this doesn't do well on the 'try not to have the main net effect be AI hypebuilding' heuristic.
I made a custom GPT that is just normal, fully functional ChatGPT-4, but I will donate any revenue this generates[1] to effective charities.
Presenting: Donation Printer
OpenAI is rolling out monetization for custom GPTs:
Builders can earn based on GPT usage
In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.
This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.
Yeah, but my (implicit, should have made explicit lol) question is “why is this the case?”
Like at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (eg older field -> organizations are better developed) but they’d all be post hoc.
Interesting that the Animal Welfare Fund gives out so few small grants relative to the Infrastructure and Long Term Future funds (Global Health and Development has only given out 20 grants, all very large, so seems to be a more fundamentally different type of thing(?)). Data here.
A few stats:
Proportions under $threshold...
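For anyone who wants to reproduce this kind of breakdown from the funds’ public grants data, a minimal sketch (the amounts here are illustrative placeholders, not the real grant figures):

```python
# Sketch: fraction of a fund's grants falling under various dollar thresholds.
def proportion_under(amounts, threshold):
    """Fraction of grants strictly below `threshold` dollars."""
    return sum(a < threshold for a in amounts) / len(amounts)

# Illustrative grant sizes only - pull the real numbers from the funds' pages.
amounts = [5_000, 12_000, 30_000, 75_000, 250_000, 1_200_000]
for t in (10_000, 50_000, 100_000):
    print(f"Under ${t:,}: {proportion_under(amounts, t):.0%}")
```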
In their most straightforward form (“foundation models”), language models are a technology which naturally scales to something in the vicinity of human-level (because it’s about emulating human outputs), not one that naturally shoots way past human-level performance
- i.e. it is a mistake-in-principle to imagine projecting out the GPT-2—GPT-3—GPT-4 capability trend into the far-superhuman range
Surprised to see no pushback on this yet. I do not think this is true; I've come around to thinking that Eliezer is basically right that the limit of next token predict...
Sorry, I think you're reading me as saying something like "language models scaled naively up don't do anything superhuman"? Whereas I'm trying to say something more like "language models scaled naively up break the trend line in the vicinity of human level, because the basic mechanism for improved capabilities that they had been using stops working, so they need to use other mechanisms (which probably move a bit slower)".
If you disagree with that unpacking, I'm interested to hear it. If you agree with the unpacking and think that I've done a bad job summar...
For others considering whether/where to donate: RP is my current best guess of "single best charity to donate to all things considered (on the margin - say up to $1M)."
FWIW I have a manifold market for this (which is just one source of evidence - not something I purely defer to. Also I bet in the market so grain of salt etc).
Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.
FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.
I also made these interactive plots which summarise all EA funding:
[On mobile; sorry for the formatting]
Given my quick read and especially the bit below, it seems like the title is at least a bit misleading.
Quote: “To be clear: this document is not a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which...
LessWrong has a new feature/type of post called "Dialogues". I'm pretty excited to use it, and hope that if it seems usable, reader friendly, and generally good the EA Forum will eventually adopt it as well.
I'm interested in supporting this financially (that sounds like something a rich person would say so I should clarify this would not be a ton of money lol) and possibly in other ways as well (e.g., helping set up a website)
At least some chance of a less terrible death later, no? I'm really not sure what the distribution of causes of death looks like for different types of wild animal hosts
New fish data with estimated individuals killed per country/year/species (super unreliable, read below if you're gonna use!)
That^ is too big for Google Sheets, so here's the same thing just without a breakdown by country that you should be able to open easily if you want to take a look.
Basically, the UN data generally used for tracking/analyzing the amount of fish and other marine life captured/farmed and killed only tracks the total weight captured for a given country-year-species (or group of species).
I had chatGPT-4 provide estimated lo...
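The core conversion step is just dividing total capture weight by an estimated mean individual weight per species. A sketch (the per-individual weights below are made-up placeholders, not the estimates actually used):

```python
# Sketch of the weight-to-count conversion described above.
# Mean individual weights (kg) are illustrative placeholders only.
mean_weight_kg = {"anchovy": 0.02, "skipjack tuna": 2.5}

def estimated_individuals(total_tonnes, species):
    """Convert total capture weight (tonnes) to an estimated individual count."""
    return total_tonnes * 1000 / mean_weight_kg[species]

print(estimated_individuals(1, "anchovy"))  # 1 tonne of anchovy -> 50,000 fish
```

This is also why the numbers are "super unreliable": the uncertainty in mean individual weight passes straight through to the count.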
Good point, and I'll throw out The Humane League as one specific recipient of money.
Farmed animal welfare is politically controversial in a way that GiveWell is not. This is potentially bad:
Is OpenPhil's current support of farmed animal welfare politically controversial? I don't get that sense but, if so, among who?
Maybe people who don't care about farmed animals are correct
Sure but same goes for literally everything, including eg AMF being net positive. Happy to discuss object level though.
...Farmed animal advocacy is so cost-effective because, if succ
I’ve argued this largely on Twitter, but it seems pretty clear to me that no marginal dollars at all, at least up to say $1B, should in fact be going to the GiveWell portfolio (or similar charities for that matter). I don’t think it’s obvious what the alternative should be, but do think that (virtually) no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option.
I feel like this/adjacent debates often gets framed as “normal poverty stuff vs weird longtermist stuff” but a lot of my confidence i...
Thanks for pointing that out, Aaron!
I feel like this/adjacent debates often gets framed as “normal poverty stuff vs weird longtermist stuff” but a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR.
I do not agree with the "any plausibly relevant criteria" part. However, I do think the best interventions to help farmed animals increase welfare way more cost-effectively than GiveWell's top charities. Some examples illustrating this:
What specifically in farmed animal welfare do you think beats GiveWell? (GiveWell is a specific thing you can actually donate money to; "farmed animal welfare" is not)
Farmed animal welfare is politically controversial in a way that GiveWell is not. This is potentially bad:
- Maybe people who don't care about farmed animals are correct
- Farmed animal advocacy is so cost-effective because, if successful, it forces other people (meat consumers? meat producers?) to bear the costs of treating animals better. I'm less comfortable spending other people's money to ...
a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR
Well some people might have ethical views or moral weights that are extremely favourable to people-focused interventions.
Or people could really value certainty of impact, and the evidence base could lead them to be much more confident that marginal donations to GiveWell charities have a counterfactual impact than marginal donations to animal welfare advocacy orgs.
FWIW I'm more likely to donate to ani...
[Epistemic status: unsure how much I believe each response but more pushing back against that "no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option."]
According to Kevin Esvelt on the recent 80,000 Hours podcast (excellent btw, mostly on biosecurity), eliminating the New World screwworm could be an important intervention for farmed animal welfare (it infects livestock), global health (it infects humans), development (it hurts economies), science/innovation, and most notably, quasi-longtermist wild animal suffering.
More, if you think there’s a non-trivial chance of human disempowerment, societal collapse, or human extinction in the next 10 years, this would be important to do ASAP because we may...
EAG(x)s should have a lower acceptance bar. I find it very hard to believe that accepting the marginal rejectee would be bad on net.
Are you factoring in that CEA pays a few hundred bucks per attendee? I'd have a high-ish bar to pay that much for someone to go to a conference myself. Altho I don't have a good sense of what the marginal attendee/rejectee looks like.
How right now is "right now"? Like would giving $100 literally this moment be worth $105 given in a week? A month?
Just looking for something super approximate, especially a rough time horizon where $1 now ≈ $1 then
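For a sense of scale, the "$100 now vs. $105 in a week" framing implies an extremely aggressive annualized discount rate. Quick arithmetic, assuming weekly compounding:

```python
# Implied annualized growth if $100 now trades for $105 in one week,
# compounded over 52 weeks. Pure illustration of the arithmetic.
weekly_ratio = 105 / 100
annual_ratio = weekly_ratio ** 52
print(f"{annual_ratio:.1f}x per year")  # roughly 12.6x
```

So "a week" and "a month" give wildly different answers, which is why a rough time horizon where $1 now ≈ $1 then is the useful summary statistic.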
Somewhere in languagespace, there should be a combination of ~50-200 words that 1) successfully convinces >30% of people that Wild Animal Welfare is really important, and then 2) they realize that the society they grew up in is confused, ill, and deranged. A superintelligence could generate this.
I don't think this is true, at least taking "convinces" to mean something more substantial than, say, marking the box for "yeah WAS is important" on a survey given immediately after reading.
It's not at all obvious to me that marginal carbon actually cashes out as bad even in expectation.
Eh I'm not actually sure how bad this would be. Of course it could be overdone, but a post's author is its obvious best advocate, and a simple "I think this deserves more attention" vote doesn't seem necessarily illegitimate to me
I think the proxy question is “after what period of time is it reasonable to assume that any work building or expanding on the post would have been published?” and my intuition here is about 1 year but would be interested in hearing others thoughts
I went ahead and made an "Evergreen" tag as proposed in my quick take from a while back:
Meant to highlight that a relatively old post (perhaps 1 year or older?) still provides object level value to read i.e., above and beyond:
- Its value as a cultural or historical artifact
- The value of more recent work it influenced or inspired
What are some questions you hope someone’s gonna ask that seem relatively unlikely to get asked organically?
Bonus: what are the answers to those questions?
I feel a lot of cluelessness right now about how to work out cross-cause comparisons and what decision procedures to use. Luckily we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.
In the meantime, I currently am pretty focused on mitigating AI risk due to what I perceive as both an urgent and large threat, even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI r...
Idea/suggestion: an "Evergreen" tag, for old (6 months? 1 year? 3 years?) posts (comments?), to indicate that they're still worth reading (to me, ideally for their intended value/arguments rather than as instructive historical/cultural artifacts)
As an example, I'd highlight Log Scales of Pleasure and Pain, which is just about 4 years old now.
I know I could just create a tag, and maybe I will, but want to hear reactions and maybe generate common knowledge.
Thanks! Let me write them as a loss function in python (ha)
For real though:
I'm pretty happy with how this "Where should I donate, under my values?" Manifold market has been turning out. Of course all the usual caveats pertaining to basically-fake "prediction" markets apply, but given the selection effects of who spends mana on an esoteric market like this, I put non-trivial weight on the (live) outcomes.
I guess I'd encourage people with a bit more money to donate to do something similar (or I guess defer, if you think I'm right about ethics!), if just as one addition to your portfolio of donation-informing considerations.
Even given no electricity, copies stored physically in e.g. a flash drive or hard drive would persist until electricity could be supplied, I'm almost certain
Just chiming in to say I have a similar situation, although less extreme. Was vegan for 4 years and eventually concluded it wasn’t sustainable or realistic for me. Main animal products I buy are grass fed beef, grass fed whey protein, eggs from brands that at least go to decent lengths to make themselves seem non-horrible (3rd party humane certified, outdoor access) and a bit of conventional dairy (cheese, butter). I’d be lying if I said I’ve never bought anything “worse” than those, though.
I’ve definitely thought about this and short answer: depends on who “we” is.
A sort of made up particular case I was imagining is “New Zealand is fine, everywhere else totally destroyed” because I think it targets the general class of situation most in need of action (I can justify this on its own terms but I’ll leave it for now)
In that world, there’s a lot of information that doesn't get lost: everything stored in the laptops and servers/datacenters of New Zealand (although one big caveat and the reason I abandoned the website is that I lost confidence tha...
I have only a vague idea what this means but yeah, whatever facilitates access/storage. Is there anything I should do?
It’s actually been a little while since I made it, but the idea was to pick places most likely to both (1) not be direct targets of a nuclear attack and (2) be uncorrelated with the fates of the major datacenters plausibly holding the information currently
I tried making a shortform -> Twitter bot (ie tweet each new top level ~quick take~) and long story short it stopped working and wasn't great to begin with.
I feel like this is the kind of thing someone else might be able to do relatively easily. If so, I and I think much of EA Twitter would appreciate it very much! In case it's helpful for this, a quick takes RSS feed is at https://ea.greaterwrong.com/shortform?format=rss
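For anyone tempted to try: the ingestion half is just parsing that RSS feed. A minimal sketch using the stdlib (the posting-to-Twitter step, which needs API credentials, is omitted; the sample feed below is made up):

```python
import xml.etree.ElementTree as ET

def latest_items(rss_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# The real bot would fetch https://ea.greaterwrong.com/shortform?format=rss
# with urllib, diff the links against ones already seen, and tweet the new ones.
sample = """<rss version="2.0"><channel>
  <item><title>Example quick take</title><link>https://example.org/1</link></item>
</channel></rss>"""
print(latest_items(sample))
```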
Seems like the forces that turn people crazy are the same ones that lead people to do anything good and interesting at all. At least for EA, a core function of orgs/elites/high status community members is to make the kind of signaling you describe highly correlated with actually doing good. Of course it seems impossible to make them correlate perfectly, and that’s why settings with super high social optimization pressure (like FTX) are gonna be bad regardless.
But (again for EA specifically) I suspect the forces you describe would actually be good to increas...
Hypothesis: from the perspective of currently living humans and those who will be born in the current <4% growth regime only (i.e. pre-AGI takeoff or, I guess, stagnation), donations currently earmarked for large scale GHW, GiveWell-type interventions should be invested (maybe in tech/AI correlated securities) instead, with the intent of being deployed for the same general category of beneficiaries in <25 (maybe even <1) years.
The arguments are similar to those for old school "patient philanthropy" except now in particular seems exceptionally uncerta...
I'm skeptical of this take. If you think sufficiently transformative + aligned AI is likely in the next <25 years, then from the perspective of currently living humans and those who will be born in the current <4% growth regime, surviving until transformative AI arrives would be a huge priority. Under that view, you should aim to deploy resources as fast as possible to lifesaving interventions rather than sitting on them.
Made a podcast feed with EAG talks. Now has both the recent Bay Area and London ones:
Full vids on the CEA Youtube page
Not OP but here are some "user problems" either I have or am pretty sure a bunch of people have:
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.
I think that, ~literally excepting Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the implied weirdness model.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment - it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’
Note: this sounds like it was written by chatGPT because it basically was (from a recorded ramble)🤷
I believe the Forum could benefit from a Shorterform page, as the current Shortform forum, intended to be a more casual and relaxed alternative to main posts, still seems to maintain high standards. This is likely due to the impressive competence of contributors who often submit detailed and well-thought-out content. While some entries are just a few well-written sentences, others resemble blog posts in length and depth.
As such, I find myself hesitant...
Thanks for bringing our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded...but I managed to mess up and didn't capture the incoming audio (i.e. everything Nathan said) 😢
Guess I'll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I'd ideally like but 🤷
...Donations and Consistency in Effective Altruism
I believe that effective altruists should genuinely strive to practice effective altruism
Automated interface between Twitter and the Forum (eg a bot that, when tagged on twitter, posts the text and image of a tweet on Quick Takes and vice versa)