I upvoted this offer. I have an alert for bet proposals on the forum, and this is the first genuine one I've seen in a while.
It seemed suboptimal ([x] marks things I've done, [ ] marks things I should have done but have not gotten around to).
Yes, and also I was extra skeptical beyond that because you were getting too little early traction.
Iirc I was skeptical but uncertain about GiveWiki/your approach specifically, and so my recommendation was to set some threshold such that you would fail fast if you didn't meet it. This still seems correct in hindsight.
In practice I don't think these trades happen, making my point relevant again.
My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations
I'm not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, x-risk), and each of them beats the others according to its own idiosyncratic expected value methodology. But then you assign ...
Unflattering things about the EA machine/OpenPhil-Industrial-Complex', it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to 'the EA machine', which seems to further reduce to OpenPhil
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of in...
Hey Nuño,
I've updated my original comment, hopefully to make it more fair and reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. I think that the 'switcheroo' you mention is problematic, and a lot of the 'EA machinery' should get better at improving its feedback loops both internally and with the community.
I think at some level we just disagree with what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for this dynamic you're pointing to, but to me that dynamic isn't EA.[1]
As for not be...
I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's "What is Effective Altruism? How could it be improved?" post
I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.
I see that saying I disagree with the EA Forum's "approach to life" rubbed you the wrong way. It seemed low cost, so I've changed it to something wordier.
Hey, thanks for the comment. Indeed, something I was worried about with the later post was whether I was being a bit unhinged (but the converse worry is: am I afraid to point out dynamics that I think are real?). I dealt with this by first asking friends for feedback, then posting it but not distributing it very widely, and then, once I got some comments (some of them private) saying that this also corresponded to other people's impressions, I decided to share it more widely.
The examples Nuño gives...
You are picking on the weakest example. The strongest one might be...
Over the last few years, the EA Forum has taken a few turns that have annoyed me:
Just as a piece of context, the EA Forum now has ~8x more active users than it had at the beginning of those few years. I think it's uncertain how good growth of this type is, but it's clear that the forum's development had a large effect in (probably) the intended direction of the people who run it, and it seems weird to do an analysis of the costs and benefits of the EA Forum without acknowledging this very central fact.
(Data: https://data.centreforeffectivealtruism.org/)
I don't have data readily available for the pre-CEA EA Forum ...
The counterfactual value of Alice is typically calculated as the difference between the total value and the value if Alice didn't exist or didn't participate. If both Alice and Bob are necessary for a project, the counterfactual value of each is the total value of the project.
I agree that you can calculate these contributions in other ways (like with Shapley values), and that in that case you get more meaningful answers.
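For concreteness, a minimal sketch of the difference, for a two-person project where both Alice and Bob are necessary (the values are made up):

```c
#include <stdio.h>

int main(void) {
    // Characteristic function for a toy two-player game: neither Alice nor
    // Bob alone produces anything; together they produce a project worth 10.
    double v_none = 0, v_alice = 0, v_bob = 0, v_both = 10;

    // Counterfactual value: total value minus the value without that person.
    double counterfactual_alice = v_both - v_bob;   // 10
    double counterfactual_bob   = v_both - v_alice; // 10

    // Shapley value: average marginal contribution over the two orderings.
    double shapley_alice = 0.5 * ((v_alice - v_none) + (v_both - v_bob));   // 5
    double shapley_bob   = 0.5 * ((v_bob   - v_none) + (v_both - v_alice)); // 5

    printf("Counterfactual values sum to %.0f (double-counts the project)\n",
           counterfactual_alice + counterfactual_bob);
    printf("Shapley values sum to %.0f (exactly the project's value)\n",
           shapley_alice + shapley_bob);
    return 0;
}
```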
In case it's of interest, I've pointed my light EA Forum client to it as well; you can see it @ https://animals.nunosempere.com/frontpage
I really liked it
This is an understatement. At the time, I thought they were the best teachers I'd ever had, the course majorly influenced my perspective on life, they've provided useful background knowledge, etc.
I have a review of two courses within it here. I really liked it. Given your economics major, though, my sense is that you might find some of the initial courses too basic. That said, they should be free online, so you might as well listen to the first/a random lecture to see if you are getting something out of it.
Someone reminded me that I have an admonymous. If some of y'all feel like leaving some anonymous feedback, I'd love to get it and you can do so here: https://admonymous.co/loki
No, 3% is "chance of success". After adding a bunch of multipliers, it comes to about 0.6% reduction in existential risk over the next century, for $8B to $20B.
I happen to disagree with these numbers because I think that the numbers given for the effectiveness of x-risk projects are too low. E.g., for the "Small-scale AI Misalignment Project": "we expect that it reduces absolute existential risk by a factor between 0.000001 and 0.000055"; these seem like many zeroes to me.
Ditto for the "AI Misalignment Megaproject": $8B+ expenditure to only have a 3% chance of success (?!), plus some other misc discounting factors. Seems like you could do better with $8B.
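For a rough sense of scale, here is my own back-of-the-envelope arithmetic, using only the numbers quoted above:

```c
#include <stdio.h>

int main(void) {
    // Numbers quoted above for the "AI Misalignment Megaproject".
    double p_success       = 0.03;   // 3% chance of success
    double xrisk_reduction = 0.006;  // ~0.6% absolute x-risk reduction
    double cost_low = 8e9, cost_high = 20e9; // $8B to $20B

    // Combined effect of the other multipliers implied by those two figures.
    printf("Implied other multipliers: ~%.2f\n", xrisk_reduction / p_success);

    // Implied cost per percentage point of existential risk averted.
    printf("Cost per percentage point: ~$%.0fB to ~$%.0fB\n",
           cost_low  / xrisk_reduction * 0.01 / 1e9,
           cost_high / xrisk_reduction * 0.01 / 1e9);
    return 0;
}
```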
In case it's of interest, you can see some similar algebraic manipulations here: https://git.nunosempere.com/personal/squiggle.c/src/branch/master/squiggle_more.c#L165, as well as some explanations of how to get a normal from its 95% confidence interval here: https://git.nunosempere.com/personal/squiggle.c/src/branch/master/squiggle.c#L73.
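That's not the library's code, just a sketch of the basic manipulation, assuming a symmetric two-sided 95% interval:

```c
#include <stdio.h>

// Recover the parameters of a normal from a symmetric 95% confidence interval.
// 1.96 is roughly the 97.5th percentile of a standard normal, so the interval
// spans about 2 * 1.96 standard deviations.
void normal_from_95_ci(double low, double high, double* mean, double* std) {
    *mean = (low + high) / 2.0;
    *std  = (high - low) / (2.0 * 1.959964);
}

int main(void) {
    double mean, std;
    normal_from_95_ci(-1.959964, 1.959964, &mean, &std);
    printf("mean: %f, std: %f\n", mean, std); // 0 and 1, as expected
    return 0;
}
```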
Manifund funding went to... LTFF
This is explained by LTFF/Open Philanthropy doing the imho misguided matching. This has the effect of diverting funding from other places for no clear gain. A lump sum would have been a better option
To elaborate a bit on the offer, in case other people search the forum for printing to pdfs: this happens to be a pet issue. See here for a way to compile a document like this into a pdf like this one. I am very keen on the method. However, it requires people to be on Linux, which is a nontrivial difficulty. Hence the offer.
I have extracted top questions to here: https://github.com/NunoSempere/clarivoyance/blob/master/list/top-questions.md with the Linux command at the top of the page. Hope this is helpful enough.
You might want to use this alternative frontend: https://forum.nunosempere.com . Also happy to produce a nicely formatted one if you tell me which one it is.
In 2021, we spent $6.9m and ended the year with 29 staff. This is not an apples-to-apples comparison, because those staff include five members of what was then the CEA ops team, and is now the EV Ops team, so the more direct comparison is with 24 staff at that time.
You can see on our dashboard some of the ways our programs have changed since 2021 (three in-person EAG events compared to one, nine EAGx events compared to zero, etc).
one of the BOTECs was about the forum
I don't think you can justify a $2M/year expenditure with an $11k/year BOTEC ($38/hour * 6 hours/week * 52 weeks), because I think that the margin at which forum expenditure should be evaluated is closer to $1M/year than to $10k/year.
Yeah, good catch, my argument has a bunch of unstated assumptions.
I think I'm saying something with an additional twist, which is: because I think that the marginal value of forum funding is so low, I think the correct move is to not support CEA at all.
Consider CEA as having (numbers here are arbitrary) a core of $15M in valuable projects and $20M in "cruft": projects that made sense when there was unlimited FTX money around, but not so much now. Open Phil, seeing this, reduces funding from $35M/year to $30M/year, to force CEA to cull some of that cruft.
In respons...
CEA’s spending in 2023 is substantially lower than in 2022: down by $4.8 - 5.8 million.
The graph below shows our budget as it stood early in the year, reflecting our pre-FTX plans, and compares that to how our plans and spending have evolved as we’ve adapted to the new funding environment. This has happened during an Interim period in which we’ve tried where possible not to make hard-to-reverse changes that constrain the options of a new CEO.
We currently have the same number of Core staff that we did at the end of 2022 (37), but staff costs are a relativel...
we have been able to produce results supporting the impact potential of PIBBSS’ core epistemic bet
Can you say more? For example, this reflection doesn't link to research results.
challenging her to bet on her success
Note that if she bets on her success and wins, she can extract money from the doubters, in a way she couldn't if the doubters restricted themselves to mere talk. The reverse is also true, though.
Therefore, I expect marginal funding that we raise from other donors (i.e. you) to most likely go to the following:
- Community Building Grants [...] $110,000
- Travel grants for EA conference attendees [...] $295,000
- EA Forum [...] [Nuño: note no mention of the cost in the EA forum paragraph]
You don't mention the cost of the EA forum, but per this comment, which gives more details, and per your own table, the "online team", of which the EA Forum was a large part, was spending ~$2M per year.
As such I think that your BOTECs are uninformative and might be "hid...
> As such I think that your BOTECs are uninformative and might be "hiding the ask"
Thanks for the comment. Just to clarify right away: the Forum doesn’t have $2M room for more funding (it’s not the case that a huge portion of marginal donations would go to the Forum).
Responding in more detail:
I'm not sure I'm interpreting you correctly, but I think you are saying something like:
...if the point is to equalize consumption
There isn't any one point; rather, I'm pointing out that if you make these adjustments, you create a bunch of incentives:
Re: Last point. The hiring manager can/would/does take into account the cost/benefit of the location/specific candidate when deciding which offers to make. It’s an all things considered decision.
Depending on the person's location, we adjust 50% of the base salary by relative cost-of-living as a starting point, and make ~annual adjustments to account for factors like inflation and location-based cost-of-living changes.
I've seen this elsewhere and I'm not convinced. It subsidizes people living in areas with a higher cost of living, which doesn't seem like an unalloyed good. Theoretically, it seems like it would be more parsimonious to give people a salary and let them spend it as they choose, which could include luxury goods like rent in expensive places but wouldn't be limited to that.
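To make the incentive concrete, here is a sketch with made-up numbers and my reading of the 50% adjustment described above:

```c
#include <stdio.h>

// My reading of the policy quoted above: half the base salary is fixed, and
// half is scaled by the person's cost of living relative to some baseline.
double adjusted_salary(double base, double col_ratio) {
    return 0.5 * base + 0.5 * base * col_ratio;
}

int main(void) {
    double base = 60000; // made-up base salary
    // Someone in a city 1.4x as expensive as the baseline vs. one 0.7x as expensive.
    printf("Expensive city: $%.0f\n", adjusted_salary(base, 1.4)); // $72,000
    printf("Cheap city:     $%.0f\n", adjusted_salary(base, 0.7)); // $51,000
    // The $21k gap is what subsidizes choosing the expensive location.
    return 0;
}
```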
I'd tend to agree with that if potential employees came with no / limited location history. For instance, I would be more open to this system for hiring new graduates than for hiring mid-career professionals.
While the availability of true 100% remote, location-flexible jobs has blossomed in the last few years, those jobs still are very much in the minority and were particularly non-existent for those of us who started our careers 10-15 years ago. We acted in reliance on the then-dominant nature of work, in which more desirable careers with greater sa...
I've been having discussions around adjacent topics with my (much more lefty, less privileged) partner. Some thoughts, on the callous end:
Answer #1: Work with the system. Find some way for poorer and richer people to both gain from working together. This probably looks like commerce, like trade that both parties benefit from, and like indoctrinating/helping the members of your network acquire the skills and stances to be more "productive members of society".
You might be thinking of this GPI paper:
given sufficient background uncertainty about the choiceworthiness of one’s options, many expectation-maximizing gambles that do not stochastically dominate their alternatives ‘in a vacuum’ become stochastically dominant in virtue of that background uncertainty
It has the point that with sufficient background uncertainty, you will end up maximizing expectation (i.e., merely avoiding stochastically dominated actions will push you towards maximizing EV). But it doesn't make the point that you should add worldview diversification.
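To illustrate the mechanism, here is a toy Monte Carlo sketch (the background distribution and payoffs are made up; the paper's own conditions are more careful):

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 100000

// Standard normal via Box-Muller.
double randn(void) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

// Wide, heavy-tailed, two-sided background noise; a made-up stand-in for
// background uncertainty about the overall value of the world.
double background(void) {
    double sign = (rand() % 2) ? 1.0 : -1.0;
    return sign * exp(3.0 * randn());
}

int cmp(const void* a, const void* b) {
    double x = *(const double*)a, y = *(const double*)b;
    return (x > y) - (x < y);
}

// Fraction of empirical quantiles at which b is at least as large as a;
// 1.0 would mean b first-order stochastically dominates a within the sample.
double fraction_dominant(double* a, double* b) {
    qsort(a, N, sizeof(double), cmp);
    qsort(b, N, sizeof(double), cmp);
    int count = 0;
    for (int i = 0; i < N; i++)
        if (b[i] >= a[i]) count++;
    return (double)count / N;
}

int main(void) {
    static double a_bare[N], b_bare[N], a_bg[N], b_bg[N];
    for (int i = 0; i < N; i++) {
        double safe   = 1.0;                       // option A: +1 for sure
        double gamble = (rand() % 2) ? 10.0 : 0.0; // option B: +10 with p = 0.5
        a_bare[i] = safe;
        b_bare[i] = gamble;
        a_bg[i] = background() + safe;
        b_bg[i] = background() + gamble;
    }
    // In a vacuum, the higher-EV gamble does not dominate the sure thing;
    // with wide background uncertainty added to both, it (nearly) does.
    printf("In a vacuum:     %.2f\n", fraction_dominant(a_bare, b_bare));
    printf("With background: %.2f\n", fraction_dominant(a_bg, b_bg));
    return 0;
}
```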
I am curious about whether you might consider abandoning worldview diversification, aiming to have parsimonious exchange rates between your cause areas, having more frequent rebalancings, etc.
In a sense, increasing your bar for global health means that you are already doing some of this, and your commitment to worldview diversification seems much watered down?
This isn’t to say that the Forum can claim 100% counterfactual value for every interaction that happens in this space
This isn't a convincing line of analysis to me, as these two things can both be true at the same time:
i.e., you don't seem to be thinking on the margin.
This answer seems very diplomatically phrased, and also compatible with many different probabilities for a question like: "in the next 10 years, will any nuclear-capable states (Wikipedia list, to save some people a search) cease to be so?"
1/2-2/3 of people to already have sunscreen in their group and likely using their own
Yeah, good point; in the back of my mind I would have been inclined to model this not as the sunscreen going to those who don't have it, but as it having some chance of going to people who would otherwise have had their own.
Nice!
Two comments:
There isn't actually any public grant saying that Open Phil funded Anthropic
I was looking into this topic, and found this source:
Anthropic has raised a $124 million Series A led by Skype co-founder Jaan Tallinn, with participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research and Eric Schmidt. The company is a developer of AI systems.
Speculating, conditional on the pitchbook data being correct, I don't think that Moskovitz funded Anthropic because of his object-level beliefs about their value or because they're such good pals, r...
One possible path is to find a good leader who can scalably use labour, and follow him?