All of NunoSempere's Comments + Replies

One possible path is to find a good leader who can scalably use labour, and follow them?

8
Brad West
9d
Yeah, a lot of interventions/causes/worldviews that have power in EA will have more than adequate resources to do what they are trying to do. This is why, to some extent, "getting a job at an EA org" may not be a particularly high-EV move, because it is not clear that the counterfactual employee would be worse than you (although this reasoning is somewhat weakened by the fact that you could ostensibly free an aligned person to do other work, and so on). Lending your abilities and resources to promising causes/etc. that do not have power behind them is probably a way that someone of mediocre abilities could have high impact, perhaps much more impact than much more talented people serving well-resourced masters. Of course, the trick here would be identifying these "promising", neglected areas, especially when the lack of attention by the powers that be may be interpreted as a lack of merit.

I upvoted this offer. I have an alert for bet proposals on the forum, and this is the first genuine one I've seen in a while.

3
harfe
11d
You are right, the page does contain the phrase "Give 10% of your income each year". (Somehow Google has not picked it up, so I did not find it.) I think GWWC has made a mistake here. The text of the actual pledge does not have this constraint. Maybe @graceadams or someone else from GWWC can clarify things and fix the formulation on their website?

This seems against the wording of "Give 10% of your income each year"

3
harfe
12d
Why are you under the impression that you have to give each year? I tried to google your quoted string but could not find an exact match. As I interpret the GWWC pledge formulation, there is no condition on when you have to donate, just on how much.

It seemed suboptimal ([x] marks things I've done, [ ] marks things I should have done but have not gotten around to).

  • [x] Saving donations for years of increased need seems like a better strategy than donating a fixed amount each year
  • [ ] Investing donations and then donating with interest seems plausibly a better option
  • [x] Having a large amount of personal savings allows me to direct my efforts more effectively by choosing not to take suboptimal opportunities for money. Having "slack", freedom of action, seems very useful.
  • [x] My time funges with money
... (read more)
8
GraceAdams
12d
Thanks for sharing, Nuno! We do have members who don't donate strictly on a yearly basis, and choose to donate every couple of years when there's something quite promising to donate to. Also, donating every second year (or less frequently, depending on how much you donate) can make sense for some Americans given the tax benefits. I think that deciding when to donate (i.e. investing vs. donating now) is a difficult question, and depends a lot on your worldviews etc. My take is that if you're interested in improving the lives of people now, it's generally good to donate sooner rather than later (although maybe there's a case for waiting for a specific breakthrough where you have some special knowledge about the case for impact), but outside of global health and wellbeing, I find it much harder to know.

Yes, and also I was extra-skeptical beyond that because you were getting too little early traction.

2
Dawn Drescher
24d
Yep, makes a lot of sense!

Iirc I was skeptical but uncertain about GiveWiki/your approach specifically, and so my recommendation was to set some threshold such that you would fail fast if you didn't meet it. This still seems correct in hindsight.

2
Dawn Drescher
24d
Yep, failing fast is nice! So you were just skeptical on priors because any one new thing is unlikely to succeed?

In practice I don't think these trades happen, making my point relevant again.

My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations

I'm not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, x-risk), and each of them beats the others according to its own idiosyncratic expected value methodology. But then you assign ... (read more)

Unflattering things about the EA machine/OpenPhil-Industrial-Complex', it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, then it reads as an attack on the whole thing, which is then reduced to 'the EA machine', which seems to further reduce to OpenPhil

I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of in... (read more)

17
JWS
1mo

Hey Nuño,

I've updated my original comment, hopefully to make it more fair and reflective of the feedback you and Arepo gave.

I think we actually agree in lots of ways. I think that the 'switcheroo' you mention is problematic, and a lot of the 'EA machinery' should get better at improving its feedback loops both internally and with the community.

I think at some level we just disagree with what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for this dynamic you're pointing to, but to me that dynamic isn't EA.[1]

As for not be... (read more)

I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's What is Effective Altruism? How could it be improved? post

I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.

I see that saying I disagree with the EA Forum's "approach to life" rubbed you the wrong way. It seemed low cost, so I've changed it to something more wordy.

Hey, thanks for the comment. Indeed, something I was worried about with the later post was whether I was being a bit unhinged (but the converse is: am I afraid to point out dynamics that I think are correct?). I dealt with this by first asking friends for feedback, then posting it but distributing it not very widely; then, once I got some comments (some of them private) saying that this also corresponded to other people's impressions, I decided to share it more widely.

The examples Nuño gives...

You are picking on the weakest example. The strongest one might be... (read more)

3
JWS
1mo
Really appreciate your reply Nuno, and apologies if I've misrepresented you, or if I'm coming across as overly hostile. I'll edit my original comment given your & Arepo's responses.

I think part of why I posted my comment (even though I was nervous to) is that you're a highly valued member of the community[1], and your criticisms are listened to and carry weight. I am/was just trying to do my part to kick the tires, and distinguish criticisms I think are valid/supported from those which are less so.

On the object-level claims, I'm going to come over to your home turf (blog) and discuss it there, given you expressed a preference for it! Though if you don't think it'll be valuable for you, then by all means feel free to not engage. I think there are actually lots of points where we agree (at least directionally), so I hope it may be productive, or at least useful for you if I can provide good/constructive criticism.

1. ^ I very much value you and your work, even if I disagree

Over the last few years, the EA Forum has taken a few turns that have annoyed me:

  • It has become heavier and slower to load
  • It has added bells and whistles, and shiny notifications that annoy me
  • It hasn't made space for disagreeable people I think would have a lot to add. Maybe they had a bad day, and instead of working with them, the forum banned them.
  • It has added banners, recommended posts, pinned posts, newsletter banners, etc., meaning that new posts are harder to find and get less attention.
    • To me, getting positive, genuine exchanges in the forum as I was
... (read more)

Just as a piece of context, the EA Forum now has ~8x more active users than it had at the beginning of those few years. I think it's uncertain how good growth of this type is, but it's clear that the forum development had a large effect in (probably) the intended direction of the people who run the forum, and it seems weird to do an analysis of the costs and benefits of the EA Forum without acknowledging this very central fact.

(Data: https://data.centreforeffectivealtruism.org/) 

I don't have data readily available for the pre-CEA EA Forum ... (read more)

The counterfactual value of Alice is typically calculated as the value if Alice didn't exist or didn't participate. If both Alice and Bob are necessary for a project, the counterfactual value of each is the total value of the project.

I agree that you can calculate conditionals in other ways (like with Shapley values), and that in that case you get more meaningful answers.
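For concreteness, here is a minimal Python sketch of the two-person case described above; the $100 project value and the value function are hypothetical, just to make the double-counting visible:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Compute Shapley values for a coalition value function v: frozenset -> number."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # weight = |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))
        values[p] = total
    return values

# Hypothetical project worth 100 that requires both Alice and Bob.
def v(coalition):
    return 100 if {"Alice", "Bob"} <= set(coalition) else 0

players = ["Alice", "Bob"]

# Counterfactual value of each: v(everyone) - v(everyone minus them) = 100,
# so "counterfactual impact" sums to 200 for a project worth 100.
for p in players:
    print(p, "counterfactual:", v(frozenset(players)) - v(frozenset(players) - {p}))

# Shapley values split the credit: 50 each, summing to the project's total value.
print(shapley_values(players, v))
```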

0
David van Beveren
2mo
Neat— thanks!

I really liked it

This is an understatement. At the time, I thought they were the best teachers I'd ever had, the course majorly influenced my perspective in life, they've provided useful background knowledge, etc.

I have a review of two courses within it here. I really liked it. Given your economics major, though, my sense is that you might find some of the initial courses too basic. That said, they should be free online, so you might as well listen to the first/a random lecture to see if you are getting something out of it.


Someone reminded me that I have an admonymous. If some of y'all feel like leaving some anonymous feedback, I'd love to get it and you can do so here: https://admonymous.co/loki

No, 3% is "chance of success". After adding a bunch of multipliers, it comes to about 0.6% reduction in existential risk over the next century, for $8B to $20B.

3
Mo Putera
3mo
2 nitpicks that end up arguing in favor of your high-level point:

  • 2.7% (which you're rounding up to 3%) is the chance of having an effect, and 70% x 2.7% = 1.9% is the chance of a positive effect ('success' by your wording)
  • your Squiggle calc doesn't include the CCM's 'intervention backfiring' part of the calc

I happen to disagree with these numbers because I think that numbers for effectiveness of x-risk projects are too low. E.g., for the "Small-scale AI Misalignment Project": "we expect that it reduces absolute existential risk by a factor between 0.000001 and 0.000055", these seem like many zeroes to me.

Ditto for the "AI Misalignment Megaproject": $8B+ expenditure to only have a 3% chance of success (?!), plus some other misc discounting factors. Seems like you could do better with $8B.
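To make the implied cost-effectiveness explicit, here is a rough sketch using only the figures quoted in this thread (the 70% x 2.7% decomposition above and the $8B–$20B for ~0.6 percentage points of risk reduction); the rest is unit conversion:

```python
# Chance of having an effect, and chance that the effect is positive ("success").
p_effect = 0.027
p_positive_given_effect = 0.70
print(f"chance of positive effect: {p_effect * p_positive_given_effect:.1%}")  # ~1.9%

# Quoted bottom line: ~0.6 percentage points of absolute existential risk
# reduction over the next century, for $8B to $20B of spending.
risk_reduction_bp = 0.6 * 100  # 1 percentage point = 100 basis points

for cost in (8e9, 20e9):
    print(f"${cost / 1e9:.0f}B -> ${cost / risk_reduction_bp / 1e6:.0f}M per basis point of x-risk")
```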

3
Derek Shiller
3mo
I think we're somewhat bearish on the ability of money by itself to solve problems. The technical issues around alignment appear quite challenging, especially given the pace of development, so it isn't clear that any amount of money will be able to solve them. If the issues are too easy on the other hand, then your investment of money is unlikely to be needed and so your expenditure isn't going to reduce extinction risk. Even if the technical issues are in the goldilocks spot of being solvable but not trivially so, the political challenges around getting those solutions adopted seem extremely daunting.

There is a lot we don't explicitly specify in these parameter settings: if the money is coming from a random billionaire unaffiliated with the AI scene, then it might be harder to get expertise and buy-in than if it is coming from insiders or the federal government.

All that said, it is plausible to me that we should have a somewhat higher chance of having an impact coupled with a lower chance of a positive outcome. A few billion dollars is likely to shake things up even if the outcome isn't what we hoped for.
1
MichaelStJules
3mo
Is that 3% an absolute percentage point reduction in risk? If so, that doesn't seem very low if your baseline risk estimate is low, like 5-20%, or you're as pessimistic about aligning AI as MIRI is.

In case it's of interest, you can see some similar algebraic manipulations here: https://git.nunosempere.com/personal/squiggle.c/src/branch/master/squiggle_more.c#L165, as well as some explanations of how to get a normal from its 95% confidence interval here: https://git.nunosempere.com/personal/squiggle.c/src/branch/master/squiggle.c#L73.
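As a minimal sketch of the second manipulation mentioned above (the standard formula, not a transcription of the linked C code): a normal is pinned down by the midpoint of a symmetric 95% interval and the corresponding z-score.

```python
# z such that P(-z < Z < z) = 0.95 for a standard normal
Z_95 = 1.959963984540054

def normal_from_95_ci(low: float, high: float) -> tuple[float, float]:
    """Return (mean, standard deviation) of the normal whose central 95% interval is [low, high]."""
    mean = (low + high) / 2
    sd = (high - low) / (2 * Z_95)
    return mean, sd

# Example: a 95% CI of [1, 5] corresponds roughly to Normal(3, 1.02).
print(normal_from_95_ci(1, 5))
```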

4
Vasco Grilo
3mo
Thanks for sharing, Nuño! Relatedly, I wrote about how to determine distribution parameters from quantiles.

Manifund funding went to... LTFF

This is explained by LTFF/Open Philanthropy doing the imho misguided matching. This has the effect of diverting funding from other places for no clear gain. A lump sum would have been a better option

2
Joel Becker
4mo
Fair enough, I agree.

To elaborate a bit on the offer in case other people search the forum for printing to pdfs: this happens to be a pet issue. See here for a way to compile a document like this to a pdf like this one. I am very keen on the method. However, it requires people to be on Linux, which is a nontrivial difficulty. Hence the offer.

I have extracted top questions to here: https://github.com/NunoSempere/clarivoyance/blob/master/list/top-questions.md with the Linux command at the top of the page. Hope this is helpful enough.

2
PeterSlattery
4mo
Thank you.

You might want to use this alternative frontend: https://forum.nunosempere.com . Also happy to produce a nicely formatted one if you tell me which one it is.

0
aaron_mai
4mo
Oh cool, thanks!

Nice, do you have your costs and staff numbers for 2021?

In 2021, we spent $6.9m and ended the year with 29 staff. This is not an apples-to-apples comparison, because those staff include five members of what was then the CEA ops team, and is now the EV Ops team, so the more direct comparison is with 24 staff at that time.

You can see on our dashboard some of the ways our programs have changed since 2021 (three in-person EAG events compared to one, nine EAGx events compared to zero, etc).

one of the BOTECs was about the forum

I don't think you can justify a $2M/year expenditure with an $11k/year BOTEC ($38/hour * 6 hours/week * 52 weeks), because I think that the correct level at which expenditure in the forum should be considered marginal is closer to $1M/year than $10k/year.
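Spelling out that arithmetic (the $38/hour, 6 hours/week figures are from the BOTEC as quoted; the ~$2M/year figure is the online-team cost cited elsewhere in this thread):

```python
# The BOTEC's implied annual value
hourly_value = 38        # $/hour
hours_per_week = 6
weeks_per_year = 52
botec_value = hourly_value * hours_per_week * weeks_per_year
print(f"BOTEC value: ${botec_value:,}/year")  # $11,856/year, i.e. ~$11k

# Rough scale of the expenditure being justified
online_team_budget = 2_000_000  # ~$2M/year
print(f"the BOTEC covers ~{botec_value / online_team_budget:.1%} of that spend")
```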

Yeah, good catch, my argument has a bunch of unstated assumptions.

I think I'm saying something with an additional twist, which is: because I think that the marginal value of forum funding is so low, I think the correct move is to not support CEA at all.

Consider CEA as having (numbers here are arbitrary) a core of $15M in valuable projects and $20M in "cruft": projects that made sense when there was unlimited FTX money around, but not so much now. Open Phil, seeing this, reduces funding from $35M/year to $30M to force CEA to cull some of that cruft.

In respons... (read more)

CEA’s spending in 2023 is substantially lower than in 2022: down by $4.8 - 5.8 million.

The graph below shows our budget as it stood early in the year, reflecting our pre-FTX plans, and compares that to how our plans and spending have evolved as we’ve adapted to the new funding environment. This has happened during an Interim period in which we’ve tried where possible not to make hard-to-reverse changes that constrain the options of a new CEO.

We currently have the same number of Core staff that we did at the end of 2022 (37), but staff costs are a relativel... (read more)

2
Ben_West
5mo
Thanks for the clarification, but I'm still not sure I understand. I think your argument is:

1. CEA has projects that are worth funding (say, arbitrarily, our comms team)
2. Additionally, we have projects that are not worth funding (in particular: the Forum)
3. However, to make the case for marginal funding stronger, we are presenting the stuff that's worth funding as "marginal", and stating that the stuff that's not worth funding isn't "marginal".

Is that correct? If so, I'm confused, because the Forum is included in the list of marginal projects, which seems to violate (3). Maybe alternatively you are saying:

3'. To make the case for marginal funding stronger we are presenting BOTECs about projects other than the Forum

But again this doesn't seem right to me, because one of the BOTECs was about the Forum.

we have been able to produce results supporting the impact potential of PIBBSS’ core epistemic bet

Can you say more? For example, this reflection doesn't link to research results.

1
Dušan D. Nešić (Dushan)
2mo
Finally out, our 2023 retrospective! https://forum.effectivealtruism.org/posts/izWpWJRoqXPLoqSv9/retrospective-pibbss-fellowship-2023 (Apologies, I don't know how to do links on mobile) I know it's too late for the ball, but my completionist mind needed to close this open question. This reflection does include research output, and even a bit of retrospective on what alumni from 2022 did.

challenging her to bet on her success

Note that if she bets on her success and wins, she can extract money from the doubters, in a way which she couldn't if the doubters restricted themselves to mere talk. The reverse is also true, though.

Therefore, I expect marginal funding that we raise from other donors (i.e. you) to most likely go to the following:

  • Community Building Grants [...] $110,000
  • Travel grants for EA conference attendees [...] $295,000
  • EA Forum [...] [Nuño: note no mention of the cost in the EA forum paragraph]

You don't mention the cost of the EA forum, but per this comment, which gives more details, and per your own table, the "online team", of which the EA Forum was a large part, was spending ~$2M per year.

As such I think that your BOTECs are uninformative and might be "hid... (read more)

> As such I think that your BOTECs are uninformative and might be "hiding the ask"

Thanks for the comment. Just to clarify right away: the Forum doesn’t have $2M room for more funding (it’s not the case that a huge portion of marginal donations would go to the Forum).

Responding in more detail:

I'm not sure I'm interpreting you correctly, but I think you are saying something like:

  1. I (Ben) give three examples of where marginal funding is likely to go
  2. The first two of these total ~$400k, whereas the total cost of the forum is ~$2M
  3. Therefore, we should expe
... (read more)

...if the point is to equalize consumption

There isn't any one point; rather, I'm pointing out that if you make these adjustments, you create a bunch of incentives:

  • Working at EA organizations which offer cost of living adjustments becomes more attractive to people who need them, and less attractive to nomads or internationals
  • It perpetuates the impetus behind living in extremely high cost of living places, rather than coordinating the community to, gradually, move somewhere cheaper
  • I in fact don't think that equalizing consumption is a good move, given that co
... (read more)

Re: Last point. The hiring manager can/would/does take into account the cost/benefit of the location/specific candidate when deciding which offers to make. It’s an all things considered decision.

4
Larks
5mo
If you think people in high cost of living areas are more productive on average (agglomeration effects, and their presence there is a signal that they were productive enough at their prior employment to justify the location) and their BATNA is higher (because there are many local good employers competing for them), then CoL adjustments function as a noisy proxy for justified supply/demand curve shifts.
6
Jason
5mo
I think many of those points have force, which is part of why I generally favor only partial COL adjustment. I tentatively agree with you that orgs should generally consider the actual cost of employing each individual when making a hiring decision, such that the candidate in a lower-COL location will usually have an advantage.[1]

The implied model in my head is that the prospective employee will accept a certain amount of sacrifice, but not more than that. A fully unadjusted salary expects candidates with sticky ties to a higher-COL location to sacrifice more, either by breaking those ties or by accepting a much deeper drop in standard of living / consumption than candidates in other locations. It will lead to many of those candidates opting out. Given the current distribution of EAs, unadjusted salaries would take many candidates off the table, and in many cases that would be pretty problematic.

I'm thinking of an analogy to non-EA nonprofit salaries. As people have pointed out elsewhere, often these are set at a level where only those who rely on a wealthier partner or parent to provide the financial support are able to stay in those positions long-term. That's a shrewd strategy for non-EA nonprofits if their applicant pool is deep, and having the third-best rather than the best candidate in a position doesn't make a huge difference. I'm not convinced that is presently the situation in most EA orgs. The org could, of course, pay salaries high enough to keep SF-based candidates in the pool no matter where the employee was located. But its ROI for doing so does not seem high.

I think many of those considerations are created by an individual's pre/non-EA life. My sense of where feels like "home" is influenced by where I grew up. If one of my goals is for my son to have a deep relationship with his grandparents, the grandparents are located where they are and the costs of semi-frequent travel are what they are. These considerations are not "create[d]" by the cand
-23
Arturo Macias
5mo

Depending on the person's location, we adjust 50% of the base salary by relative cost-of-living as a starting point, and make ~annual adjustments to account for factors like inflation and location-based cost-of-living changes.

I've seen this elsewhere and I'm not convinced. It subsidizes people living in areas with higher cost of living, which doesn't seem like an unalloyed good. Theoretically, it seems like it would be more parsimonious to give people a salary and let them spend it as they choose, which could include luxury goods like rent in expensive places but wouldn't be limited to them.

I'd tend to agree with that if potential employees came with no / limited location history. For instance, I would be more open to this system for hiring new graduates than for hiring mid-career professionals. 

While the availability of true 100% remote, location-flexible jobs has blossomed in the last few years, those jobs still are very much in the minority and were particularly non-existent for those of us who started our careers 10-15 years ago. We acted in reliance on the then-dominant nature of work, in which more desirable careers with greater sa... (read more)

1
Ryan Greenblatt
5mo
This seems like it might be a good price discrimination strategy though I'm not sure if that's the intent.

FWIW, I am amenable to being commissioned to do impact evaluation. Interested parties should contact me here.

2
Peter McIntyre
6mo
Thanks, very kind of you to say!


I've been having discussions around adjacent topics with my (much more lefty, less privileged) partner. Some thoughts, on the callous end:

Answer #1: Work with the system. Find some way for poorer and richer people to both gain from working together. This probably looks like commerce, like trade that both parties benefit from, and like indoctrinating/helping the members of your network acquire the skills and stances to be more "productive members of society".

  • Offer richer people something they value in return. Richer people are likely to have less time for
... (read more)

You might be thinking of this GPI paper:

given sufficient background uncertainty about the choiceworthiness of one’s options, many expectation-maximizing gambles that do not stochastically dominate their alternatives ‘in a vacuum’ become stochastically dominant in virtue of that background uncertainty

It has the point that with sufficient background uncertainty, you will end up maximizing expectation (i.e., you will maximize EV if you avoid stochastically dominated actions). But it doesn't have the point that you would add worldview diversification.

I am curious about whether you might consider abandoning worldview diversification, aiming to have parsimonious exchange rates between your cause areas, having more frequent rebalancings, etc.

In a sense, increasing your bar for global health means that you are already doing some of this, and your commitment to worldview diversification seems much watered down?

This isn’t to say that the Forum can claim 100% counterfactual value for every interaction that happens in this space

This isn't a convincing lens of analysis to me, as these two things can both be true at the same time:

  • The EA Forum as a whole is very valuable
  • The marginal $1.8M spent on it isn't that valuable

i.e., you don't seem to be thinking on the margin.

because of how averages work

I think that with a strong prior, you should conclude that RP's research would be incorrect at representing your values.

Possibly, but what would your prior be based on to warrant being that strong?

This answer seems very diplomatically phrased, and also compatible with many different probabilities for a question like: "in the next 10 years, will any nuclear capable states (wikipedia list to save some people a search) cease to be so"

  • Does the conclusion flip if you don't value 30 shrimps/shrimp moments the same as a human?  
  • It might be more meaningful to present your results as a function, e.g., if you value shrimps and chicken at xyz, then the overall value is negative/positive 
  • Particularly in uncertain domains, it might have been worth it to consider uncertainty explicitly, and RP does give confidence intervals.
  • The sign of the conclusion would be the same (though significantly weaker) even if you ignore shrimp entirely, provided all other assumptions are held constant. That said, the final numbers are indeed quite sensitive to the moral weights, particularly those of chickens, shrimp, and fish as the most abundant nonhumans.
  • I agree re: the value of both a function-based version that would allow folks to put in their own weights/assumptions, and a version that explicitly considers uncertainty. I don't have plans to build these out myself, but might reconsider if there's sufficient interest, and in any case would be happy to support someone else in doing so. 
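As a minimal illustration of what such a function-based version could look like (all moral weights, population counts, and per-capita welfare figures below are hypothetical placeholders, not RP's or the post's numbers; the point is only that readers could plug in their own weights and see whether the sign flips):

```python
def total_welfare(moral_weights, populations, welfare_per_capita):
    """Sum of population * per-capita welfare * moral weight across species.

    The sign of the result is the headline conclusion.
    """
    return sum(
        populations[s] * welfare_per_capita[s] * moral_weights[s]
        for s in populations
    )

# Hypothetical inputs, for illustration only.
populations = {"humans": 8e9, "chickens": 25e9, "shrimp": 400e9}
welfare_per_capita = {"humans": 1.0, "chickens": -0.5, "shrimp": -0.01}

# Two example weightings (relative to humans = 1): the sign flips between them.
weightings = {
    "weights A": {"humans": 1.0, "chickens": 0.8, "shrimp": 0.1},
    "weights B": {"humans": 1.0, "chickens": 0.01, "shrimp": 0.0},
}
for label, weights in weightings.items():
    total = total_welfare(weights, populations, welfare_per_capita)
    print(f"{label}: total welfare is {'negative' if total < 0 else 'positive'} ({total:.2e})")
```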

1/2-2/3 of people to already have sunscreen in their group and likely using their own

Yeah, good point; in the back of my mind I would have been inclined to model this not as the sunscreen going to those who don't have it, but as it having some chance of going to people who would otherwise have had their own.

1
Ralf Kinkel
7mo
True, I think I'll change the 30,50,20 (would be sunburned, would have gotten it elsewhere, would have stayed in shadow) to 20,60,20 (would be sunburned, used own or gotten elsewhere, stayed in shadow).  

Nice!

Two comments:

  • Sunburn risk without shared sunscreen seems a bit too high; do 30% of people at such concerts get sunburnt?
  • I recently got a sunburn, and I was thinking about the DALY weight. A DALY improvement of 0.1 would mean preferring the experience of 9 days without a sunburn over 10 days with a sunburn, which seems... ¿reasonable? But also something confuses me here.
3
Ralf Kinkel
7mo
Thanks for the feedback! About the comments:

1. I skipped over a factor there: I expect 1/2-2/3 of people to already have sunscreen in their group and to likely use their own instead of a random one given by a stranger (probably also of somewhat lower quality, as we bought a generic brand for this calculation). But whoever does not use the sunscreen is also not part of the cost calculation. The 30% is the sunburn risk of those who would use the shared sunscreen. If my assumption that 1/2-2/3 have sunscreen in their group is correct, and 30% of those who do not have their own get a sunburn, you would expect 10-15% of the people who were there early with you in the queue to have a sunburn at the end of such an event. Lots of people also come later in the evening, which further decreases the fraction of people with sunburn you'd see.
2. I might be a bit off here, but the 0.035 for lower back pain seems a bit low. I think most people offered to have another day in the month but with lower back pain the whole month would decline. On the other hand, I'm pretty sure that most people with lower back pain would take the trade the other way. I also found this for GBD 2017, which would be a mean DW across the varieties of low back pain of around 0.11. The 0.035 would seem like a reasonable weight for the median "back pain", but not for the mean including more severe cases, which is closer to my association with the term "low back pain".

Table in the link:
  • Low back pain without leg pain, mild: 41%, 0.02 (0.01–0.04)
  • Low back pain without leg pain, moderate: 35%, 0.05 (0.04–0.08)
  • Low back pain without leg pain, severe: 10%, 0.27 (0.18–0.37)
  • Low back pain without leg pain, most severe: 14%, 0.37 (0.25–0.51)
4
Lorenzo Buonanno
7mo
Initially I thought this was unreasonably high, since e.g. lower back pain has a disability weight of ~0.035. But if we try an estimate based on GiveWell valuing 37 DALYs as much as 116 consumption doublings, preventing the loss of 0.1 DALYs would be equivalent to a ~24% increase in consumption for 1 year. Daily, it would mean ~$20 for a person making $30k/year. This seems surprisingly reasonable for sunburn, given that I don't think these numbers are meant to be used this way. I wonder if this equivalence of ~24% income per 0.1 disability weight is totally off (as many of these things are clearly non-linear), or can be used for similar estimates.
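A quick check of the arithmetic in the comment above (the 37 DALYs ≈ 116 consumption doublings exchange rate and the $30k/year income are the figures cited there):

```python
# GiveWell exchange rate cited above: 37 DALYs valued as much as 116 consumption doublings.
doublings_per_daly = 116 / 37

dalys_prevented = 0.1                      # e.g. a 0.1 disability weight avoided for one year
doublings = dalys_prevented * doublings_per_daly
consumption_increase = 2 ** doublings - 1  # one doubling = +100% consumption for a year
print(f"equivalent to a {consumption_increase:.1%} consumption increase for 1 year")  # ~24%

income = 30_000                            # $/year, the example income in the comment
print(f"~${income * consumption_increase / 365:.0f} per day over that year")  # ~$20/day
```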

There isn't actually any public grant saying that Open Phil funded Anthropic

I was looking into this topic, and found this source:

Anthropic has raised a $124 million Series A led by Stripe co-founder Jaan Tallinn, with participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research and Eric Schmidt. The company is a developer of AI systems.

Speculating, conditional on the pitchbook data being correct, I don't think that Moskovitz funded Anthropic because of his object-level beliefs about their value or because they're such good pals, r... (read more)
