All of Ben Pace's Comments + Replies

Remuneration In Effective Altruism

The charity sector famously has lower salaries because the work is more intrinsically rewarding than regular corporate fare.

I thought it was because there's no profit to be made doing the work.

EAs should use Signal instead of Facebook Messenger

Nah, I am regularly wildly un-careful in my speech, so moving to Signal is a major benefit precisely for me.

Agree on UI though: the first time people text me I don't know who they are, and I have no photos for most of my contacts.

EAs should use Signal instead of Facebook Messenger

Happy to get behind this, I am always down to move to Signal. You can reach me there at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code). (Please identify yourself when you text me.)

EAs should use Signal instead of Facebook Messenger

Pretty sure non-zero people have tried, my guess is the question is "how competent of an attacker and how much effort do they put into it".

Transcript of a talk on The non-identity problem by Derek Parfit at EAGxOxford 2016

It's nice to see this again <3 

I asked Parfit to give this talk at that EAGxOxford, a conference Jacob Lagerros and I were the lead organizers of [edit: I see James Aung, who was on the team too, posted this!]. It was one of the last talks of his life. I remember writing him an email about what talk to give, and he wrote a very long Word document back as an attachment. He was a very careful thinker.

Also I remember a pretty endearing interaction between him and Anders Sandberg, where Anders pretended to be a fan and got Parfit to sign a copy of his book. (It was a joke because Anders and Parfit were former roommates and good friends.)

4Gavin2mo
In the Q&A after this talk, Sandberg asked "What is the moral relevance of Apple laptops booting half a second slower?" (since on Parfit's simple view of aggregation, with millions of devices, this is equivalent to a massive loss of life). I always thought Parfit was being rude by ignoring the question, but your comment makes it seem more like joshing.
On Deference and Yudkowsky's AI Risk Estimates

I think chapter 4, The Kinetics of an Intelligence Explosion, has a lot of terms and arguments from EY's posts in the FOOM Debate. (I've been surprised by this in the past, thinking Bostrom invented the terms, then finding things like resource overhangs getting explicitly defined in the FOOM Debate.)

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Yeah, well, I haven't thought about this case much, so maybe there's some good counterargument, but I think of personal attacks as "this person's hair looks ugly" or "this person isn't fun at parties", not "this person is not strong in an area of the job that I think is key". Professional criticism seems quite different from personal attacks, and I hold different norms around how appropriate it is to bring up in public contexts.

Sure, being professionally criticized is a challenge and can easily be unpleasant, but it's not irrelevant or off-topic, and it can easily be quite valuable and important.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Hi, can you give an example of a speculative personal attack in the post that you're referring to?

7Jeff Kaufman2mo
How about: I read this as a formal and softened way of saying "Chloe made avoidably bad grants because she wouldn't do the math". Different people will interpret the softening differently: it can come across either as "hey maybe this could have been a piece of what happened?" or "this is totally what I think happened, but if I say it bluntly that would be rude".
4Aaron Gertler2mo
Thanks for this feedback. The horizontal scroll is a matter of having long email addresses on those pages, and I'll clean that up after checking with the page owners. Agree with info density dropping on the grants page — I think there's an easy improvement or two to be made here (e.g. removing the "Learn More" arrow), which I'll be aiming to make as the new site owner (with input from others at OP).
Announcing the launch of Open Phil's new website

Habryka left a lot of the relevant comments. My main positive is the separation of blog posts and research reports; I think that is likely pretty helpful when looking just for the high-effort research. My main negative was the decrease in information density on the grants page, a page I used to check regularly for a few years of my life. Comparing on iPad right now with the Wayback Machine, I used to see 8 grants on a page, but now I only see 2, so a 4x reduction.

2Ben Pace2mo
Feedback: the following page had about 1-2 letters' width of horizontal scroll when I loaded it on iPad: https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-summer-fellowships/ Added: this page too: https://www.openphilanthropy.org/open-philanthropy-course-development-grants/
Results from the First Decade Review

Took me a while to find where you got your 2x+y from; I see it's visible if you highlight the cells in the sheet.

Here's a sheet with the score as sorted by the top 1k people, which is what I was interested in seeing: https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing
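(For concreteness, here is a minimal sketch of this kind of weighted re-scoring. Only the 2x+y form is visible in the sheet; treating x and y as two separate vote tallies per post, and the re-sort, are my assumptions:)

```python
# Hypothetical reconstruction: assume each post has two vote tallies,
# x and y, combined as 2x + y (only the 2x + y form is visible in the sheet).
posts = [
    {"title": "Post A", "x": 10, "y": 4},
    {"title": "Post B", "x": 6, "y": 15},
]

for post in posts:
    post["score"] = 2 * post["x"] + post["y"]

# Re-sort by the combined score, highest first.
for post in sorted(posts, key=lambda p: p["score"], reverse=True):
    print(post["title"], post["score"])
```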

Results from the First Decade Review

Feedback: I tried and failed on my phone to read the voting results ranked by how people voted. I don’t know what weighting is used in the spreadsheet, so the ordering feels monkeyed-with.

2Charles He3mo
Can you write a bit more about what you mean? What voting results? Why would it be obvious that you could back this out? I don’t remember the details, but I remember thinking the quadratic voting formula seemed sort of “underdetermined” and left room for “post processing”. I read this as the “designer” not being confident and leaving room to get well-behaved results (as opposed to schemes of outright manipulation).
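(For reference, a minimal sketch of textbook quadratic voting, where casting v votes on one item costs v² points out of a fixed budget. Whether the Decade Review used exactly this cost function is an assumption here, and the cross-voter aggregation step is precisely the under-specified "post processing" being discussed:)

```python
import math

# Textbook quadratic voting (an assumption; the Decade Review's exact
# formula may differ): casting v votes on one item costs v**2 points.
def cost(votes: int) -> int:
    return votes ** 2

def max_votes(budget: int) -> int:
    # Largest v such that v**2 <= budget.
    return math.isqrt(budget)

budget = 100
print(cost(3))            # 9 points buys 3 votes on one item
print(max_votes(budget))  # at most 10 votes if spent on a single item

# The raw per-voter tallies still have to be aggregated and normalized
# across voters somehow; that step is where a designer has freedom.
```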
Why CEA Online doesn’t outsource more work to non-EA freelancers

(Someone told me this comment read as hostile to them; FYI I thought it was a funny series of thoughts that I had, no hostility meant at all!)

3Charles He3mo
I didn't think this was hostile at all.
2Ben_West3mo
FWIW I didn't interpret it as hostile, though I did change the title to make it clearer that I'm not suggesting CEA change.
2Linch3mo
I also did not interpret it as hostile fwiw (though I'm not Ben/CEA)
Why CEA Online doesn’t outsource more work to non-EA freelancers

I saw this title and assumed someone was making a public criticism of CEA.

Then I saw it was written by a present CEA staff member. 

And I thought "Wow, creative way to get changes made at your organization." :D

4Ben Pace3mo
(Someone told me this comment read as hostile to them; FYI I thought it was a funny series of thoughts that I had, no hostility meant at all!)
A retroactive grant for creating the HPMoR audiobook (Eneasz Brodski)?

If I were Thomas Kwa right now I would be offering Eneasz $10,000 for 5% of his impact certificate for making the HPMOR podcast.

1Jack R3mo
Ha!
Is working for Meta a good or bad option?

Ah, this is the true meta trap for EAs.

I burnt out at EAG. Let's talk about it.

Woop, thank you for true but contrary datapoints.

I burnt out at EAG. Let's talk about it.

I had three on my first day and then was emotionally done. I remember thinking "to all other people, I can either cry with joy at what you say, or cry in frustration, but no other responses are available right now".

It involved (for me) gearing up a ton of context and interest in one person, finding something critical to say with them, and then they were gone and it was happening again.

I mean, maybe we were all just being dumb and should handle it better. I also wonder if there's some natural way for event organizers to be like "there are set break periods where we stop 1-1s from being booked" or something, though probably that's a bad solution and there's a better one.

I'll just say from the other side that at EAGxOxford I had a lot of 1-1s and didn't find it stressful; I'm really extroverted and get a lot of energy from things like this. It's not that I never need a break or want to escape, but the burnout thing is less common for me.

Neil Buddy Shah has been appointed CEO of the Clinton Health Access Initiative

After the mixup with CLR and CLTR, I can't believe there are also now two CHAIs that will sometimes be discussed on the EA Forum.

At least these ones involve very different cause areas, so it should be obvious from context which is meant (as contrasted with two organisations that work on long-term risk where AI risk is a focus).

Also, have some pity for the Partnership on AI and the Global Partnership on AI. 

FTX/CEA - show us your numbers!

Well, you don’t have to be any more, because now it’s Jessica McCurdy’s reply.

3Jack Lewars4mo
Indeed - and to be clear, I wasn't trying to suggest that you shouldn't have made the comment - just that it's very secondary to the substance of the post, and so I was hoping the meat of the discussion would provoke the most engagement.
FTX/CEA - show us your numbers!

To be clear I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply anyone can do this for whatever reason they like.

FTX/CEA - show us your numbers!

Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.

I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.

Candidly, I'm a bit dismayed that the top voted comment on this post is about clickbait.

2acylhalide4mo
See also: https://forum.effectivealtruism.org/posts/W8ii8DyTa5jn8By7H/the-vultures-are-circling?commentId=hJquFY8nfbMj8wmHc#8xDZANBXTCS9wodw5

To be clear I think this instance is a fairly okay request to make as a post title, but I don’t want the reasoning to imply anyone can do this for whatever reason they like.

The Vultures Are Circling

I remember hearing that the money was just for the person and I felt alarmed, thinking that so many random people in my year at school would've worked their asses off to get $50k — it's more than my household earned in a year. 

Sydney told me scholarships like this are much more common in the US; I then updated towards it only being paid out against college fees, which is way more reasonable. But I guess this is kind of ambiguous still? It does seem like two radically different products.

Update from Open Philanthropy’s Longtermist EA Movement-Building team

Thanks! The core thing I'm hearing you say is that the scholarships are the sort of thing you wouldn't fund on a cost-effectiveness metric while 80k is, but that moving to a time-effectiveness metric changes this, so the scholarships are now competitive.

No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some), it's more about what metrics we hold ourselves to and how that leads us to prioritize.  

I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advisi... (read more)

Update from Open Philanthropy’s Longtermist EA Movement-Building team

Great post.

I didn't quite parse this paragraph:

For example, when we fund e.g. 80,000 Hours, we (amongst other activities) support their full-time advisors to advise interested people about how to have more impactful careers. With our scholarship programs, we’re also trying to cause people to spend more time on more impactful activities. But rather than do this via the 80k advisors, our scholarship programs use money “directly” (without much intermediating EA labor) to try to make impactful careers more accessible and attractive. In general, we think we get

... (read more)

Hm yeah, I can see how this was confusing, sorry!

I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of ded... (read more)

My personal reading of the post is that they think the scholarship decisions don't take up a lot of time, relative to 80k advisory stuff.

Introducing 80k After Hours

This is excellent branding.

New EA Cause Area: Run Blackwell's Bookstore

Beaten to the punch by a big established player! Grr, I'll not forget this one, Waterstones. Someday I'll have my own publishing company and you'll rue the day you bought Blackwell's out from under me...

New EA Cause Area: Run Blackwell's Bookstore

Your very own Swiss-army coal-mine! It can also be used as a hidden lair for secret planning, a well-heated winter home, and if you make a couple of changes to your strength training, a place to turn your personal exercise/workouts into valuable coal that you can sell for money.

New EA Cause Area: Run Blackwell's Bookstore

Wait — what use do you have in mind for a coal-mine?

You can reduce carbon emissions by ceasing mining, in a nuclear war you could hide in it, and in a post-apocalyptic world it would provide a good source of energy.

New EA Cause Area: Run Blackwell's Bookstore

Added a note just below the epistemic status.

New EA Cause Area: Run Blackwell's Bookstore

You're welcome.

Re hours: maybe? Personally I only imagine that being true for someone who's worked in this sort of retail before. If you haven't, and expect to do a good job, then I reckon you'll be scrambling to get oriented and execute for at least several months, if not the first year. Especially so if it's a business in decline and you're working to pull it out of that decline.

5Josh Jacobson6mo
Strong disagree (I’d probably go so far as to say that my prior on someone who plans to work 60 hours per week on this is lower than my prior on someone intending to work 40 hours), but it doesn't seem worth debating.
New EA Cause Area: Run Blackwell's Bookstore

+1. SSC argued that there was not enough money in politics, and I wonder to what extent the same argument applies to academic publishers. How much would it cost to buy top journals in every field? How much would it take to buy Nature, or Science?

SSC argued that there was not enough money in politics

To be clear, SSC argued that there was surprisingly little money in politics. The article explicitly says "I don’t want more money in politics".

6finm6mo
Noting that this is a question I'm also interested in
6MichaelPlant6mo
Indeed. No idea on the numbers, but my hunch would be buying (some part of) Elsevier, or one of the other academic publishers, would be more cost-effective than buying a coalmine - another bold-but-maybe-not-actually-crazy megaproject.
New EA Cause Area: Run Blackwell's Bookstore

Yeah, this is the most likely reason not to go ahead. Someone else suggested Blackwell's would have signed some legal agreement not to publish further, which would be a pretty severe obstacle.

I'm interested to understand why the publishing house is valued at 10x the bookstore; I don't know why the book-publishers would make 10x-50x what the book-sellers do.

FTX EA Fellowships

Both forms say "This form can only be viewed by users in the owner's organisation."

3FTX Foundation10mo
fixed now hopefully?
The LessWrong Team is now Lightcone Infrastructure, come work with us!

We've discussed the consultancies a fair bit in the team; I'd love to have consultants at the Bay Area Lightcone Office who can do high-quality lit reviews or help make websites or whatever else there's demand for amongst the members.

I've not read the other post, sounds interesting.

Buck's Shortform

Something I imagined while reading this was being part of a strangely massive (~1000 person) extended family whose goal was to increase the net wealth of the family. I think it would be natural to join one of the family businesses, it would be natural to make your own startup, and also it would be somewhat natural to provide services for the family that aren't directly about making the money yourself. Helping make connections, find housing, etc.

3capybaralet1y
Reminds me of The House of Saud (although I'm not saying they have this goal, or any shared goal): "The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family's wealth measure their net worth at $1.4 trillion" https://en.wikipedia.org/wiki/House_of_Saud
EA Infrastructure Fund: May 2021 grant recommendations

Yeah, I think you understand me better now.

And btw, I think if there are particular grants that seem out of scope for a fund, it seems totally reasonable to ask them for their reasoning and update pos/neg on them depending on whether the reasoning checks out. And it's also generally good to question the reasoning of a grant that doesn't make sense to you.

EA Infrastructure Fund: May 2021 grant recommendations

Though it still does seem to me like those two grants are probably better fits for LTFF.

But this line is what I am disagreeing with. I'm saying there's a binary of "within scope" or not; beyond that, it's up to the fund to fund what they think is best according to their judgment about EA Infrastructure or the Long-Term Future or whatever. Do you think that the EAIF should be able to tell the LTFF to fund a project because the EAIF thinks it's worthwhile for EA Infrastructure, instead of using the EAIF's money? Alternatively, if the EAIF thinks someth... (read more)

3MichaelA1y
Ah, this is a good point, and I think I understand where you're coming from better now. Your first comment made me think you were contesting the idea that the funds should each have a "scope" at all. But now I see it's just that you think the scopes will sometimes overlap, and that in those cases the grant should be able to be evaluated and funded by any fund it's within-scope for, without consideration of which fund it's more centrally within scope for. Right?

I think that sounds right to me, and I think that that argument + re-reading that "Fund Scope" section have together made it so that I think that EAIF granting to CLTR and Jakob Lohmar just actually makes sense. I.e., I think I've now changed my mind and become less confused about those decisions.

Though I still think it would probably make sense for Fund A to refer an application to Fund B if the project seems more centrally in-scope for Fund B, and let Fund B evaluate it first. Then if Fund B declines, Fund A could do their own evaluation and (if they want) fund the project, though perhaps somewhat updating negatively based on the info that Fund B declined funding. (Maybe this is roughly how it already works. And also I haven't thought about this until writing this comment, so maybe there are strong arguments against this approach.)

(Again, I feel I should state explicitly - to avoid anyone taking this as criticism of CLTR or Jakob - that the issue was never that I thought CLTR or Jakob just shouldn't get funding; it was just about clarity over what the EAIF would do.)
EA Infrastructure Fund: May 2021 grant recommendations

Yeah, that's a good point that donors who don't look at the grants (or know the individuals on the team much) will be confused if the fund does things outside its stated purpose (e.g. donations to GiveDirectly, or a random science grant that just sounds cool); that sounds right. But I guess all of these grants seem to me fairly within the purview of EA Infrastructure?

The one-line description of the fund says:

The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their

... (read more)
5MichaelA1y
Yeah, good point that these grants do seem to all fit that one-line description. That said, I think that probably most or all grants from all 4 EA Funds would fit that description - I think that that one-line description should probably be changed to make it clearer what's distinctive about the Infrastructure Fund. (I acknowledge I've now switched from kind-of disagreeing with you to kind-of disagreeing with that part of how the EAIF present themselves.)

I think the rest of the "Fund Scope" section helps clarify the distinctive scope. Re-reading that, I now think Giving Green clearly does fit under EAIF's scope ("Raise funds or otherwise support other highly-effective projects"). And it seems a bit clearer why the CLTR and Jakob Lohmar grants might fit, since I think they partly target the 1st, 3rd, and 4th of those things. Though it still does seem to me like those two grants are probably better fits for LTFF.

And I also think "Conduct research into prioritizing [...] within different cause areas" seems like a better fit for the relevant cause area. E.g., research about TAI timelines or the number of shrimp there are in the world should pretty clearly be under the scope of the LTFF and AWF, respectively, rather than EAIF. (So that's another place where I've accidentally slipped into providing feedback on that fund page rather than disagreeing with you specifically.)
EA Infrastructure Fund: May 2021 grant recommendations

The inclusion of things on this list that might be better suited to other funds (e.g the LTFF) without an explanation of why they are being funded from the Infrastructure Fund makes me slightly less likely in future to give directly to the  Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).

I think that different funders have different tastes, and if you endorse their tastes you should consider giving to them. I don't really see a case for splitting responsibilities... (read more)

5weeatquince1y
Tl;dr: I was to date judging the funds by the cause area rather than the fund managers' tastes, and this has left me a bit surprised. I think in future I will judge more based on the fund managers' tastes.

Thank you Ben – I agree with all of this.

Maybe I was just confused by the fund scope. The fund scope is broad and that is good. The webpage [https://funds.effectivealtruism.org/funds/ea-community] says the scope includes: "Raise funds or otherwise support other highly-effective projects", which basically means everything! And I do think it needs to be broad – for example to support EAs bringing EA ideas into new cause areas. But maybe in my mind I had classed it as something like "EA meta", or as "everything that is EA aligned that would not be better covered by one of the other 3 funds", or similar. But maybe that was me reading too much into things and the scope is just "anything and everything that is EA aligned".

It is not bad that it has a broader scope than I had realised, and maybe the fault is mine, but I guess my reaction to seeing that the scope is different from what I had realised is to take a step back and reconsider if my giving to date is going where I expect. To date I have been judging the EAIF as the easy option when I am not sure where to give, and have been judging the fund mostly by the cause area it gives to. I think taking a step back will likely involve spending an hour or two going through all of the things given in recent fund rounds and thinking about how much I agree with each one, then deciding if I think the EAIF is the best place for me to give, or if I think I can do better giving to one of the existing EA meta orgs that takes donations. (Probably I should have been doing this already, so maybe a good nudge.)

Does that make sense / answer your query?

– –

If the EAIF had a slightly more well-defined, narrower scope, that could make givers slightly more confident in where their funds will go, but it has a cost in terms of admin time and flexibility.
2[comment deleted]1y

I find this perspective (and its upvotes) pretty confusing, because:

  • I'm pretty confident that the majority of EA Funds donors choose which fund to donate to based far more on the cause area than the fund managers' tastes
    • And I think this really makes sense; it's a better idea to invest time in forming views about cause areas than in forming views about specifically the funding tastes of Buck, Michelle, Max, Ben, and Jonas, and then also the fund management teams for the other 3 funds.
  • The EA Funds pages also focus more on the cause area than on the fund mana
... (read more)
Draft report on existential risk from power-seeking AI

Thanks for the thoughtful reply.

I do think I was overestimating how robustly you were treating your numbers and premises; it seems like you're holding them all much more lightly than I think I'd been envisioning.

FWIW I am more interested in engaging with some of what you wrote in your other comment than engaging on the specific probability you assign, for some of the reasons I wrote about here.

I think I have more I could say on the methodology, but alas, I'm pretty blocked up with other work atm. It'd be neat to spend more time reading the report and leave ... (read more)

2anonymous_ea1y
This links to A Sketch of Good Communication, not whichever comment you were intending to link :)
Draft report on existential risk from power-seeking AI

I tried to look for writing like this. I think that people do multiple hypothesis testing, like Harry in chapter 86 of HPMOR. There Harry is trying to weigh some different hypotheses against each other to explain his observations. There isn't really a single train of conditional steps that constitutes the whole hypothesis.

My shoulder-Scott-Alexander is telling me (somewhat similar to my shoulder-Richard-Feynman) that there's a lot of ways to trick myself with numbers, and that I should only do very simple things with them. I looked through some of his post... (read more)

Hi Ben, 

A few thoughts on this: 

  • It seems possible that attempting to produce “great insight” or “simple arguments of world-shattering importance” warrants a methodology different from the one I’ve used here. But my aim here is humbler: to formulate and evaluate an existing argument that I and various others take seriously, and that lots of resources are being devoted to; and to come to initial, informal, but still quantitative best-guesses about the premises and conclusion, which people can (hopefully) agree/disagree with at a somewhat fine-grain
... (read more)

Maybe not 'insight', but re. 'accuracy', this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.)

I guess the story why this works in geopolitical forecasting is folks tend to overestimate the chance 'something happens' and tend to be underdamped in increasing the likelihood of som... (read more)
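(To make the decomposition point concrete, here is a toy sketch with invented numbers, not taken from the report or this thread: a conclusion reached through a chain of conditional steps gets the product of the per-step probabilities, so a per-step bias towards 'something happens' compounds quickly.)

```python
from math import prod

# Toy numbers only: the probability of a conclusion reached via a chain
# of conditional steps is the product of the per-step probabilities.
six_steps = [0.8] * 6
four_steps = [0.8] * 4

print(round(prod(six_steps), 3))   # 0.262
print(round(prod(four_steps), 3))  # 0.41

# Overestimating each step by just 0.05 compounds across the chain:
print(round(prod(p - 0.05 for p in six_steps), 3))  # 0.178
```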
