Ben Pace

FTX EA Fellowships

Both forms say "This form can only be viewed by users in the owner's organisation."

The LessWrong Team is now Lightcone Infrastructure, come work with us!

We've discussed the consultancies a fair bit in the team; I'd love to have consultants at the Bay Area Lightcone Office who can do high-quality lit reviews, help make websites, or do whatever else there's demand for amongst the members.

I've not read the other post, sounds interesting.

Buck's Shortform

Something I imagined while reading this was being part of a strangely massive (~1000 person) extended family whose goal was to increase the net wealth of the family. I think it would be natural to join one of the family businesses, it would be natural to make your own startup, and also it would be somewhat natural to provide services for the family that aren't directly about making the money yourself. Helping make connections, find housing, etc.

EA Infrastructure Fund: May 2021 grant recommendations

Yeah, I think you understand me better now.

And btw, I think if there are particular grants that seem out of scope for a fund, it seems totally reasonable to ask them for their reasoning and update positively or negatively depending on whether the reasoning checks out. And it's also generally good to question the reasoning of a grant that doesn't make sense to you.

EA Infrastructure Fund: May 2021 grant recommendations

Though it still does seem to me like those two grants are probably better fits for LTFF.

But this line is what I am disagreeing with. I'm saying there's a binary of "within scope" or not, and otherwise it's up to the fund to fund what they think is best according to their judgment about EA Infrastructure or the Long-Term Future or whatever. Do you think that the EAIF should be able to tell the LTFF to fund a project because the EAIF thinks it's worthwhile for EA Infrastructure, instead of using the EAIF's money? Alternatively, if the EAIF thinks something is worth money for EA Infrastructure reasons, but the grant falls more naturally under the scope of "Long-Term Future", do you think they shouldn't fund the grantee even if the LTFF isn't going to either?

EA Infrastructure Fund: May 2021 grant recommendations

Yeah, that's a good point, that donors who don't look at the grants (or know the individuals on the team much) will be confused if the fund does things outside its purpose (e.g. donations to GiveDirectly, or a random science grant that just sounds cool), that sounds right. But I guess all of these grants seem to me fairly within the purview of EA Infrastructure?

The one-line description of the fund says:

The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.

I expect that for all of these grants the grantmakers think that they're orgs that either "use the principles of effective altruism" or help others do so.

I think I'd suggest instead that weeatquince name some specific grants and ask the fund managers the basic reason why those grants seem to them like they help build EA Infrastructure (e.g. ask Michelle why CLTR seems to help things according to her), if that's unclear to weeatquince.

EA Infrastructure Fund: May 2021 grant recommendations

The inclusion of things on this list that might be better suited to other funds (e.g. the LTFF) without an explanation of why they are being funded from the Infrastructure Fund makes me slightly less likely in future to give directly to the Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).

I think that different funders have different tastes, and if you endorse their tastes you should consider giving to them. I don't really see a case for splitting responsibilities like this. If Funder A thinks a grant is good, Funder B thinks it's bad, but it's nominally in Funder B's purview, this just doesn't seem like a strong argument against Funder A doing it if it seems like a good idea to them. What's the argument here? Why should Funder A not give a grant that seems good to them?

Draft report on existential risk from power-seeking AI

Thanks for the thoughtful reply.

I do think I was overestimating how robustly you're treating your numbers and premises; it seems like you're holding them all much more lightly than I'd been envisioning.

FWIW I am more interested in engaging with some of what you wrote in your other comment than engaging on the specific probability you assign, for some of the reasons I wrote about here.

I think I have more I could say on the methodology, but alas, I'm pretty blocked up with other work atm. It'd be neat to spend more time reading the report and leave more comments here sometime.