All of Giles's Comments + Replies

EAGxVirtual Unconference (Saturday, June 20th 2020)

A Mindful Approach to Tackling those Yucky Tasks You’ve Been Putting Off

For many of us, procrastination is a problem. This can take many forms, but we’ll focus on relatively simple tasks that you’ve been putting off long-term.

Epistemic status: speculative, n=1 stuff.

Yucky Tasks

Yucky tasks may be thought of in several ways:

  • things you've been putting off
  • tasks that generate complex, negative emotions
  • that vague thing you know is there but can't quite get a grip on, and you're all like uhggggg

The connection to EA?

EA i…

Introducing the EA Funds

That's great, but the less actively I'm involved in the process the more likely I am to just ignore it. That might just be me though.

Introducing the EA Funds

This is great!! Pretty sure I'd be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.

I guess my only concern is: how to keep donors engaged with what's going on? It's not that I wouldn't trust the fund managers, it's more that I wouldn't trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.

We plan to have some reporting requirements for fund managers, although we don't yet know how much. What would you be interested in seeing?
I'm assuming people who donated to the fund would get periodic notifications about where the money's being used.

Is the community short of software engineers after all?

This by the way is what certificates of impact are for, although it's not a practical suggestion right now because it's only been implemented at the toy level.

The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even if the other org better matches their values, and agree to swap impact with each other. (As well as the much more complex versions of that setup that would occur in real life.)

Why we need more meta

Are you counting donations from people who aren't EAs, or who are only loosely so?

Yes. Looking at the survey data was an attempt to deal with this.

Why we need more meta

I was also hesitant about CFAR, although for a slightly different reason - around half its revenue is from workshops, which looks more like people purchasing a service than altruism as such.

Good point regarding GPP: policy work is another of those grey areas between meta and non-meta.

Not sure about 80K: their list of career changes mostly looks like earning to give and working at EA orgs - I don't see big additional classes of "direct work" being influenced. It's possible people reading the website are changing their career plans in entirely diff…

On the survey, those who prefer meta, I guess. Some of the money going into the meta orgs comes from non-EAs too. With e.g. 3 I meant that GiveWell is also influencing the nonprofit sector beyond just the recommended charities. Arguably you could include that as part of the direct charity estimate. With 80k, there's also a bunch of career changes (~20-30% of the total) that are towards building career capital (which has a similar problem to accumulating pledged donations).
It's meta in Hurford's sense, which is different from Todd's: it's indirect, with a chain of causality to impact that has extra points of failure. That's what many of Hurford's arguments spoke to. GPP and 80K also count as meta by this definition.

Are you counting donations from people who aren't EAs, or who are only loosely so? They can correct me if I'm wrong, but Hurford didn't seem concerned about those.

I don't know about the Oxford line, but the general feeling where I am, and among international EAs I've talked to, is that the survey tells us more about the people who are more engaged in the international community, identify more as EAs, participate online, are more dedicated, etc. Most other sources confirm that these people _do_ particularly favour meta, that many came from the large old LessWrong community, that they're heavily consequentialist, etc. Naturally, finding out about and establishing contact with as many other people as possible would also be valuable, including less engaged random GWWC and even GW donors. I don't know about GWWC Central, but my local chapter plans to help get the next survey to as many people as possible.
Why we need more meta

I can't emphasize the exponential growth thing enough. A look at the next page on this forum shows CEA wanting to hire another 13 people. Meanwhile GiveWell were boasting of having grown to 18 full time staff back in March; now they have 30.

But the direct charities are growing like crazy too! It all makes it very easy to be off by a factor of 2 (and maybe I am in my above reasoning) simply by using out of date figures. Anyone business-minded know about the sort of reasoning and heuristics to use under growth conditions?

Small aside: CEA are just advertising for 13 positions, but they're very unlikely to hire for all of those positions. I expect CEA to hire more like 6-7 people, spread over 6-12 months (since hiring takes a long time, and we run rounds about once a year). Note that's spread over 4 mainly independent projects.
This. I haven't talked to him personally, but that's the sort of thing that has some of us who made his article one of the most upvoted ever worried about a meta trap, where organisations keep adding jobs for EAs they know, without setting out in advance credible limits for when this should stop.
Just look at the split in 2014 and then again in 2015 and see if the ratio is changing fast.
Why we need more meta

I'm helping prepare a spreadsheet listing organizations and their budgets, which at some point will be turned into a pretty visualization...

Anyway, according to this sheet, meta budgets total around $4.2m (that's $2.1m GiveWell, $0.8m CEA and $0.8m CFAR, plus a bunch of little ones). That's more than "a couple", but direct charities' budgets total $52m so we're still shy of 10%.

(Main caveats to this data: It's not all for exactly the same year, so anything which is taking off exponentially will skew it. Also I haven't checked the data particularl…

Nice data. I'm a bit unsure about whether CFAR should be classed as "EA meta". You could see it as a whole other cause which is improving decision making. Only part of what it's doing is trying to improve the EA movement.

Also note that we're undercounting the amount of direct work being influenced if we just look at this year's direct charity budget. E.g. GPP (part of CEA) mainly advises policy makers rather than helping the direct charities. E.g. 2: 80k helps people choose careers which normally aren't at direct charities. E.g. 3: some of GiveWell's research will likely be used by people outside of those working at direct charities. E.g. 4: GWWC is also raising money that will be donated in the future.

Did you also include all of Open Phil's grants in your direct charity estimate? You should do that, or only include the proportion of GiveWell's funding that's spent on "traditional GiveWell".

I think the EA survey likely has a strong selection bias in favor of those who prefer meta. There are lots of random GiveWell and GWWC donors who'll never fill that out.
Direct Funding Between EAs - Moral Economics

Multiple donors could form coalitions to fund a single donee

Or to fund multiple donees.

EA Facebook New Member Report

Let me know if you're expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.

Certificates of impact

I'm guessing that for these to work, the ownership of certificates should end up reflecting who actually had what impact. I can think of two cases where that might not be so.

Regret swapping:

  • Person A donates $100 to charity X. Person B donates $100 to charity Y.
  • Five years later they both change their minds about which charity was better. They swap certificates.

So person A ends up owning a certificate for Y, and person B ends up owning a certificate for X, even though neither of them can really be said to have "caused" that particular impact…
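The regret-swapping worry can be made concrete with a minimal registry sketch. This is purely illustrative (the `Certificate` class and the names are invented here, not part of any real certificate-of-impact implementation): each certificate records both the historical funder and the current owner, and a swap makes the two diverge.

```python
# Minimal, hypothetical certificate-registry sketch: each certificate
# records who funded the impact (fixed) and who currently owns it (transferable).

class Certificate:
    def __init__(self, charity, funder):
        self.charity = charity
        self.funder = funder   # fixed historical fact: who paid for the impact
        self.owner = funder    # transferable claim, initially the funder

def swap(cert_a, cert_b):
    # Trade ownership of the two certificates.
    cert_a.owner, cert_b.owner = cert_b.owner, cert_a.owner

x = Certificate("Charity X", "Person A")
y = Certificate("Charity Y", "Person B")
swap(x, y)  # five years later, both regret their choice and trade

print(x.owner)              # Person B now holds the certificate for X
print(x.owner == x.funder)  # False: ownership no longer tracks causation
```

The sketch just shows the bookkeeping problem: after a regret swap, owner and funder come apart, so ownership alone can't be read as a record of who caused what.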

Moving Moral Economics Forward

I've just found out that Paul Christiano and Katja Grace are already buying certificates of impact.

Room for more funding: Why doesn’t the Gates foundation just close the funding gap of AMF and SCI?

Just one comment: the essay asks "Why doesn’t the Gates foundation just close the funding gap of AMF and SCI?" but doesn't seem to offer an answer. The closest seems to be 3b/c which suggests it's a coordination problem or donor's dilemma: everyone is expecting everyone else to fund these organizations.

If that's the case, the relevant question would seem to be: what does the Gates foundation want? If the EA community finds something that GF wants that we can potentially offer (such as new high-risk high-return charities doing something totally innovative), then we can potentially do a moral trade with them.

Moving Moral Economics Forward

Oh one other thing - I think the trickiest part of this system will be verifying whether someone has actually donated to a charity at the time they said they did. Every charity does it a different way.

Moving Moral Economics Forward

I'm interested in moving moral economics forward in a different way: by creating some kind of online "moral market" and seeing what happens.

There are two possible systems I could implement:

I'll describe the points-based system here, as it's the one I've thought through a bit more. I presume it theoretically diverges from a certificate of impact system, but I haven't thought through exactly how.

Users have points. The total number of poi…

Iason Gabriel writes: What's Wrong with Effective Altruism

I'm a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): "they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare."

It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far future and xrisk stuff), welfare is a better buy. I can't necessarily assume many people in EA agree with me though.

There's also some confusion…

Interesting. My view is that EAs do tend to view these things as valuable only insofar as they serve wellbeing, at least in their explicit theorising and decision-making. That's my personal view anyway. I'd add the caveat that I think most people actually judge implicitly according to a more deontological folk morality when making moral judgements (i.e. we actually do think that fairness, and our beliefs being right, are important). I think this varies a bit by cause area though. For example (and this is not necessarily a criticism) the animal rights (clue's in the name) section seems much more deontological.
Iason Gabriel writes: What's Wrong with Effective Altruism

There's another response that EAs could have to the priority/ultrapoverty strand, which is to bend their utility functions so that ultrapoverty is rated as even worse, and improvements at the ultrapoverty end are calculated as more important. Of course, however concave the utility function is, you can still construct a scenario where the people at the ultrapoverty end get ignored.
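As a sketch of that last point (my own formalisation, not from Gabriel's essay): give wellbeing a concave, prioritarian weighting and note that any finite degree of concavity can be outweighed by sheer numbers.

```latex
% Total weighted value, with a concave priority weighting w:
W = \sum_i w(u_i), \qquad w(u) = \frac{u^{1-\eta}}{1-\eta}, \quad \eta > 0,\ \eta \neq 1.

% A fixed gain \delta to one person at the ultrapoverty level u_\ell is
% outweighed by the same gain to N better-off people at level u_h whenever
N \, \bigl[ w(u_h + \delta) - w(u_h) \bigr] \;>\; w(u_\ell + \delta) - w(u_\ell),

% which holds for some finite N however large the concavity \eta is,
% since w is strictly increasing and the left-hand bracket is positive.
```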

Iason Gabriel writes: What's Wrong with Effective Altruism

I think that the priority/ultrapoverty strand of this argument is one place where you can't ignore nonhuman animals. My intuition says that they're among the worst off, and relatively cheap to help.

Iason Gabriel writes: What's Wrong with Effective Altruism

My first thought on reading the "Two villages" thought experiment was that the village that was easier to help would be poorer, because of the decreasing marginal value of money. If this was so, you'd want to give all your money to the poorer one if your goal was to reduce "the influence of morally arbitrary factors on people's lives".

On the other hand that gets reversed if the poorer village is the one that's harder to help. In that case fairness arguments would still seem to favour putting all your money in one village, just the opposite one to what consequentialists would favour. (So that this problem can't be completely separated from the Ultrapoverty one).

Iason Gabriel writes: What's Wrong with Effective Altruism

One thing I find interesting about all the thought experiments is that they assume a one donor, many recipient model. That is, the morality of each situation is analyzed as if a single agent is making the decision.

Reality is many donors, many recipients and I think this affects the analysis of the examples. Firstly because donors influence each others' behaviour, and secondly because moral goods may aggregate on the donor end even if they don't aggregate on the recipient end. I'll try and explain with some examples:

Two villages (a): each village currently…

Preventing human extinction

I have a minor philosophical nitpick.

No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000…”

There are (checks Wikipedia) 400ish nuclear reactors, which means if everyone followed this reasoning, the risk of a nuclear meltdown would be pretty high.
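Worked out (a quick back-of-the-envelope sketch; the 1-in-1000 figure is from the quote, and 400 is my own rough reactor count):

```python
# If each of ~400 reactors independently ran a 1-in-1000 meltdown risk,
# the chance that at least one melts down is far from negligible.
p_single = 1 / 1000
n_reactors = 400
p_any = 1 - (1 - p_single) ** n_reactors
print(round(p_any, 2))  # about 0.33 -- roughly a 1-in-3 chance overall
```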

Existential risks with low probabilities don't add up in the same way. It's my belief that the magnitude of a risk equals the badness times the probability (which for xrisk comes out to very, very bad) but not everyone might agree with me, and I'm not sure the nuclear reactor example would convince them.

Should your choice of charity change based on how much money you have?

some of the Gates Foundation work is higher impact than GiveWell top charities

Hasn't GiveWell also said that large orgs tend to do so many different things that some end up being effective and others not? Does this criticism apply to the Gates Foundation?

I think GiveWell was initially much more critical of the Gates Foundation than they are today. Perhaps this is because during the OPP they found that what Gates does is (1) very difficult, and that (2) Gates (or other foundations/governments) had already funded many of the most promising opportunities. It's probably best to evaluate the GF like a venture capitalist rather than on a project by project basis.
I suspect that such a criticism does apply. I remember a friend criticizing the way the Bill and Melinda Gates Foundation funded charter schools and scholarships as ineffective. You can see some of the grants they have awarded here.
Stuck? Talk to an EA Buddy!

I've got 16 people on the list and nominally made 5 pairings. In a while I'll prod people to see if they're actually talking to each other.

January Open Thread

I think you're imagining a scenario where every organization either:

  • is not seriously addressing existential risk, or
  • has run out of room for more funding

One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn't feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful s…

Comments on Ernest Davis's comments on Bostrom's Superintelligence

OK, finished draft done. Sorry for posting it by accident earlier!

Comments on Ernest Davis's comments on Bostrom's Superintelligence

You're absolutely right. I've changed that bit in the final draft.

January Open Thread

What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism

I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don't. Currently the thing you have to know is "there's this thing called EA and earning to give". As that meme spreads, you'd expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.

The number of earning-to-givers * average good done by earning to g…

I don't consider this rambling. I didn't grok it the first time I read your comment, but it seems plenty insightful now. Thanks for helping out!

It seems to me the bottleneck here isn't the output of good to be achieved in the future. However, the bottleneck could be the input of donation targets for the present. For example, every organization seeking to reduce existential risk that we can think of could hit points at which further donation isn't a good giving opportunity. This scenario isn't too implausible. The Future of Life Institute could grant the $10 million donation it received from Elon Musk to MIRI, FHI, and all the other low-hanging fruit for existential risk reduction. If those organizations hit more such windfalls, or retain their current body of donors, they might not be able to allocate further funds effectively. I.e., they may hit room-for-more-funding issues for multiple years. Suddenly, effective altruism would need to seek brand new opportunities for reducing existential risk, which could be difficult.
January Open Thread

Hi Anonymous,

Really sorry to hear that you feel like that. I'm glad you find writing about it therapeutic. One thing you can try - it's worked for me - is to write down a "toolbox" of things (such as writing) that allow you to feel better about yourself when you're feeling bad.

This could even include taking 1-2 hours to criticize yourself - if that's what works for you. But having other options might help. Writing them down somewhere visible can help too.

The reason I'm bringing this up is that - for me at least - the mindframe you describe isn't…

January Open Thread

I was reading The Phatic and the Anti-Inductive on Slate Star Codex.

Why's this relevant?

Birthday and Christmas charity fundraisers of course!

There is a sense in which the concept of a birthday fundraiser is anti-inductive - if they worked, and everyone realised they worked, then a lot more people would be doing them and they wouldn't work so well any more.

But actually running a fundraiser feels more like phatic communication. You're really communicating very little information about the charity you want people to give money to, but people seem to apprecia…

I don't think that's quite true, as I don't think most people care enough to do them, whereas EAs of course do. Also, as I'm sure you know, it's not the case that everyone realises they work - people generally don't realise this unless a charity is shouting from the rooftops about it, like we (and you!) have done at Charity Science. When charities do, a bunch of people sign up - Charity Water is an example, and they've got enormous numbers of people to do birthday and Christmas fundraisers. That's right, people seem to generally be happy to follow your choice of charity even without reading detailed cost-effectiveness studies. Indeed that's what happens in most fundraising.
Comments on Ernest Davis's comments on Bostrom's Superintelligence

Yes - I clicked on "save and continue" and what I got was "submit". I'd better get back to work on it, I guess!

Denis Drescher:
Yes, happened to me too. I thought it would save a version as private draft.
January Open Thread

I'll bite. It may take a new top-level post though.

I wanted to make a top-level post for it a few days ago but I need 5 more upvotes before I can create those. So I took the chance to share it here when I saw this "Open Thread".
January Open Thread

I'd suggest Global Catastrophic Risks as a good primer. (The essays aren't written by Bostrom; he co-edited the book)

The Outside Critics of Effective Altruism

I was googling "effective altruism arrogant" and it turned up a few links which I'm posting here so I don't lose them:

The Outside Critics of Effective Altruism

Thanks - I knew they were involved in the EA Summit but I didn't know they were the sole organizers. I also knew they weren't soliciting donations. I partially retract my earlier statement about them! (Also I hope I didn't cause anyone any offense - I've met them and they're super super nice and hardworking too)

The Outside Critics of Effective Altruism

Thanks - most of those names ring a bell but the Selfish Gene is the only one I've read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I'll keep them in mind :-)

The Outside Critics of Effective Altruism

I don't know if this is relevant to the criticism theme, but I found it was necessary for me to take some of Hanson's ideas seriously before becoming involved in EA, but his insistence on calling everything hypocrisy was a turn-off for me. Are there any resources on how we evolved to be such-and-such a way (interested in self+immediate family, signalling etc.) but that that's actually a good thing because once we know that we can do better?

Off the top of my head:

  • The Selfish Gene by Richard Dawkins
  • The Origins of Virtue by Matt Ridley
  • Moral Tribes by Joshua Greene
  • Darwin's Dangerous Idea by Dennett
  • Freedom Evolves by Dennett
  • The Expanding Circle by Peter Singer

They might mean that our evolved morality is "good" in a different sense than you're looking for. I haven't read them yet, but The Ant and the Peacock, Moral Minds, Evolution of the Social Contract, Nonzero, Unto Others, and The Moral Animal are probably good picks on the subject.
The Outside Critics of Effective Altruism

However, I haven't seen a smart outside person spend a considerable amount of time to evaluating and criticising effective altruism.

Would they do it if we paid them?

TLYCS Pamphleting Pilot Program

Definitely. Some of the team at least are EA insiders and lurking on this very forum, and they'll already know about TLYCS for sure.

Peter Wildeford:
We lurk amongggggg youuuuu.
The Outside Critics of Effective Altruism

Another criticism: the movement isn't as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund - started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive).

When compiling this table of orgs' budgets, I found that even simple financial information was difficult to obtain from organizations' websites. I realise I can just ask them - and I will - but I'm thinking about the underlying attitude. (As always, I may be being unfair).

Also, what Leverage Research are…

Having met Geoff Anders, the executive director of Leverage Research, and its other employees multiple times, and taking it upon myself specifically to ask pointed questions attempting to clarify their work, I can informally relay the following information[1]:

  • Leverage Research has and continues to successfully raise funds for its own financial needs without doing broad-based outreach to the effective altruism community at large. Leverage Research seems confident in its funding needs for the future, to the point at which they won't be sourcing funds from the effective altruism community at large anytime soon.
  • Given that Leverage Research considers itself an early-stage non-profit research organization, whose research goals pivot rapidly as its researchers update their minds on what is the best work they can do in the face of new evidence and developments, Leverage Research perceives it as difficult to portray their research at any given time in granular detail. That is, Leverage Research is so dynamic an organization at this point that for it to maximally disclose the details of its current research would be an exhaustive and constant effort.
  • Because of the difficulty Leverage Research has in expressing its research agenda accurately and precisely at any point in time, and because they've sourced their funding needs from private donors who were provided information to their own satisfaction, Leverage Research doesn't perceive it as absolutely crucial that they make specific financial or organizational information easily accessible, e.g., on its website. Personally, I haven't ever privately contacted Leverage Research seeking a disclosure of, or access to, such information. I have no knowledge of how such interactions may or may not have gone between other third parties and Leverage Research.
  • The information available under the 'Our Team' heading on Leverage Research's w…
You can find all the data on 80k in our latest financial report and summary business plan. Some even more current updates are also here:!forum/80k_updates
Peter Wildeford:
If you ever find .impact or Charity Science insufficiently transparent, let me know. I think the reason you might have trouble finding income and expenses for those two orgs is that their official income and expenses are both essentially $0. You can see some of the financial flow into both orgs here.
The Outside Critics of Effective Altruism

"Giles has passed on some thoughts from a friend" is one of the things cited, so if a particular criticism isn't listed we can assume it's because Ryan doesn't know about it, not that it's inherently too low status or something. I definitely want to hear what your friends have to say!

TLYCS Pamphleting Pilot Program

Also, have you got in touch with the good people at Charity Science?

Just took a look at their website, very cool stuff. You suggesting I email them and get their feedback on our plan?
TLYCS Pamphleting Pilot Program

Great idea!

Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I'm thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out, though, and of course you need to remember which days you handed out pamphlets!)

Can you ask people, when they take the pledge, how they found out about TLYCS? (This will provide an under-estimate, but it can be used to sanity-check other estimates). (Also it's a bit a…

Statistically, the situation you don't want to get into is leafleting every Friday, so there are no Fridays left to provide your control condition.
Some really good points here. I never considered that handing out the leaflets only on Fridays might skew the results (I just happen to have every other Friday off, thanks California), I'll have to think that through. And it would definitely be a good idea to have a "Where did you hear about the pledge?" question on the pledge site, I'll check into that as well. I'm not sure what our initial run on the pamphlets will be, but I'm thinking in the 5K-15K range. I haven't done any analysis to figure out how many we'd need to hand out to get good statistics; not even really sure how to go about doing that, to be honest. And absolutely no idea what to expect in terms of a response rate. Any thoughts on how to estimate that?
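One rough way to estimate how many pamphlets you'd need (a sketch of my own, not TLYCS's method; the 1-in-1,000 response rate and 20% precision target are made-up placeholder numbers):

```python
import math

# Rough sample-size sketch: if pledges arrive roughly as a Poisson count,
# the relative standard error of that count is 1 / sqrt(expected pledges).
def pamphlets_needed(response_rate, rel_precision=0.2):
    expected_pledges = 1.0 / rel_precision ** 2   # pledges needed for that precision
    return math.ceil(expected_pledges / response_rate)

# If 1 in 1,000 recipients pledges and we want ~20% precision on the estimate:
print(pamphlets_needed(0.001))  # 25 expected pledges -> 25,000 pamphlets
```

The real unknown is the response rate itself, which is why the "where did you hear about the pledge?" question matters: even a crude measured rate lets you rerun this arithmetic with a real input instead of a guess.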
The Outside Critics of Effective Altruism

Here's the link to the Facebook group post in case people add criticisms there.

Glad you linked to Holden Karnofsky's MIRI post. Other possibly relevant posts from the GiveWell blog:

There are more on a similar philosophical slant (search for "explicit expected value") but the above seem the most criticismy.

The Outside Critics of Effective Altruism

Great topic!

I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA

I don't see the Schambra piece as particularly vitriolic.

I don't know where to find good outside critics, but I think there's still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven't got around to it yet).

Some self-centered internal criticism; I don't know how much this resonates with other people:

  • I posted some things on LW back in 2011 which…
Another fundraiser report

Is it working now? I wondered why I wasn't getting more karma ;-)

Is anybody else having problems with the image upload feature of the forum?

It's working now. If you post it to the EA Facebook, then more people may vote it up.
Problems and Solutions in Infinite Ethics

there's going to be some optimal level of abstraction

I'm curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:

Also, I know that I'd really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.
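The calculator I have in mind could start as something very crude (a toy sketch; the orgs, probabilities, and utilon figures below are all invented for illustration - picking real inputs is the hard philosophical part):

```python
# Toy expected-utilons-per-dollar comparison across hypothetical organizations.
def utilons_per_dollar(scenarios, cost):
    """scenarios: list of (probability, utilons) outcomes; cost in dollars."""
    return sum(p * u for p, u in scenarios) / cost

orgs = {
    "Reliable Org":  ([(0.9, 1_000), (0.1, 0)], 10_000),         # modest, likely payoff
    "Long-shot Org": ([(0.01, 10_000_000), (0.99, 0)], 50_000),  # huge, unlikely payoff
}
for name, (scenarios, cost) in orgs.items():
    print(f"{name}: {utilons_per_dollar(scenarios, cost)} utilons/$")
```

Even this toy version makes the philosophy visible: with these made-up numbers the long shot wins in expectation, which is exactly the "explicit expected value" reasoning the GiveWell posts above criticise.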

Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum, and for that there's also 80,000 Hours, the Cause Prioritisation Wiki and the Open Philanthropy Project. If you're going for the max number of years of utility per dollar, then you'll be looking at x-risk, as it's the cause that most credibly claims an impact that extends far in time (there aren't yet credible "trajectory changes"). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding, but applying for million-dollar grants, so it seems to be best-leveraged.
Figuring Good Out - Launch Thread

Note: I didn't actually give this a go.

Peter Wildeford:
Next time?
Another fundraiser report

As a separate point, I'm not sure what % of unrestricted donations to GiveWell go to its own operations as opposed to being granted to its recommended charities.
