I sometimes argue against certain EA payment norms because they feel extractive,
or cause recipients to incur untracked costs. E.g. "it's not fair to have a
system that requires unpaid work, or going months between work in ways that
can't be planned around and aren't paid for". This was the basis for some of
what I said here. But I'm not sure this is always bad, or that the alternatives
are better. Some considerations:
1. If it's okay for people to donate money, I can't think of a principled reason
it's not okay for them to donate time -> unpaid work is not a priori bad.
2. If it would be okay for people to solve the problem of gaps in grants by
funding bridge grants, it can't be categorically disallowed to self-fund the
time between grants.
3. If partial self-funding is required to do independent, grant-funded work,
then only people who can afford that will do such work. To the extent the
people who can't would have done irreplaceably good work, that's a loss, and
it should be measured. And to the extent some people would personally enjoy
doing such work but can't, that's sad for them. But the former is an
empirical question weighed against the benefits of underpaying, and the
latter is not relevant to impact.
1. I think the costs of blocking people who can't self-fund from this kind
of work are probably high, especially the part where it categorically
prevents segments of society with useful information from participating.
But this is much more relevant for e.g. global development than AI risk.
4. A norm against any unpaid work would mean no one could do anything unless
they got funder approval ahead of time, which would be terrible.
5. A related problem is when people need to do free work (broadly defined, e.g.
blogging counts) to get a foot in the door for paid work. This has a lot of
the same downsides as requiring self-funding, but, man, seems pretty stupid
to insist on ignoring.
The Happier Lives Institute have helped many people (including me) open their
eyes to subjective wellbeing (SWB), and perhaps even updated us towards its
potential value. The recent heavy discussion (60+ comments) on their fundraising
thread disheartened me. Although I agree with much of the criticism against them,
the hammering they took felt at best rough and perhaps even unfair. I'm not sure
exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted
  almost everything on the forum, and engaged deeply with criticism, which is
  amazing - more than perhaps any other org I have seen. This may (uncertain)
  have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they
  often don't reply at all, or get surprisingly little criticism for what I and
  many EAs might consider poor epistemics and defensiveness in their posts (out
  of charity I'm not going to link to the handful I can think of). Why does HLI
  get such a hard time while others get a pass? Especially when HLI's funding
  is less than that of many orgs that have not been scrutinised as much.
* (Low certainty) The degree of scrutiny and analysis of development orgs like
  HLI seems to exceed that of AI orgs, funding orgs and community building orgs.
  This scrutiny has been intense - more than one amazing statistician has picked
  apart their analysis. This expert-level scrutiny is fantastic, I just wish it
  could be applied to other orgs as well. Very few EA orgs (at least among those
  that have posted on the forum) produce full papers with publishable-level deep
  statistical analysis, as HLI have at least attempted to do. Does there need to
  be a "scrutiny rebalancing" of sorts? I would rather that other orgs got more
  scrutiny than that development orgs got less.
Other orgs might see the hammering in threads like the HLI funding thread and
compare it with ot
Surprised Animal Charity Evaluators Recommended Charity Fund gives equal amounts
to around a dozen charities:
https://animalcharityevaluators.org/donation-advice/recommended-charity-fund/
Obviously uncertainty is involved, but a core tenet of EA and of charity
evaluators is that certain charities are more effective than others, so
GiveWell's Top Charities Fund giving different amounts to only a few charities
per year makes more sense to me:
https://www.givewell.org/top-charities-fund
Frugality did not reduce my productivity but made my social life harder
In the early years of my EA journey, I tried to live on a small budget so I
could donate more. I learned that I could be productive on a small budget.
There were times I worked on an old laptop. Some actions might have taken a few
seconds longer, and I did not have much screen space. It was fine. What matters
most for productivity is doing the right things, not doing things slightly
faster.
I exercise to keep my mind fresh. I don't go to the gym or take sports classes.
I just do a bodyweight workout at home. Completely free. I also cook my own
meals. I can only spend so many hours working behind a computer screen. Ordering
food delivery or buying pre-prepared food does not save me time.
The biggest problem with frugality is socializing. To meet people, I need to
travel and participate in the activities that they do. Sometimes it may be
better to not be too frugal.
For example, my team works in the office one day per week. We have lunch in a
restaurant - which is quite expensive where I live. When I joined the team, I
brought my own food and ate it alone in the office. I felt unhappy about this.
After a while, I decided to join and spend a lot of money on the "unnecessary
luxury" of not socially excluding myself.
Welcome to the effective giving subforum!
This is a dedicated space for discussions about effective giving.
Get involved:
* ❤️ Donate via Giving What We Can
* Join the discussion
* Share where you're donating this giving season — and why!
* Start a new thread in this subforum[1]
* Ask questions about donation decisions
* Discuss strategic considerations about giving
* Explore other opportunities for donating or raising money
* Explore updated giving recommendations from GiveWell, Animal Charity
Evaluators, Giving What We Can, and Happier Lives Institute
* Book an effective giving talk at your workplace
* Give the Forum team feedback about this beta subforum
* Reach us at forum@centreforeffectivealtruism.org or comment on this post.
1. Threads can be casual! This will only appear in this subforum or for people
   who've joined the subforum.
BOAS is requesting funding to scale up our vintage fashion platform that donates
all (non-reinvested) profits to effective charities. If you are interested, we
can share the pitch deck with you; email me at vin at boas . co
Disclaimer: I couldn't find the policy on making the community aware of funding
requests. I sometimes see these requests, but I erred on the side of caution by
doing a quick take instead of a post. Please let me know if funding requests on
the forum are not appropriate and we'll take it down.
Here are the key details:
* BOAS is a reverse auction vintage fashion platform that donates all profits
to effective charities.
* BOAS aims to make buying vintage fashion 10 times faster than Vinted/eBay and
  4 times cheaper than Zalando
* User-friendly iOS and Android apps
* BOAS donates all (non-reinvested) profits to effective organizations with a
mission to save one million kids’ lives (this is the simple message we
communicate to customers)
* BOAS and this Profit for Good (PFG) model are endorsed by Rutger Bregman,
‘the Dutch wunderkind of new ideas’, TED speaker and double NYT bestseller
* Research indicates people prefer buying from and working for PFGs that donate
  profits to charities, so they grow faster and are more profitable than
  for-profit competitors, possibly making them more effective at multiplying
  money than traditional philanthropy. There is also research that finds no
  statistically significant increase in profitability.
* BOAS is pioneering reverse auctions in online retail, with a surge in revenue
  per visit of 500% when the feature launched (there is more research on RAs
  driving higher revenues and prices)
* Unlike competitors, BOAS ensures quality and product popularity by purchasing
vintage fashion in bulk. This means you can confidently know what you're
getting and make purchases within minutes
* BOAS is currently seeking €596K in grants, equity, convertible notes, or
  loans.
Published: Who gives? Characteristics of those who have taken the Giving What We
Can pledge
The paper I worked on with Matti Wilks for my thesis was published! Lizka
successfully did her job and convinced me to share it on the forum.
As a heads up: I'm sharing this here, but I probably won't engage with it (or
comments about it) too seriously; this was a project I worked on a few years
ago and it's not super relevant to me anymore.
Have there ever been any efforts to try to set up EA-oriented funding
organisations that focus on investing donations in such a way as to fund
high-utility projects in very suitable states of the world? They could be pure
investment vehicles that have high expected utility, but that lose all their
money by some point in time in the modal case.
The idea would be something like this:
To first order, maximising utility with a given amount of dollars means deciding
how much to spend on which causes and how to distribute that spending over time.
However, with some effort, one could find investments that pay off conditionally
on states of the world where specific interventions might have very high
utility. Some super naive examples would be a long-dated option structure that
pays off if the price for wheat explodes, or a CDS that pays off if JP Morgan
collapses. This would then allow organisations to intervene through targeted
measures, for example, food donations.
This is similar to the concept of a “tail hedge” - an investment that pays off
massively when other investments do poorly, that is, when the marginal utility
of owning an additional dollar is very high.
Usually, one would expect such investments to carry negatively, that is, to be
costly over time, possibly even with negative unconditional expected returns.
However, if an EA utility function is sufficiently different from that of a
typical market participant, this need not be the case, even in dollar terms (?).
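As a minimal numeric sketch of the carry point (all figures below are hypothetical
and chosen only for illustration): a position with a negative unconditional
expected dollar return can still beat donating the premium today, provided a
dollar is worth enough more in the payoff state.

```python
# Minimal sketch with hypothetical numbers: a conditional hedge can have a negative
# expected dollar return ("negative carry") yet higher expected utility for a donor
# whose dollars are worth far more in the crisis state (e.g. wheat prices exploding).

p_crisis = 0.02   # assumed probability of the crisis state over the holding period
cost = 1.0        # premium paid today (arbitrary units)
payoff = 30.0     # payout received if the crisis state occurs

# Unconditional expected dollar return: 0.02 * 30 - 1 = -0.4, i.e. negative carry.
expected_dollar_return = p_crisis * payoff - cost

# Assume a dollar deployed during the crisis (e.g. emergency food aid) does 100x
# as much good as a dollar donated in the normal state.
mu_normal, mu_crisis = 1.0, 100.0

# Expected utility of buying the hedge vs. simply donating the premium today.
eu_hedge = p_crisis * payoff * mu_crisis + (1 - p_crisis) * 0.0 - cost * mu_normal
eu_donate_now = cost * mu_normal

print(expected_dollar_return)   # -0.4
print(eu_hedge, eu_donate_now)  # 59.0 vs 1.0
```

With the illustrative 100x multiplier on crisis-state dollars, the hedge comes
out far ahead in expected utility despite losing money in expectation; whether
realistic multipliers and hedge prices actually support this is exactly the kind
of question that would need the more rigorous treatment mentioned below.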
Clearly, the arguments here would have to be made a lot more rigorous and
quantitative to see whether this might be attractive at all. I’d be interested
in any references etc.