Wanting to leave a breadcrumb here for other EAs interested in the leading edge of network thinking.
What this article describes is a beautiful introduction to something quite complex and powerful. We do this work naturally - it is indeed something that emerges in our system by itself. By increasing our understanding of how it works, we can increase our awareness of the networks we participate in, and intentionally shift our behaviours to change the properties of the network.
In my experience, the theory is great, yet studying the theory doesn't really give one the skill to actually do the thing. It's a bit like learning to ride a bike, or learning how to have really good conversations, or learning how to show up with both warmth and competence at the same time - it takes a good learning container to practice in and coaching/support from someone with greater awareness and skill.
Here's the breadcrumb: there is a network of folks with a ton of capacity in this area, already playing within EA and communities beyond. In several communities (though not directly within EA yet), we actually offer training on developing these skills. If you find this note and are interested in connecting, I'd love to hear from you! If we aren't connected directly, I'm sure you could use your network to find me :)
ThinkBetter was founded by five EAs in Toronto with the mission of creating a scalable rationality training program, and the goal of materially raising the global sanity waterline. Through rapid prototyping and a series of OODA loops, we ended up 'transcending and including' our initial curriculum and strategy, and are now working from a deeper understanding of the complexity of the challenge.
I'd be interested in starting a discussion to share:
Thanks so much for posting about your experience - I anticipate your tips will help me improve my strategy for incorporating EA concepts at the large corporation where I work. I'll chime in with my own experiences in case they are also helpful to others.
I work at a Canadian company with 50k employees. In early 2019, I reached out to our company's charitable giving team, expressing an interest in helping run events to increase charitable giving engagement within my part of the business. My offer was met with enthusiasm and support over two conference calls and several emails. I didn't explicitly mention EA, just that I was well connected and had a fun, systematic way of looking at charitable giving.
As we approached the giving campaign period in the fall, I reached out again with an exciting proposal to run a Giving Game, and asked if it could be included as an official charitable giving campaign event. This didn't end up working out (reasons are opaque to me, a few emails went by without response), but I'm hopeful for 2020. Instead, I invited folks from my network to attend, and we had a really good 10 person Giving Game.
This was the first one that I've run, and it seemed to land really well with the attendees. One key aspect was to show employees how they could donate to effective charities through RC Forward, directly from their paycheck. I hope to leverage their testimonials to support whatever proposal I come up with this year.
I think there is a lot of potential to incorporate EA concepts into a greater conversation at my company, and see two paths forward:
1. Grow a grassroots conversation by finding people who are enthusiastic enough about EA to form a core team. Currently it's just me, and this seems like a work-intensive, long-term goal.
2. Shortcut the process by building a stronger relationship with our charitable giving team. Changes made by this team could be very high leverage - anything from changing the company matching program to include high-impact charities (it currently doesn't), to tweaking default donation options and search functionality, to a paradigm shift in how folks view doing good.
I'm also playing with system-wide influence through a new role that I've taken on, which could transform the company culture. It's still early days, but I'm making meta-moves to create a community that increases empathy, connection, and systematic (rational) thinking.
Hoping my story is helpful for folks here. I'm interested in hearing more anecdotes from anyone else who's looking at EA from the context of a corporation.
Very cool summary, I've sent this to a few groups I'm a part of. I'm selfishly hoping it will lead to even better gatherings in my circles in the future!
Hi Michael and team,
Thanks for thinking about this topic - I agree that this is an important update for the community, and I think you gave it the treatment it required.
I think the puzzle of wealth/income vs SWB is an interesting one. The finding that relative wealth plays a role in SWB made sense - and leads me to hypothesize that countries with lower inequality would be happier.
I found a meta-analysis on the topic which couldn't find a strong correlation. "The association between income inequality and SWB is weak, complex and moderated by the country economic development." - https://www.ncbi.nlm.nih.gov/pubmed/29067589
It is interesting to think about the reduction in happiness due to a neighbour getting a cash transfer (the spillover effect mentioned in source 21).
Could this be due to jealousy decreasing one's happiness? Do we need secret cash transfers?
Does the reverse also hold - if your neighbours become poorer, does that make you happier?
Seems dangerous to generalize these findings, but this area of research would be quite applicable to the conversations on basic income.
It's a bit of a rabbit-hole, but wondering if you've seen any research that speaks to this?
Great to see this being looked at. Do you have any examples of this method in use? I'd be interested to see various animals and situations ranked using this method - as it could provide a baseline to quantify the benefits of various interventions.
I also attempted to create my own method of comparing animal suffering while I was calculating the value of going vegetarian. I'll provide a quick summary here, and would love to hear if anyone else has tried something similar.
The approach was to create an internally consistent model based upon my naive intuitions and what data I could find. I spent a while tuning the model so that various trade-offs would make sense and didn't lead to incoherent preferences. It is super rough, but was a first step in my self-examination of ethics.
The output of this rough model was to value various animal lives as a percentage of human lives - a more salient/comparable measure for me.
This model was built over about 5 hours and is still updating as I have more conversations around animal suffering. Would love to hear if anyone else tried a different strategy!
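For anyone curious what I mean by "internally consistent", a toy version looks something like this. To be clear, all the weights below are illustrative placeholders, not my actual numbers:

```python
# Toy moral-weight model: value of a year of each animal's life
# relative to a human year. Weights here are placeholders only.
moral_weight = {
    "human": 1.0,
    "cow": 0.1,
    "pig": 0.1,
    "chicken": 0.01,
}

def relative_value(animal, years):
    """Human-year-equivalents for `years` of the given animal's life."""
    return moral_weight[animal] * years

# Coherence check: the trade-offs implied by the weights shouldn't
# contradict each other (e.g. 10 chicken-years == 1 cow-year here).
assert relative_value("chicken", 10) == relative_value("cow", 1)
```

Tuning the model was mostly about running checks like that last line against my intuitions and adjusting the weights until nothing felt obviously wrong.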
Pretty cool idea - since I'm new to EA, I hope this will become a neat snapshot for me to look back on in a few years to see how far I've come.
Growing up, I believe I was raised to be a decent member of society - be kind to others, don't litter, help those in need. I never really thought explicitly about ethics, or engaged deeply with any causes. Sure, I'd raise money for cancer at "Relay for Life", but it wasn't because I thought the $100 would make a difference - more because it would be fun to have a camp-out with friends.
In my twenties, my goal was primarily to make money to retire early so I could travel, and maybe volunteer my time to help increase financial literacy, or apply my career experience in a not-for-profit. Fairly ephemeral goals though - I also considered becoming a full-time music producer.
When I was 28, I found Less Wrong from a link my friend posted on Facebook. Over the next two years I read every essay in the Rationality sequences, supplemented by a healthy amount of psychology/economics/math/self-help-style audiobooks. Reading that material was an enjoyable journey and led to a few minor epiphanies.
Seriously - the last book, "Becoming Stronger", and the sequence "Challenging the Difficult" really motivated me to think much bigger than I had before. Discovering 80,000 Hours around the same time gave me a great template to follow.
In May 2018 I attended my first EA meet-up. I recall thinking, "Wow! There are actually other rationalists out there". Up until that point, I'd never really met others who thought or spoke similarly, let alone a whole room full of them. I'm currently enjoying the learning curve, finding more questions than answers.
I'm currently working with a team of amazing EAs towards my top cause priority, and hope to launch this autumn.
What a coincidence - I just started reading the book "The Happiness Advantage" by Shawn Achor. While I'm only on the second chapter, the gist seems to be: Happiness is not a product of success, but rather a precursor. Happy people are more likely to succeed.
If this premise is true, then I think positive psychology has an edge over stoicism when looking forward, while stoicism might be the better technique for thinking about events in the past.
Evaluate neutrally the things you cannot change, but focus on the future states you prefer. I wish I had some actual evidence to back this up, but this way of thinking has worked for me so far.
Hey Jamie, thanks for linking me up with those additional resources - it's a refreshing perspective on the topic after combing through so many non-EA articles.
Continuing the conversation from your blog post on impact investing, I really like the perspective that the appeal of impact investing depends on how funding-constrained a cause or company is. If they have no problem raising money for free or at low cost, they have no need to promise a high return. Conversely, where it is hard to raise capital, companies should be more willing to offer higher returns to attract investment. For someone interested in such an area, it may still be better to offer a donation rather than take the money; but if you think there are better causes, you could invest for medium good plus high profit, then direct the earnings to a cause with even better social utility.
From your general thoughts:
1) I'm trying to extrapolate this concept into a general thought about donating vs investing. The hard question looks something like this:
If you compare your best cause/charity vs an index fund earning 7%, under what circumstances are you indifferent between directing your money to either?
I don't have an answer to that question for myself, but here is the sidestep:
Finding either a better charity or a better investment opportunity ought to change your preference.
If the market is efficient, and any social good tagged onto the investment would reduce the financial return, then you'd be wise not to invest in any impact investment whose social utility was worse than your best charity.
If you think markets are inefficient and it's possible to earn greater-than-average returns (by skill), or if you think the charity market is inefficient (less worthy causes get more funding than your most worthy cause), then you'd theoretically be able to find impact investments that would benefit you.
2) I do think it would be really cool to have an EA themed impact fund. Offer an investment vehicle that targets at-market returns while investing in particularly effective cause areas. I'd set it up where the fund invested in securities that matched the preferences of the investors. If half the investors really valued animal rights, 50% of the holdings would be in that area. I wonder if any of the 250 EAs in the FB group have any expertise in setting up something like this...
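To make the indifference question in (1) concrete, here's a minimal sketch comparing donate-now against invest-then-donate. All the numbers (a 7% market return, a rate at which the charity's cost-effectiveness declines) are illustrative assumptions, not claims about any real charity:

```python
def future_donation_value(amount, market_return, years, charity_discount):
    """Value, in today's do-good terms, of investing `amount` for
    `years` at `market_return` and then donating, when the charity's
    cost-effectiveness declines at `charity_discount` per year."""
    grown = amount * (1 + market_return) ** years
    return grown * (1 - charity_discount) ** years

donate_now = 1000  # baseline: $1000 given today

# If effectiveness declines faster than the market grows, give now.
later = future_donation_value(1000, 0.07, 10, charity_discount=0.10)
print(later < donate_now)  # True under these assumed numbers
```

The whole question then collapses into estimating `charity_discount` - i.e. whether your best giving opportunities are getting better or worse over time relative to market returns.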
Re: Specific suggestions:
I'm not super up on my knowledge of charity evaluations, but for climate change, it seems that the common currency is $/CO2 tonne. For World Tree, the estimate looks like this:
One acre costs $2,500 CAD and sequesters 103 tonnes/year (I wasn't able to find a third-party number for this). The trees' lifespan is 50 years, for a total of 5,150 tonnes per acre.
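Working through the implied cost per tonne from those figures (keeping in mind the 103 t/yr rate is the company's own claim):

```python
# Back-of-the-envelope cost per tonne for World Tree,
# using the figures quoted above (103 t/yr is unverified).
cost_per_acre_cad = 2500        # CAD, one acre
tonnes_per_year = 103           # claimed sequestration rate
lifespan_years = 50             # assumed tree lifespan

total_tonnes = tonnes_per_year * lifespan_years   # 5150 t/acre
cost_per_tonne = cost_per_acre_cad / total_tonnes

print(f"{total_tonnes} t sequestered per acre")
print(f"~${cost_per_tonne:.2f} CAD per tonne")    # ~$0.49
```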
I'm having a bit of a rationality crisis here though; Halstead recently posted the new research on climate change charities, which found that the Coalition for Rainforest Nations can reduce carbon for an estimated $0.12/tonne. Should I cancel my World Tree investment and additionally take out a loan to fund this initiative since it is so much more effective? It's really tough being half a rationalist... I want to do good now but also good in the future.
Next steps: figure out my own utility function, while searching for those sweet impact investments that the market has overlooked.
I think you hit the nail on the head - the current set of impact investment platforms and products for retail investors is fairly uninspiring. Can they stack up against the best EA charitable causes? Odds are against it.
I did allocate some of my retirement funds (currently in equity) towards buying a few acres with World Tree, which I think is a step in the right direction - more impact, and likely higher returns (ask me again in 10 years). I know mental accounting like this is technically a bias, but keeping my charitable donations separate from my retirement fund will lead me to a future of financial security, rather than donating everything to my favourite charity today.
From a utilitarian view - I'd love to hear more perspectives on the trade off between traditional investing vs charitable giving. Is this an optimization problem? Or is there a strong argument against one or the other?