Research @ MIT FutureTech/Monash University/Ready Research
3107 karma · Joined Dec 2015 · Working (6-15 years) · Sydney NSW, Australia



Visiting Scientist at MIT FutureTech helping with research, communication and operations. Doing some 'fractional movement building'. 

On leave from roles as i) a behaviour change researcher at BehaviourWorks Australia at Monash University and ii) EA course development at University of Queensland.

Founder and team member at Ready Research.

Former movement builder for i) UNSW, Sydney, Australia, ii) Sydney, Australia and iii) Ireland, EA groups.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.


A proposed approach for AI safety movement building


Topic contributions

Thanks for following up! The evidence you offer doesn't persuade me that most EAs are extremely rich guys, because it isn't arguing that. Did you instead mean to claim that most EAs who are rich guys are not donating any of their money, or are not donating more than the median rich person?

I also don't feel particularly persuaded by that claim based on the evidence shared. What are the specific points in the links that you find persuasive? I couldn't see anything particularly relevant from scanning them, i.e., nothing I could use to make an easy comparison between EA donors and median rich people.

I do see that the "mean share of total (imputed) income donated was 9.44% (imputing income where below 5k or missing) or 12.5% without imputation" for EAs, versus "around 2-3 percent of income" for US households, which seems opposed to your position. But I haven't checked carefully, and I am not the kind of person who makes these sorts of careful comparisons very well.

I don't have evidence to link to here, or time to search for it, but my current belief is that most of EA's funding comes from rich and extremely rich people (often men) donating their money.

Thanks for the input!

I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance, caring about social impact, seeking comparative advantage, thinking about long term positive impacts, and being concerned about existential risks including AI. He touched on all of those.

It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.

Useful to know he might not be genuine though.

Also, someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated, but this person thought that he had still done bad things.


And his response

Worth knowing about when judging his character.

Yeah I think that's part of it. I also thought it was very interesting how he justified what he was doing as being important for the long term future given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in expectation that society might be run in a similar way.

I will definitely say that he comes across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly has a lot of care for his family and really doesn't want to lose them. It could all be an act, of course.

Thanks for sharing your opinion. What's your evidence for this claim?

Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.

He said several times that what he's doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.

Having said all that, I don't want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes that emerged and the narrative he proposed for why what he's doing is important.

Thanks for this, I appreciate that someone read everything in depth and responded. 

I feel I should say something because I defended Nonlinear (NL) in previous comments, and it feels like I am ignoring the updated evidence/debate if I don't.

I also really don’t want to get sucked in, so I will try to keep it short:

How I feel
I previously said that I was very concerned after Ben's post, then persuaded by the response from NL that they are not net negative. 

Since then, I have learned that more negative views have been expressed towards NL than I realized. I have also been somewhat influenced by the credibility of some of the people who disagree with me.

Having said that, the current evidence is still not enough to persuade me that NL will be net-negative if involved in EA in the future. They may have made some misjudgments, but they have paid a very high price, and it seems relatively easy to avoid similar mistakes in the future.

(I also feel frustrated that I have put so much time into this and wish it was not a public trial of sorts)

I agree with this point:

"An effective altruist organization needs to be actively good."

BUT I am not sure if you can reasonably conclude that NL is not actively good from the current balance of evidence because it is fuzzy and incomplete. Little effort has been invested in figuring out their positive impacts and weighing them against their negative impacts.

Contrary to the post, I expect that Kat and Emerson probably do agree with this now (as general principles) after their failed experiment here:

- People should be paid for their work in money and not in ziplining trips.
- A person should not simultaneously be your friend, your housemate, your mentee, an employee of the charity you founded, and the person you pay to clean your house.
- Nonprofits should follow generally accepted accounting principles and file legally required paperwork.
- Employers should not ask their employees to commit felonies unrelated to the job they were hired for.[16]

I disagree somewhat with Ozy/Jeff on this:
"Much of Kat and Emerson’s effective altruist work focuses on mentoring young effective altruists and incubating organizations. I believe the balance of the evidence—much of it from their own defense of their actions—shows that they are hilariously ill-suited for that job. Grants to Nonlinear to do mentoring or incubation are likely to backfire. Grantmakers should also consider the risk of subsidizing Nonlinear’s mentoring and incubation programs when they give grants to Nonlinear for other programs (e.g. its podcast)."

"NL's misjudgements here show a really bad fit for their chosen role in connecting and mentoring people."

Again, we are only hearing about the bad stuff from a small minority of the people involved in a program. What about the various people who had good experiences? We might be considering less than a percent of the available feedback.

I think it is fairer to say that they made a mistake here that reflects badly on them and raises concern; that we should perhaps investigate more and remain wary, but not that we should hold strong conclusions with high confidence.

I think I agree with Jeff on this:
"We need to do the best we have with the information we have, put things together, gather more information as needed, and make a coherent picture."

What I therefore think should happen next:
NL should acknowledge any mistakes, say what they will do differently in the future, and be able to continue being part of the community while continuing to be scrutinized accordingly (however that proceeds).

I'd like people to be more cautious than previously in their engagements with them, but not to write them off completely or assume they are bad actors. (EA) organizations/people can and do make mistakes, and then improve and avoid those mistakes in the future.

Thanks for the detailed response, I appreciate it!
