
Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

My general feeling is that there is not nearly enough curiosity in this community about the ins and outs of politics compared with the research and tech world. Reports just aren't very sexy. Specialization can be good, but there are topics EAs engage with that are probably just as specialized (hyper-specific scenarios of how an AI might kill us?) yet see much more engagement, and I don't think the difference is due to impact estimates.

I don't read much on AI safety, so I could be way off, but this seems pretty important. The US government could snap its fingers and double the amount of funding going into AI safety. That seems very salient for predicting the impact of EA AI safety orgs. Either way, this has made me more interested in reading through the report.

Open Thread: May 2021

First, I'm not condoning Bill's behavior. My intuition is that it is good to be trustworthy, not to sexually harass anyone, etc. That said, I didn't find any of the linked arguments particularly convincing.

"In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it." 

Sure, generally you shouldn't be a jerk, but being kind generally isn't mutually exclusive with achieving goals. Beyond that, what does 'overlay' mean? The statement is quite vague, and I'm actually sure there is some level of family event that he would skip. I'm sure 99%+ of his work with GiveWell is not time-sensitive in the way a family event is, so this statement somewhat amounts to a perversion of opportunity cost. In fact, Holden even says in the blog post that nothing is absolute. It's also potentially presentist: I would love for people to treat me with respect and kindness, but I would probably prefer that past people had just built infrastructure.

And again with Julia's statement, she's just saying, "Because we believe that trust, cooperation, and accurate information are essential to doing good." OK, that could be true, but isn't that the core of the question we are asking? When we talk about these types of situations, we are to some extent asking: is it possible that some person or group did more good by not being trustworthy, cooperative, etc.? Maybe this feels less relevant for EA research, but what about EAs running businesses? Microsoft got to the top with extremely scummy tactics, and now we think Bill Gates may be one of the greatest EAs ever. That isn't meant as a steel counterargument; I'm just pointing out that it's not that hard to spin a sentence that contradicts that point.

And to swing back to the original topic, it seems extremely unlikely that sexually harassing people is ever essential or even helpful to having more impact, so it seems fair to say "don't sexually harass people," but not on the grounds that "you should always default to standard generosity, only overlaying your biased agenda on top of the first level of generosity." However, what about having an affair? What if he was miserable and looking for love? If the affair made him 0.5% more productive, there is at least some sort of surface-level utilitarian argument in favor. The same goes for his money manager: if he thought Larson was going to make 0.5% higher returns than the next best person, most of which is going to high-impact charity work, you can once again spin a (potentially nuance-lacking) argument in favor. And what is the nuance here? The nuance is about how not being standardly good affects your reputation, affects culture, affects institutions, hurts people's feelings, etc.

*I also want to point out that Julia is making a utilitarian-backed claim, that trust, etc. are instrumentally important, while Holden is backing some sort of moral pluralism (though maybe also endorsing the kindness/standard-goodness-as-instrumental hypothesis).

So while I agree with Holden and Julia on an intuitive level, I think it would be nice if someone actually presented a steelmanned argument (maybe someone has) for what types of unethical behavior could be condoned, or where the edges of these decisions lie. The EA brand may not want to be associated with that essay, though.

It feels a bit to me like EAs are often naturally not 'standardly kind', or at least are not utility-maximizing, because they are so awkward or bad at socializing (in part due to the standard complaints about dark-web, rationalist types), which has bad effects on our connections and careers as well as on EA's general reputation. So Central EA is saying: let's push people toward a reputation for being nice, rather than thinking critically about the edge cases, because it will move our group closer to the correct value of not being weirdos and not getting cancelled (plus there are potentially more important topics to explore when you consider that being kind is a fairly safe bet).

AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy

One of the other comments here says there might be some evidence of microdosing not doing much. One of my friends swears that a 'hero's journey' is orders of magnitude more impactful or effective than simply doing a normal dose. 1. Is there research being done on heavy one-time usage? 2. If it turned out the most effective way to use psychedelics was to take a large amount at once, would this be politically feasible?

Our plans for hosting an EA wiki on the Forum
  1. I can't find the exact location right now, but someone on LW made a web visualization of EA academic papers, with lines between papers representing citations. Something like this could be done for the Forum in general using hyperlinks, but it might be cooler to do it with the wiki. The thought behind it, beyond being a cool visualization, is that many thoughts come in clusters, and being able to visualize the thoughtspace you're in might help you break through plateaus more easily and see how things connect within EA.
  2. More of an open question, but I think it's relevant to consider how atomic you make the pages, i.e. how much ideas are embedded/hyperlinked vs. written out in full.
Our plans for hosting an EA wiki on the Forum

What would be worthy of an up vs. a down vote? I was thinking along this line too, though my thought was to rank them by didactic potential according to an SNT framework: if you think a concept is really important (S) but not many people know about it (N), and people would be interested if they did find out (T), that page is the highest priority.

Is this what you meant by best books or were you just thinking rank them by how much you liked them?
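As a toy illustration of that ranking idea (my own sketch; the scores, page names, and the multiplicative combination rule are all invented assumptions, not anything from the wiki plans), the S/N/T heuristic could look something like:

```python
# Hypothetical sketch of the S/N/T ranking heuristic described above.
# All scores are made-up placeholders on a 0-1 scale, not real data.
def snt_priority(s_importance, n_unknownness, t_interest):
    # Multiplicative combination (an assumption): a page weak on any one
    # axis ranks low, so only important, little-known, and interesting
    # concepts rise to the top of the writing queue.
    return s_importance * n_unknownness * t_interest

pages = {
    "cluelessness": snt_priority(0.8, 0.9, 0.7),   # important but obscure
    "global health": snt_priority(0.9, 0.2, 0.8),  # important, widely known
}
ranking = sorted(pages, key=pages.get, reverse=True)
print(ranking)  # the obscure-but-important page outranks the well-known one
```

The multiplicative form is just one choice; a weighted sum would rank pages that are merely well known (low N) much higher.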

Deference for Bayesians

I thought your moderate-drinking point was very interesting and connected some dots in my head. It seems plausible that the vast majority of causal relations are mild. If so, the majority of causality could be 'occurring' through effects too small to call significant. That may seem pretty obvious, but it isn't something I ever heard discussed in my econometrics class or in my RA work.
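To make the "mild effects slip under the significance bar" point concrete, here is a quick simulation sketch (my own illustration, not from the post; the effect size, sample size, and 1.96 threshold are arbitrary assumptions): with a small but real effect and a modest sample, most studies come back "not significant" even though the effect exists.

```python
# Sketch: many studies each test a small but genuinely real effect.
# Most fail to reach |z| > 1.96, so a pile of null results can coexist
# with a world where mild causal effects are everywhere.
import math
import random

random.seed(0)

def simulate_detection(true_effect=0.05, n=100, studies=10_000):
    """Fraction of studies whose z-statistic clears the 1.96 threshold."""
    se = 1 / math.sqrt(n)  # standard error of a mean with unit-variance noise
    detected = 0
    for _ in range(studies):
        estimate = true_effect + random.gauss(0, se)  # noisy estimate
        if abs(estimate / se) > 1.96:
            detected += 1
    return detected / studies

power = simulate_detection()
print(f"share of real (but mild) effects flagged significant: {power:.1%}")
```

With these numbers the detection rate comes out well under half, i.e. the typical study of a mild effect reports a null.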

2020 Top Charity Ideas - Charity Entrepreneurship

Thanks for the post. It's good to see some investment on the risk-loving side of things. However, I am a little disappointed that none of these charities are long-term-related or meta. I'm not super hardline, but there is a soft consensus in the EA community that these things are important. Does anyone know why Charity Entrepreneurship doesn't prioritize them? I could see the argument that it is hard to run a long-term-focused charity, though I haven't thought much about it. Is there another incubator that focuses on these areas? Otherwise it seems like a really promising area to push on.

As a side note, I agree with Misha that decentralized mental health interventions could be cause-y, and I would love to see more done in this area. Anyone looking in this direction might want to check out , which tries to apply behavioral research to help people "stick" to their goals.