All of cflexman's Comments + Replies

I really appreciate you looking into this topic. I think you want to have much, much bigger error bars on these, however. Interventions like this are known to have massive selection effects and difficulty with determining causality, so giving point estimates kind of sweeps under the rug the main thing I'm interested in: whether these interventions actually work.

For example, ACE had a problem similar to this when it was beginning. For one of the charities, they relied on survey data to look for an effect and gave estimates of how effective intervention...

9
Akhil
1y
Hi cflexman, I think these are valuable comments, and you are absolutely correct. Limited time meant that I (1) was very short-hand in how I aggregated effect sizes/results from academic studies, and (2) used simplistic point estimates. Ideally, I would have done a meta-analysis-style method with risk of bias assessment etc. My main limitation is a frustrating one: time. I did try to caveat that by making all my shorthands and uncertainties explicit, but I don't think I quite succeeded at that. One area I would push back on is the comments regarding social interventions and survey data: the methods in most/all of these studies are surveys asking women whether they have experienced violence in the last year. To me, this seems pretty robust, and as long as the surveys are conducted to a high standard with low risk of bias (which most of the studies have dedicated sections explaining how they tried to do this, to varying degrees of success), I think this is credible and internally valid data.

A similar position to David's might be that bioethics institutions are bad for the world, while being agnostic about academia. I don't know much about academic bioethicists and you might be right that their papers as a whole aren't bad for the world. But bioethics think tanks and NGOs seem terrible to me: for example, here's a recent report I found pretty appalling (short version, 300-page version).

Looks like a great idea, very glad someone is pursuing the roll-up-your-sleeves method here.

I think the best addition to this that you could make is a business plan—basically, how much would it cost to replicate how many studies, how would you best choose studies for replication to maximize efficiency / impact, how much / how long until you were replicating 1 or 10% of top studies, etc. I'd also personally like to see a different version of "what has been achieved" that didn't lean as much on collaborations / work of collaborators, as I find these basically meaningless.

1
Michael_Wiebe
2y
The budget section (omitted here) has more of these details. Re: selection, the idea is systematically replicating all new research in top journals, to change researchers' expectations from (a) expecting to have basically no one scrutinize their work to (b) expecting at least some post-publication review. This incentivizes researchers to improve the quality of their work. Re: collaborators, I4R currently works by asking academics to volunteer to replicate papers.

This seems like a really great thing to try at small scale first. Seems important to have a larger vision but make Little Bets to start, as Peter Sims or Cal Newport would say. You don't want to start with 30+ people with serious expertise at 90% likelihood of conversion because you want to anneal into a good structure, not bake your early mistakes into the lasting organizational culture. (Maybe you had already planned on this but seems worth clarifying as one of the most common mistakes made by EAs.)

4
Gavin
2y
Great point. We haven't made any irreversible decisions: we're letting the new director (someone with actual ops chops) design the sample path.

Single issue lobbying group called "2%", perhaps. Or 5% if NGDP.

Some other possible takeaways that I would lean toward:

  • Try to fund groups which will pivot on their advocacy faster
  • Fund advocacy of the opposite, now
  • Go further and try funding or creating a think tank that is actually committed to targets instead of unidirectional force
9
Hauke Hillebrandt
2y
Yes, I agree that there are drawbacks to funding 'single issue'-ish lobbying groups that can't pivot. For instance, if your org's name is 'EmployAmerica' (which is OpenPhil-funded) and then you see even Krugman saying yesterday that the 'job market is running unsustainably hot. Cooling that market off will probably require accepting an uptick in the unemployment rate', then it's hard to pivot and argue against employment... so they started using awkward language last week. Whereas if your group's name were "Optimal Macroeconomic Policy", that might be better. But then perhaps the group's focus is not as value-aligned with the funder, because different people have different ideas about what optimal means.

I actually noticed that when I crowdfunded for the Center for Clean Energy Innovation at ITIF, which is a think tank focused on, well, clean energy innovation. The policy they work on that I think is the most effective is Mission Innovation, an initiative to coordinate clean energy R&D spending globally (which incidentally just got an honorable mention in the FTX Future Fund ideas thread). But because the Center is run by an academic, they also do other things in the area of clean energy innovation, which might dilute their impact (or, probably they know better than me, because it's run by an academic with a lot of experience in this field).

It seems a bit too late... central banks are already pivoting, and I think the advocacy groups are coming around to it. Yes, maybe one should consider funding nominal GDP targeting initiatives, like the Koch brothers. Or more generally.

I definitely agree re antitrust; it seems like a slam-dunk. If I have time after this case, I was thinking about slowly reaching out to try to elicit an American version from someone, or finding out why that's not on the table. I've been made quite aware of how much I don't know about ongoing projects in this space.

I did email ~20 of them about drafting amicus briefs and didn't get any takers; plausibly they would be down to give some sort of lesser help if you had ideas for what to ask for.

Good idea, I'll forward this. I'm focusing on US/Western profs for now because A) many Indian institutes are already involved, and India's profs seem to know about the case, and Sci-Hub's lawyers are much better connected there, and B) I think international/Western backing is an important source of clout diversification. Many Indian Supreme Court cases actually cite American amici as an important legal source.

I think backups of Sci-Hub would be a good idea if you can find any legal avenues to create them. I'm not sure if that's very tractable, and it doesn't appear to be all that neglected (though these are probably mostly in illegal jurisdictions).

Re scientific progress, I agree that it's not obviously a good thing, but after thinking about this extensively with little resolution, my conclusion is roughly: given that we cannot reasonably learn enough to resolve this uncertainty, and we can't coordinate on acting as if scientific progress is a negative thing, a...

I don't think the issue is that we don't have any people willing to be radicals and lose credibility. I think the issue is that radicals on a certain issue tend to also mar the reputations of their more level-headed counterparts. Weak men are superweapons, and groups like PETA and Greenpeace and Westboro Baptist Church seem to have attached lasting stigma to their causes because people's pattern-matching minds associate their entire movement with the worst example.

Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don't tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.

0
capybaralet
7y
Sure, but the examples you gave are more about tactics than content. What I mean is that there are a lot of people who are downplaying their level of concern about Xrisk in order to not turn off people who don't appreciate the issue. I think that can be a good tactic, but it also risks reducing the sense of urgency people have about AI-Xrisk, and can also lead to incorrect strategic conclusions, which could even be disastrous when they are informing crucial policy decisions. TBC, I'm not saying we are lacking in radicals ATM, the level is probably about right. I just don't think that everyone should be moderating their stance in order to maximize their credibility with the (currently ignorant, but increasingly less so) ML research community.

I really want to pull good insights out of this to improve the movement. However, the only thing I'm really getting is that we should think more about systemic change, which a) already seems to be the direction we're moving in and b) doesn't seem amenable to too much more focus than we are already liable to give it, i.e., we should devote some resources but not very much. My first reaction was that maybe Doing Good Better should have spent a little bit of time mentioning why this is difficult, but it's a book, and really had to make sacrifices when choosin...

3
Benjamin_Todd
9y
I think that might be fair. I was thinking more last night about what behaviour I'd actually change in light of this, and wasn't thinking of many concrete actions. The main area would be to improve how we talk about cause selection so people don't think we're ignoring the issues she raises.

I think it's very good Matthews brought this point up so the movement can make sure we remain tolerant and inclusive of people mostly on our side but differing on a few small points. Especially those focused on x-risk, if he finds them to be most aggressive, but really I think it should apply to all of us.

That being said, I wish he had himself refrained from being divisive with allegations that x-risk is self-serving for those in CS. Your point about CS concentrators being "damned if you do, damned if you don't" is great. Similarly, the point (yo...

If big donors feel better and donate more, I'm not convinced that is a neutral thing. If running a matching donation drive doesn't get more donations from the matchees but does pull more money from the matchers, that may have a fairly large effect. I have certainly thought about donating more money than I otherwise would have when I heard it could be used to run a matching fundraiser. If they truly don't attract more matchee funds then I suppose it is epistemically unvirtuous to ask matchers to donate, since this implies it has an effect, but nonetheless a...

1
Ben Kuhn
9y
Not sure if I made this clear in the post, but I'm looking at matching from the perspective of a potential matching donor, not from the charity's perspective. From the perspective of a (purely rational) matching donor, you shouldn't be concerned with whether the match pulls more money from you.

I find another motte-and-bailey situation more striking: the motte of "make your donations count by going to the most effective place" and the bailey of "also give all your money!"

I personally know a lot of people who have been turned off of effective altruism by the bailey here, and while some seem to disagree with the motte, they are legions fewer. In the discussion about how to present EA to those we know, I think in many circumstances I'd recommend sticking with the motte, especially until you know they are very on board with that, and perhaps until they come up with the bailey on their own.

Has anyone done an EA evaluation of (formerly B612) Sentinel Mission's expected value?

1
RyanCarey
9y
Not Sentinel Mission in particular, but some work has been done on asteroids. Basically, the value of asteroid surveillance for reducing extinction risk is small, as we have already identified basically all of the >1km asteroids, and that's the size they would need to be to cause an extinction-level catastrophe. That's to say nothing of the prospects for learning to intercept asteroids, or the prospects of preventing events that fall short of an extinction-level threat. The other thing to note here is that we've survived asteroids for lots of geological time (millions of years), so it would be really surprising if we got taken out by a natural risk in the next century. That's why people generally think that tech risks are more likely. I can't find much online, but there's this, and you could also search for Carl Shulman and Seth Baum, who might've also covered the issue.
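To make that base-rate argument concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, a rate of roughly one extinction-level impact per 100 million years (an assumed figure, not something established in the comment above), with each year treated as independent:

```python
# Back-of-the-envelope: how likely is an extinction-level asteroid impact
# within the next century, given the long geological track record?
# ASSUMPTION (illustrative only): about one extinction-level impact per
# 100 million years, with each year independent.

rate_per_year = 1 / 100_000_000  # assumed annual probability of such an impact
years = 100                      # one century

# Probability of at least one such impact over the century.
p_century = 1 - (1 - rate_per_year) ** years
print(f"Chance this century: ~{p_century:.1e}")  # roughly 1e-6
```

Under that rough assumption, the per-century chance comes out around one in a million, which is the sense in which a natural impact wiping us out in the next hundred years would be very surprising.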

I also find that it's frequently the most helpful to be only a little weird in public, but once you have someone's confidence you can start being significantly more weird with them, because they can't just write you off. You get most of the best of both worlds.

I'm a physics undergrad who is very interested in quantum computing. Interested to hear thoughts on it from someone who is a rationalist; if you would email me at Connor_Flexman AT brown DOT edu, it would be wildly helpful.

I've heard from several of my friends that EA is frequently introduced to them in a way that seems elitist and moralizing. I was wondering if there was any data on how many people learned about it through which sources. One possibility that came up was running tv/radio/internet ads for it (in a more gentle, non-elitist manner), in the hopes that the outreach and potentially recruited donors would more than pay back the original cost. Thoughts?