In general, I'm a big fan of approaches that are optimized around Value of Information. Given EA/longtermism's rapidly growing resources (people and $), I expect that acquiring information to make use of resources in the future is a particularly high EV use of resources today.
Congrats!
I think part of this is about EAs recalibrating what is "crazy" within the community. In general, I think the right assumption is that if you want $ to do basically anything, there's a good chance (honestly >50%) you can get it.
If you don't want someone to do something, it makes sense not to offer a large amount of $. For the second case, I'm a bit confused by this statement:
"the uncertainty of what the people would do was the key cause in giving a relatively small amount of money"
What do you mean here? That you were uncertain in which path was best?
Very interesting, valuable, and thorough overview!
I notice you mentioned providing grants of 30k and 16k that were or are likely to be turned down. Do you think this might have been due to the amounts of funding? Might levels of funding an order of magnitude higher have caused a change in preferences?
Given the amount of funding in longtermist EA, if a project is valuable, I wonder if amounts closer to that level might be warranted. Obviously the project only had 300k in funding, so that level of funding might not have been practical here. However, from the perspective of EA longtermist funding as a whole, routinely giving away this level of funding for projects would be practical.
I work in Democratic data analytics in the US and I agree that there's potentially a lot of value to EAs getting involved in the partisan side rather than just the civil service side to advance EA causes. If anyone is interested in becoming more involved in US politics, I'd love to talk to them. You can shoot me a message.
Hey; I work in US politics (in Data Analytics for the Democratic Party). Would love to chat if you think it would be useful for you.
Yes. People aren't spending much money yet because people will mostly forget about it by the election.
Independent of the desirability of spending resources on Andrew Yang's campaign, it's worth mentioning that this overstates the gains to Steyer. Steyer is running ads with little competition (which makes ad effects stronger), but the reason there is little competition is because decay effects are large; voters will forget about the ads and see new messaging over time. Additionally, Morning Consult shows higher support for Steyer than all other pollsters do. The average for Steyer in early states is considerably less favorable.
I'd be curious which initiatives CSER staff think would have the largest impact in expectation. The UNAIRO proposal in particular looks useful to me for making AI research less of an arms race and spreading values between countries, while being potentially tractable in the near term.
There are also other counterfactual matching opportunities that tend to arise around the same time, though.
Yeah, I don't think filling the finite universe we know about is where the highest expected value is. It's more likely in some form of possible infinite value, since it's not implausible that this could exist. But ultimately, I agree that the implications of this are minor and our response should basically be the same as if we lived in a finite universe (keep humanity alive, move values towards total hedonic utilitarianism, and build safe AI).
I'm not arguing for making false arguments; I'm just saying that if you have a point you can make around racial bias, you should make that argument, even if it's not an important point for EAs, because it is an important one for the audience.
I think this is rather weak and mostly arguing against a straw-man. I don't see Effective Altruists arguing that you should refrain from investments in your human capital. It makes sense to cut down on consumption (eg. eat out less). But I don't know of any EAs arguing that you should refrain from say buying books.
In general, I'm glad that it was included because it adds legitimacy to the overall argument with Vox's center-left audience.
I strongly prefer building legitimacy with true arguments. (I also expect trying to be rigorous and only saying true things will build better long-term legitimacy, though I think I would advocate for being truthful even without that.)
I found this really helpful; it gave me what I expect to be actionable information I can use in my own work (I work in Democratic politics). Much appreciated!
I agree that limitations on RCTs are a reason to devalue them relative to other methodologies. They still add value over our priors, but I think the best use cases for RCTs are when they're cheap and can be done at scale (Eg. in the context of online surveys) or when you are randomizing an expensive intervention that would be provided anyway such that the relative cost of the RCT is cheap.
When costs of RCTs are large, I think there's reason to favor other methodologies, such as regression discontinuity designs, which have fared quite well compared to RCTs (https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.22051).
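For readers unfamiliar with the mechanics, a sharp regression discontinuity design can be sketched in a few lines. Everything below is synthetic and purely illustrative (the running variable, cutoff, bandwidth, and effect size are made up, and this is not drawn from the linked paper): units are treated exactly when a running variable crosses a cutoff, and the treatment effect is estimated as the jump in the outcome at that cutoff.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
running = rng.uniform(-1, 1, size=n)   # e.g. a vote-share margin
treated = running >= 0.0               # sharp cutoff at zero
true_effect = 2.0
outcome = 1.5 * running + true_effect * treated + rng.normal(0, 0.5, n)

# Local linear fit within a bandwidth on each side of the cutoff.
bw = 0.5
left = (running < 0) & (running > -bw)
right = (running >= 0) & (running < bw)

def fit_at_cutoff(x, y):
    # Intercept of a least-squares line, i.e. the fitted value at x = 0.
    slope, intercept = np.polyfit(x, y, 1)
    return intercept

# RDD estimate: difference between the two fitted values at the cutoff.
effect = (fit_at_cutoff(running[right], outcome[right])
          - fit_at_cutoff(running[left], outcome[left]))
print(round(effect, 2))
```

With enough observations near the cutoff, the estimate recovers the simulated effect closely; in practice, bandwidth choice and functional form are where RDDs earn or lose their credibility.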
FYI, I'm pretty busy over the next few days, but I'd like to get back to this conversation at some point. If I do, it may be a bit, though.
To your first comment, I disagree. I think it's the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?
I think there is more total pain. I'm not counting the # of headaches. I'm talking about the total amount of pain.
Can you define S1?
We may not, as these discussions tend to go. I'm fine calling it.
I think we have to get closer to defining a subject of experience, (S1); I think I would need this to go forward. But here's my position on the...
Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject.
That's what I'm interested in a definition of. What makes it a "single subject"? How is this a binary term?
I am making a greater than/less than comparison. That comparison is with pain which results from the neural chemical reactions. There is more pain (more of these chemical reactions based experiences)...
1) I'd like to know what your definition of "subject-of-experience" is.
2) For this to be true, I believe you would need to posit something about "conscious experience" that is entirely different from everything else in the universe. If, say, factory A produces 15 widgets, factory B produces 20 widgets, and factory C produces 15 widgets, I believe we'd agree that the number of widgets produced by A+C is greater than the number of widgets produced by B, no matter how independent the factories are. Do you disagree with this?
Similarly, I'd say if 15 n...
I'd say I'm making two arguments:
1) There is no distinct personal identity; rather it's a continuum. The you today is different than the you yesterday. The you today is also different from the me today. These differences are matters of degree. I don't think there is clearly a "subject of experience" that exists across time. There are too many cases (eg. brain injuries that change personality) that the single consciousness theory can't account for.
2) Even if I agreed that there was a distinct difference in kind that represented a consistent person...
It's the same 5 headaches. It doesn't matter if you're imagining one person going through it on five days or imagine five different people going through it on one day. You can still imagine 5 headaches. You can imagine what it would be like to, say, live the lives of 5 different people for one day with and without a minor headache, just as you can imagine living the life of one person for 5 days with and without a headache. The connection to an individual is arbitrary and unnecessary.
Now this goes into the meaningless of personhood as a concept, but what wou...
I think this is confusing means of estimation with actual utils. You can estimate that 5 headaches are worse than one by asking someone to compare five headaches vs. one. You could also produce an estimate by just asking someone who has received one small headache and one large headache whether they would rather receive 5 more small headaches or one more large headache. But there's no reason you can't apply these estimates more broadly. There's real pain behind the estimates that can be added up.
If a small headache is worth 2 points of disutility and a large headache is worth 5, the total amount of pain is worse because 2*5>5. It's a pretty straightforward total utilitarian interpretation. I find it irrelevant whether there's one person who's worse off; the total amount of pain is larger.
I'll also note that I find the concept of personhood to be incoherent in itself, so it really shouldn't matter at all whether it's the same "person". But while I think an incoherent personhood concept is sufficient for saying there's no difference if it's spread out over 5 people, I don't think it's necessary. Simple total utilitarianism gets you there.
Choice situation 3: We can either save Al and four others, each from a minor headache, or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache.
I think you're making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.
On this topic, I similarly do still believe there’s a higher likelihood of creating hedonium; I just have more skepticism about it than I think is often assumed by EAs.
This is the main reason I think the far future is high EV. I think we should be focusing on p(Hedonium) and p(Dolorium) more than anything else. I'm skeptical that, from a hedonistic utilitarian perspective, byproducts of civilization could come close to matching the expected value from deliberately tiling the universe (potentially multiverse) with consciousness optimized for pleasure or pain. If p(H)>p(D), the future of humanity is very likely positive EV.
In most cases, I expect interventions to impact policy to also have diminishing marginal returns. E.g., an experiment on legislative contacts found little increased effect with more calls (https://link.springer.com/article/10.1007/s11109-014-9277-1).
(Global catastrophic risks: Fund CEPI, the Coalition for Epidemic Preparedness Innovations.) looks interesting. It looks like they have a goal of raising 1B dollars (http://www.sabin.org/updates/blog/cepi-new-approach-epidemic-preparedness). My impression is that they are likely to meet this, but I may be mistaken. Would additional funding to CEPI likely be counterfactual?
Sure, this material is most important for EAs. However, it could be used to raise funding from EAs that would then be used to secure even more funding from the public sector in a way that's more difficult for AI safety.
Really exciting work! This seems like an intervention that could potentially be funded with public resources more easily than AI safety research could, which opens up another avenue to funding.
I see how this could be very useful in the event of a nuclear war, but I do have some skepticism about how useful these alternative foods would be for a less severe shortage. With a 10% reduction in agricultural productivity, why do you think alternative foods that don't need sunlight could be cheaper than simply expanding how much usable land we devote to agriculture/using land to grow products that are cheaper per calorie?
As a quick update, I also tried something similar on the EA survey to see whether making certain EA considerations salient would impact people's donation plans. The end result was essentially no effect. Obligation, Opportunity, and emphasizing cost benefit studies on happiness all had slightly negative treatment effects compared to the control group. The dependent variable was how much EA survey takers reported planning to donate in the future.
It might make a lot of sense to test the risk vs. accidents framing on the next survey of AI researchers.
I disagree. I believe good ballot measure polling should more accurately reflect the actual language that would appear on the ballot. There's a known bias towards voters being more likely to support simpler language.
Unless this is an extremely expensive measure (which it probably won't be), I don't think that assumption is correct. Most voters will probably never hear about the initiative before they see it on the ballot/will have seen a cursory ad that they barely paid attention to.
Cool; had missed that row. Yeah, if it polls at 70%, the chance of passage might be close to 80%. Conditional upon that level of support, your estimate seems reasonable to me (assuming the ballot summary language would not be far more complex than the polled language).
Yeah, I agree that it being an effective treatment is a necessary precursor to it being a good ballot law to pass by ballot initiative and part of the EV calculation for spending money on the ballot measure itself.
That seems similar to Milan_Griffes' approach. However, when we're comparing ballot measures to other opportunities, I think the relevant cost to EA would be the cost to launch the campaign. That's what EAs would actually be spending money on and what could be spent on other interventions.
We don't have to assume away the additional costs of getting the medicine, but that can be factored into the benefit (i.e., the net benefit is the gains they would get from the medicine minus the gains they lose from giving up the funds to purchase the drugs).
Hey; I made some comments on this on the doc, but I thought it was worth bringing them to the main thread and expanding.
First of all, I'm really happy to see other EAs looking at ballot measures. They're a potentially very high-EV method of passing policy/raising funding. They're particularly high value per dollar when spending on advertising is limited or zero, since the increased probability of passage from getting a relatively popular measure on the ballot is far more than the increased probability from spending the same amount advertising for it.
Also, a...
Thanks!
I adapted that framing from Will MacAskill (example of this starting 12:45 in the podcast with Sam Harris here: https://www.samharris.org/podcast/item/being-good-and-doing-good). MacAskill refers to the framing as "Excited Altruism." It might come across better when he tells it than in a web survey. But I think it's pretty similar. I grouped this in with "opportunity", which I've also seen called "exciting opportunity" in the EA community (http://lukemuehlhauser.com/effective-altruism-as-opportunity-or-obligation/).
Bu...
Yeah, the survey was a lot longer. Typically, general public surveys will cost over 10 dollars per complete, so getting 1,200 cases for a survey like this can cost thousands of dollars.
I agree that model specification can be tricky, which is a reason I felt it well worth it to use the proprietary software I had access to that has been thoroughly vetted and code reviewed and is used frequently to run similar analyses rather than trying to construct my own.
I did not make sure people read the paragraph. I discussed the issue a bit in my discussion section, but on...
No; I did not fit multiple models. Lasso regression was used to fit a propensity model using the predictors.
Using bachelor's vs. non-bachelor's has advantages in interpretability, so I think this was the right move for my purposes.
I did not spend an exorbitant amount of time investigating diagnostics, for the same reason: I used a proprietary package that has been built for running these tests at a production level and has been thoroughly code reviewed. I don't think it's worth the time to construct an overly customized analysis.
Sure, in an ideal world, software would all be free for everyone; alas, we do not live in such a world :p. I used the proprietary package because it did exactly what I needed and doesn't require writing STAN code or anything myself. I'd rather not re-invent the wheel. I felt the tradeoff of transparency for efficiency and confidence in its accuracy was worth it, especially since I wouldn't be able to share the data either way (such are the costs of getting these questions on a 1200 person survey without paying a substantial amount).
But the basic model was just a multilevel binomial model predicting the dependent variable using the treatments and questions asked earlier in the survey as controls.
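To make the shape of that analysis concrete: here is a deliberately simplified, hypothetical sketch of the approach described above (a binomial model predicting a binary outcome from treatment indicators plus pre-treatment controls, with the average treatment effect taken as the difference in predicted probabilities). The original analysis used a proprietary multilevel R package and real survey data; this flat logistic model on synthetic data is only an illustration of the idea, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
control_var = rng.normal(size=n)      # stand-in for e.g. prior donations
treat = rng.integers(0, 2, size=n)    # random treatment assignment

# Simulate a binary outcome with a true positive treatment effect.
logit = -0.5 + 0.8 * control_var + 0.6 * treat
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), control_var, treat])
beta = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / n

# ATE: mean difference in predicted probability, treated vs. untreated.
X1, X0 = X.copy(), X.copy()
X1[:, 2], X0[:, 2] = 1, 0
ate = np.mean(1 / (1 + np.exp(-X1 @ beta)) - 1 / (1 + np.exp(-X0 @ beta)))
print(round(ate, 3))
```

Including a predictive pre-treatment control (like prior donation behavior) shrinks the residual variance, which is why the controls mattered for detecting small treatment effects.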
Unfortunately, because I used proprietary survey data/a proprietary R package to run this analysis, I don't think I'll be able to share the data and code.
Yup, binomial.
The respondents in a treatment were each shown a message and asked how compelling they thought it was. The control was shown no message.
Yeah; the plots are the predicted values for those given a particular treatment, and the Average Treatment Effect is the difference with the control.
I did not include every control used in the provided questionnaire. There were a mix of demographics/attitudinal/behavioral questions asked in the survey that I also used. These controls, particularly previous donations, were important for decreasing variance.
I use...
I agree that the modal outcome of a Trump presidency is that he changes little and the Democrats come out stronger at the end of his presidency than they entered. However, I still think it would have been better that Clinton had won (even if we assume the same congress).
The most important reason is tail risk. As others have commented, the risk of nuclear war may be greater under Trump than it would have been under Clinton. So far, he seems to be pursuing a more conventional foreign policy than I feared, but I still believe the risk is higher than with Clin...
Thanks for the write-up. I think you make a compelling case that this is more effective than canvassing, which can cost over 1,000 dollars per marginal vote in a competitive election like 2016. I do think there are a few ways your estimate may be an overestimate, though.
Of those who claimed they would follow through with vote trading, some may not have. You mention that there wouldn't have been much value to defecting. However, much of the value of a vote for individual comes from tribal loyalties rather than affecting the outcome. That's why turnout is...
This sounds really great to me. I love the idea of having more RCTs in the EA sphere. I would definitely record how much they are giving 1 year later.
I also think it's worth having a hold-out set. People can pre-register the list of friends, then a random number generator can be used to randomly select some friends not to make an explicit GWWC pitch to. It's possible many of the friends/contacts who join GWWC and start donating are those who have already been exposed to EA ideas before over a long period of time, and the effect size of the direct GWWC pi...
You can't look at aggregate turnout numbers being different and assume the composition of turnout was different. You're making the assumption that there was 0 movement from Obama to Trump or from Romney to Clinton; both of which are definitely incorrect as evidenced by polling.
Secondly, turnout is much higher than it appears; much more will come in from California, Washington, Oregon and Colorado. It always takes these states forever to report. So the turnout numbers now are misleading.
At most, campaign funds would have moved this a point or two. Campaign funding has little impact on presidential elections; Clinton far outspent Trump, and Trump was far outspent in the primary election. If we assume an effect size of 5% for all of Trump's money and assume no diminishing marginal returns (both very generous assumptions), that 0.15% is 0.0075 percentage points in movement. The outcome was decided by about 1 point, so that's over two orders of magnitude lower than what was needed under generous assumptions. It was probably more orders of magnitude lowe...
Thiel had essentially nothing to do with the outcome of this election.
This was not primarily a turnout issue. Black turnout was down, but Hispanic turnout was up. White turnout appears relatively flat (both Democratic and Republican white turnout), but we'll know more when actual person level vote history is released. Regardless, EA messaging is not the right way to appeal to Berners.
The easiest way to shift the outcome of the election would have been to change public opinion by a point or two by shifting the narrative of the race in the final week. Comey was successful at doing this.
"My view is that - for the most part - people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it."
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because utilitarianism encourages caring about collective success rather than individual success. Having a positive community and trust helps achieve these outcomes. If you have universalist moralities, it's harder for defection to make sense.
Broadly, I think that w... (read more)