I completed a PhD in the statistics of brain imaging, and now work as a data scientist in industry.
One of the topics I hope to return to here is the importance of histograms. They're not a universal solvent, but they are easily accessible without background knowledge, and as a summary of results they require fewer parametric assumptions.
I very much agree about the reporting of means and standard deviations, and how much a paper can sweep under the rug by that method.
Nice example, I see where you're going with that.
I share the intuition that the second case would be easier to get people motivated for, as it represents more of a confirmed loss.
However, as your example shows, the first case could actually lead to an 'in it together' effect on co-ordination, assuming the information is taken seriously. That assumption is the hard part: in advance, this kind of situation could encourage a 'roll the dice' mentality.
I also think it would be a lot more helpful to walk through how this mistake could happen in some real scenarios in the context of EA.
Hopefully, we'll get there! It'll be mostly Bayesian though :)
Thanks - that last link was one I'd come across and liked when looking for previous coverage. My sole previous blog post was about Pascal's Wager. When speaking about it, though, I found I was assuming too much background for some of the audience I wanted to bring along; notwithstanding my sloppy writing :D So, I'm going to attempt to stay focused and incremental.
As long as the core focuses on unusual priorities – which using neglectedness as a heuristic for prioritization makes likely – there's a risk that new members are surprised when they find out about those priorities.
Perhaps there are also some good reasons that people with different life experience both a) don't make it to 'core' and b) prioritize more near term issues.
There's an assumption here that weirdness alone is off-putting. But technologists, for example, are used to seeing weird startup ideas and considering them on their merits.
This suggests a next thing to find out is: who disengages and why.
TL;DR's for the EA Forum/Welcome: "Effective altruists are trying to figure out how to build a more effective AI, using paperclips, but we're not really sure how it's possible to do so."
Perhaps EA's roots in philosophy lead it more readily to this failure mode?
Take the diminishing marginal returns framework above. Total benefit is not likely to be a function of a single variable 'invested resources'. If we break 'invested resources' out into constituent parts we'll hit the buffers OP identifies.
Breaking into constituent parts would mean envisaging the scenario in which the intervention was effective and adding up the concrete things one spent money on to get there: does it need new PhDs minted? There's a related operational analysis about timelines: how many years for the message to sink in?
Also, for concrete functions, it is entirely possible that the sigmoid curve is almost flat up to an extraordinarily large total investment (regardless of any heights it may eventually reach). This is related to why ReLU activations displaced sigmoids in neural networks: where a sigmoid saturates, its near-zero gradients prevent learning.
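To make the flat-sigmoid point concrete, here's a minimal sketch with a hypothetical logistic benefit curve whose midpoint sits at £1bn of total investment. Every parameter (scale, midpoint, steepness) is made up for illustration; the point is only that marginal returns well below the midpoint are essentially zero, however high the curve eventually climbs.

```python
import math

def total_benefit(invested, scale=1.0, midpoint=1e9, steepness=1e-8):
    """Hypothetical logistic (sigmoid) total-benefit curve of invested resources."""
    return scale / (1.0 + math.exp(-steepness * (invested - midpoint)))

def marginal_benefit(invested, eps=1.0, **kw):
    """Finite-difference estimate of the marginal return at a given investment."""
    return (total_benefit(invested + eps, **kw) - total_benefit(invested, **kw)) / eps

# At £1m of investment the curve is nearly flat; at the £1bn midpoint
# the marginal return is thousands of times larger.
print(total_benefit(1e6), marginal_benefit(1e6))
print(total_benefit(1e9), marginal_benefit(1e9))
```

On these (arbitrary) numbers, anyone evaluating the intervention at realistic funding levels would observe almost no benefit per pound, even though the curve later rises steeply.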
I would spend every penny unblocking the pathway to a vaccine.
The basic ideas and test candidates are already known. The lag between now and mass roll out is therefore (mostly) dependent on our organizational skills.
<waving hands> UK GDP is ~£2.9 trillion. The recession will shave at least 10% off that. The government takes ~30% of GDP in tax. If bringing forward mass vaccination could shave a quarter off an 18-month recession, it would be roughly neutral in GDP terms to pay £100 billion to do it. So, if some of the above sounds too expensive, it's because a bigger budget is necessary and likely justified. It would take a correction of order 1000x to change this reasoning. </waving hands>
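Making the hand-waving checkable: plugging in only the figures above, the GDP recovered by shortening the recession comes out near the £100 billion cited, while the tax-revenue share alone would be nearer £33 billion.

```python
# Back-of-envelope check using only the figures in the comment above
# (hand-waved inputs, not real fiscal data).
gdp = 2.9e12            # UK GDP, ~£2.9 trillion
recession_hit = 0.10    # recession shaves at least 10% off GDP
tax_share = 0.30        # government takes ~30% of GDP in tax
recession_years = 1.5   # an 18-month recession
fraction_shaved = 0.25  # vaccination shaves a quarter off it

annual_loss = gdp * recession_hit                          # ~£290bn per year
gdp_recovered = annual_loss * recession_years * fraction_shaved
tax_recovered = gdp_recovered * tax_share

print(f"GDP recovered: £{gdp_recovered / 1e9:.0f}bn")  # ~£109bn
print(f"Tax recovered: £{tax_recovered / 1e9:.0f}bn")  # ~£33bn
```

So the £100 billion figure is about right for the GDP saved; strict tax-revenue neutrality would cap the budget nearer £30 billion, which doesn't change the order-of-magnitude conclusion.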
"Our actions have dominating long-term effects that we cannot ignore."
To me, this is a strange intuition. Most actions by most people most of the time disappear like ripples in a stream.
If this were not the case, reality would tear under the weight of the schemes people in the past had for the present. Perhaps it is actually hard to change the course of history?
This is a nice piece of accessible scholarship. It would perhaps benefit from an explicit note on why the question is interesting in this context and to this audience.