I agree that the correlation between number of upvotes on EA forum and LW posts/comments and impact isn't very strong. (My sense is that it's somewhere between weak and strong, but not very weak or very strong.) I also agree that most of the reasons you list are relevant.
But how I'd frame this is that - for example - a post being more accessible increases the post's expected upvotes even more than it increases its expected impact. I wouldn't say "Posts that are more accessible get more upvotes, therefore the correlation is weak", because I think increased accessibility will indeed increase a post's impact (holding other factors constant).
Same goes for many of the other factors you list.
E.g., more sharing tends to both increase a post's impact (more readers means more opportunity to positively influence people) and signal that the post would have a positive impact on each reader (as that is one factor - among many - in whether people share things). So the mere fact that sharing probably tends to increase upvotes to some extent doesn't necessarily weaken the correlation between upvotes and impact. (Though I'd guess that sharing does increase upvotes more than it increases/signals impact, so this comment is more like a nitpick than a very substantive disagreement.)
Just to make sure it's clear: the claim is that the karma score of a forum post about a project does not correlate well with the project's direct impact? Rather than, say, that the karma score of a post correlates well with the impact of the post itself on the community?
I'd say it also doesn't correlate that well with its total (direct+indirect) impact either, but yes. And I was thinking more in contrast to the karma score being an ideal measure of total impact; I don't have thoughts to share here on the impact of the post itself on the community.
Thanks, that makes sense.
For my part, I upvote according to how valuable I think a post itself is, for me or for the community as a whole. At least, that's what I'm trying to do when I'm thinking about it logically.
Epistemic status: Experiment. Somewhat parochial.
I was looking at things other people had tried before.
How should we run the EA Forum Prize?
Cause-specific Effectiveness Prize (Project Plan)
Announcing Li Wenliang Prize for forecasting the COVID-19 outbreak
Announcing the Bentham Prize
$100 Prize to Best Argument Against Donating to the EA Hotel
Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission)
Cash prizes for the best arguments against psychedelics being an EA cause area
Debrief: "cash prizes for the best arguments against psychedelics"
A black swan energy prize
AI alignment prize winners and next round
$500 prize for anybody who can change our current top choice of intervention
The Most Good - promotional prizes for EA chapters from Peter Singer, CEA, and 80,000 Hours
Over $1,000,000 in prizes for COVID-19 work from Emergent Ventures
The Dualist Predict-O-Matic ($100 prize)
Seeking suggestions for EA cash-prize contest
Announcement: AI alignment prize round 4 winners
A Gwern comment on the Prize literature
[prize] new contest for Spaced Repetition literature review ($365+)
[Prize] Essay Contest: Cryonics and Effective Altruism
Announcing the Quantified Health Prize
Oops Prize update
Some thoughts on: https://groups.google.com/g/lw-public-goods-team
AI Alignment Prize: Round 2 due March 31, 2018
Quantified Health Prize results announced
FLI awards prize to Arkhipov’s relatives
Progress and Prizes in AI Alignment
Prize for probable problems
Prize for the best introduction to the LessWrong source ($250)
Go to the EA forum API or to the LW API and input the following query:
# view: "top"
meta: null # this seems to get both meta and non-meta posts
before: "10-11-2020" # or some date in the future
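The lines above are fragments from a larger GraphQL query. As a rough sketch (based on the public LessWrong/EA Forum GraphQL schema; the exact field names here are my assumption and may have drifted), the full query might look something like:

```graphql
{
  posts(input: {terms: {view: "top", meta: null, before: "10-11-2020", limit: 5000}}) {
    results {
      title
      slug
    }
  }
}
```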
Copy the output into a file, e.g. last5000posts.txt.
Search for the keyword "prize". On Linux one can do this with grep, piping through sed to produce a cleaner output:
grep "prize" last5000posts.txt
grep -B 1 "prize" last5000posts.txt | sed 's/^.*: //' | sed 's/\"//g' > last5000postsClean.txt
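If it's unclear why -B 1 helps: grep is case-sensitive, so "prize" matches the lowercase slug line but not a title containing "Prize", and -B 1 also prints the line just before each match. A toy reproduction (the sample lines are made up; the real API output may be shaped differently):

```shell
# Made-up sample mimicking title/slug line pairs in the API output:
cat > sample.txt <<'EOF'
"title": "Oops Prize update"
"slug": "oops-prize-update"
"title": "Some other post"
"slug": "some-other-post"
EOF

# "prize" matches the slug line; -B 1 also prints the title before it,
# then sed strips the field name and then the quotes:
grep -B 1 "prize" sample.txt | sed 's/^.*: //' | sed 's/"//g'
# Oops Prize update
# oops-prize-update
```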
Can't believe I forgot the D-Prize, which awards $20,000 USD for teams to distribute proven poverty interventions.
The Stanford Social Innovation Review makes the case (archive link) that new, promising interventions are almost never scaled up by already established, big NGOs.
I suppose I just assumed that scale-ups happened regularly at big NGOs, and I never bothered to look closely enough to notice that they didn't. I find this very surprising.
Taken from here, but I want to be able to refer to the idea by itself.
This spans six orders of magnitude (1 to 1,000,000 mQ), but I do find that my intuitions agree with the relative values, i.e., I would probably sacrifice each example for 10 equivalents of the preceding type (and vice-versa).
A unit — even if it is arbitrary or ad-hoc — makes relative comparison easier, because projects can be compared to a reference point, rather than directly with each other. It also makes working with different orders of magnitude easier: instead of asking how valuable a blog post is compared to a foundational paper, one can move up and down in steps of 10x, which seems much more manageable.
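As a toy sketch of moving in steps of 10x (the project types and numbers below are illustrative placeholders, not values from the post):

```python
import math

# Illustrative values in the arbitrary unit (mQ); the specific
# project types and numbers are made up for this sketch.
values_mQ = {
    "blog comment": 1,
    "blog post": 100,
    "foundational paper": 1_000_000,
}

def steps_of_10x(a: str, b: str) -> float:
    """How many factor-of-10 steps separate project a from project b."""
    return math.log10(values_mQ[b] / values_mQ[a])

# A foundational paper sits four 10x steps above a blog post:
print(steps_of_10x("blog post", "foundational paper"))  # 4.0
```

Comparing each project against the shared unit, rather than pairwise, is what keeps the six orders of magnitude manageable.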
The Good Judgment Open forecasting tournament gives a 66% chance that the answer to "Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020?" will be yes.
I think that 66% is a slight overestimate. Nonetheless, if a famine did hit, it would be terrible, as other countries might not be able to spare enough attention due to the current pandemic.
It is not clear to me what an altruist who realizes that can do, as an individual:
Donating to the World Food Programme, which is already doing work on the matter, might be a promising answer, but I haven't evaluated the programme, nor compared it to other potentially promising options (see https://forum.effectivealtruism.org/posts/wpaZRoLFJy8DynwQN/the-best-places-to-donate-for-covid-19, or https://www.againstmalaria.com/).
Did you mean to post this using the Markdown editor? Currently, the formatting looks a bit odd from a reader's perspective.
Ethiopia's Tigray region has seen famine before: why it could happen again - The Conversation Africa
Tue, 17 Nov 2020 13:38:00 GMT
The Tigray region is now seeing armed conflict. I'm at 5-10%+ that it develops into famine (regardless of whether it ends up meeting the rather stringent UN conditions for the term to be used), though I have yet to actually look into the base rate. I've sent an email to FEWS.net to see if they update their forecasts.
Excerpt from "Chapter 7: Safeguarding Humanity" of Toby Ord's The Precipice, copied here for later reference. h/t Michael A.
Many of those who have written about the risks of human extinction suggest that if we could just survive long enough to spread out through space, we would be safe—that we currently have all of our eggs in one basket, but if we became an interplanetary species, this period of vulnerability would end. Is this right? Would settling other planets bring us existential security?
The idea is based on an important statistical truth. If there were a growing number of locations which all need to be destroyed for humanity to fail, and if the chance of each suffering a catastrophe is independent of whether the others do too, then there is a good chance humanity could survive indefinitely.
But unfortunately, this argument only applies to risks that are statistically independent. Many risks, such as disease, war, tyranny and permanently locking in bad values are correlated across different planets: if they affect one, they are somewhat more likely to affect the others too. A few risks, such as unaligned AGI and vacuum collapse, are almost completely correlated: if they affect one planet, they will likely affect all. And presumably some of the as-yet-undiscovered risks will also be correlated between our settlements.
Space settlement is thus helpful for achieving existential security (by eliminating the uncorrelated risks) but it is by no means sufficient. Becoming a multi-planetary species is an inspirational project—and may be a necessary step in achieving humanity’s potential. But we still need to address the problem of existential risk head-on, by choosing to make safeguarding our longterm potential one of our central priorities.
Nitpick: I would have written "this argument only applies to risks that are statistically independent" as "this argument applies to a lesser degree if the risks are not statistically independent, in proportion to their degree of correlation." Space colonization still buys you some risk protection if the risks are not statistically independent but imperfectly correlated. For example, another planet definitely buys you at least some protection from absolute tyranny (even if tyranny in one place is correlated with tyranny elsewhere).
Here is a more cleaned up — yet still very experimental — version of a rubric I'm using for the value of research:
See also: Charity Entrepreneurship's rubric, geared towards choosing which charity to start.
I like it! I think that something in this vein could potentially be very useful. Can you expand more about the proxies of impact?
Sure. So I'm thinking that for impact, you'd have causal factors (scale, importance, relation to other work, etc.), but then you'd also have proxies of impact: things that you intuit correlate well with having an impact even if the relationship isn't causal. For example, having lots of comments praising a project doesn't normally cause the project to have more impact. See here for the kind of thing I'm going for.
If one takes Toby Ord's x-risk estimates (from here), but adds some uncertainty, one gets: this Guesstimate. X-risk ranges from 0.1 to 0.3, with a point estimate of 0.19, or 1 in 5 (vs 1 in 6 in the book).
I personally would add more probability to unforeseen natural risk and unforeseen anthropogenic risk.
The uncertainty regarding AI risk is driving most of the overall uncertainty.
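A minimal sketch of what "adding uncertainty" to per-risk point estimates can look like (the ranges below are placeholders, not Ord's numbers and not the linked Guesstimate model): sample each risk from a wide distribution, combine the samples as if the risks were independent, and look at the spread of the total.

```python
import math
import random

random.seed(0)

# Placeholder 90% ranges per risk; NOT Ord's estimates or the
# Guesstimate model, just an illustration of the method.
risk_ranges = {
    "unaligned AI": (0.02, 0.30),
    "engineered pandemics": (0.01, 0.10),
    "other anthropogenic": (0.01, 0.10),
    "natural": (0.0001, 0.001),
}

def sample_lognormal(low: float, high: float) -> float:
    """Sample from a lognormal whose 90% interval is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return math.exp(random.gauss(mu, sigma))

def total_risk() -> float:
    """Combine one sample per risk as if the risks were independent."""
    p_survive = 1.0
    for low, high in risk_ranges.values():
        p_survive *= 1.0 - min(sample_lognormal(low, high), 1.0)
    return 1.0 - p_survive

samples = sorted(total_risk() for _ in range(10_000))
median = samples[len(samples) // 2]
print(f"median total risk: {median:.2f}")
```

Because the AI range is by far the widest, its samples dominate the spread of the total, which is the pattern described above.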
2020 U.S. Presidential election to be most expensive in history, expected to cost $14 billion - The Hindu
Thu, 29 Oct 2020 03:17:43 GMT