Saw a WFP donation ad on Facebook, which reminded me to check the FAO, but it still does not appear to have put out a call for funding.
I've said for years that if there is a referendum / ballot initiative to switch from first-past-the-post to anything else, always say yes. Anything is better than first-past-the-post. Having said that... I have opinions.
First of all, before choosing a voting system you have to know whether there will be a single winner or many. Single-winner voting systems should not be used to run multi-winner elections, because single-winner district-based methods cannot produce a fair outcome; gerrymandering is always possible. Also, these methods pretty reliably magnify the power of large parties over small parties, which has always been frustrating to me (because I consistently dislike large parties as well as most smaller parties, and feel like there is no one to represent me). Multi-winner elections should use multi-winner voting systems! Naturally, my favorite systems are the one I designed, Simple Direct Representation, and the one that inspired it, Direct Representation, but since these systems will never happen, I'd recommend good old-fashioned Proportional Representation, Mixed-Member Proportional, STV, or any other proportional system that seems politically viable. Sometimes at night I dream of a meta-voting system where a country splits its legislature between two voting systems, and during every election there's a vote for which one people like better, which adjusts the relative influence of each system, and then... but never mind, no one would vote for it: it's too democratic.
As for single-winner systems, I rank them clearly in this order:
1. Score voting (a.k.a. range voting, cardinal voting), where each candidate is rated on a scale, e.g. 0 to 5 stars like the old Netflix system. (I was puzzled that Netflix killed off star ratings; they seemed to produce more accurate and meaningful recommendations than the new up/down system. The reason given for the change was not that it didn't work; the system worked quite well, but people didn't understand it. If it were up to me I'd focus on helping people understand it, rather than scrapping it.)
2. Condorcet methods, e.g. Ranked Pairs, which are based on preferential ballots (candidates listed in order of preference). A Condorcet method looks at each pair of candidates in isolation, with respect to all the ballots, and elects the candidate that wins a majority in every pairing against every other candidate. The problem is that there is not always a "Condorcet winner": with three candidates A, B, and C, it can happen that A beats B, B beats C, and C beats A. So a Condorcet system must also specify how to resolve such cycles. Ignorant people often promote "the preferential ballot" as a voting system, but a preferential ballot is just a ballot, not a voting system. IRV is far more popular than Condorcet, but it seems strictly worse: IRV has no underlying mathematical basis, has somewhat unstable behavior, and fails the monotonicity criterion. Also, I believe voters should be allowed to rank two candidates as equal (no preference between them) or express "no opinion"; Condorcet can support such features, while IRV cannot.
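To make the pairwise idea concrete, here is a minimal sketch (the ballot format and function names are my own, for illustration, not from any particular implementation) that looks for a Condorcet winner among preferential ballots; equal ranks and "no opinion" are supported by simply giving two candidates the same rank or omitting them:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate head-to-head,
    or None if there is a cycle (no Condorcet winner).

    Each ballot maps candidate -> rank (lower = more preferred). Equal
    ranks and omitted candidates ("no opinion") are allowed."""
    def beats(a, b):
        a_pref = sum(1 for blt in ballots
                     if blt.get(a, float("inf")) < blt.get(b, float("inf")))
        b_pref = sum(1 for blt in ballots
                     if blt.get(b, float("inf")) < blt.get(a, float("inf")))
        return a_pref > b_pref

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # e.g. A beats B, B beats C, C beats A: a tiebreak rule must decide

# Three preferential ballots over candidates A, B, C (rank 1 = favorite)
ballots = [{"A": 1, "B": 2, "C": 3},
           {"A": 1, "C": 2, "B": 3},
           {"B": 1, "A": 2, "C": 3}]
print(condorcet_winner(ballots, ["A", "B", "C"]))  # prints A
```

When this function returns None, a full method like Ranked Pairs takes over to resolve the cycle; that resolution step is where Condorcet methods differ from one another.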
I used to prefer Condorcet, probably because I learned about it first and liked the intuitive idea that "if a candidate is preferred by a majority of voters over all others, that candidate should win." I changed my mind for the following reasons:
1. Range Voting, as well as its simpler cousin Approval Voting, allows the outcome of an election to be measured numerically, which lets voters understand the popularity of candidates, which is relevant in future elections. You can say things like "minor-party candidate M had an average rating just one point behind the winner" or "the winner of this election had a lower average score than any previous winner". Condorcet does not allow this. If somebody comes up with a way to turn Condorcet or IRV results into simple numbers, I think somebody else could come up with a different way to do it, allowing confusing, competing numerical narratives about the results.
2. Score Voting allows more nuance. I can say "I like X a little more than Y, but I like Y a lot more than Z".
3. Score Voting works far better when the number of candidates is large. If there are 30 candidates, putting them all in a single preference order is impractical and burdensome for the voter unless ties are allowed.
4. Tallying results is easier with Score voting than Condorcet (though not as easy as Approval). Note that with computers we can calculate outcomes with an arbitrarily complex method, but computers can be hacked, so manual counting remains relevant.
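To illustrate how simple the tally is, here is a minimal sketch (the ballot format and names are my own, for illustration): each candidate's result is just a running sum of scores, which is friendly to manual counting, with a division at the end if you want averages rather than totals.

```python
def score_tally(ballots):
    """Average each candidate's scores across ballots; the highest
    average wins. Each ballot is a dict candidate -> score; abstaining
    on a candidate means leaving them out, so we divide by the number
    of ratings actually given."""
    totals, counts = {}, {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
            counts[cand] = counts.get(cand, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

ballots = [{"A": 6, "B": 4}, {"A": 2, "B": 9, "C": 7}]
print(score_tally(ballots))  # {'A': 4.0, 'B': 6.5, 'C': 7.0}
```

Note that the per-candidate sums can be accumulated precinct by precinct and simply added together, which is what makes manual counting tractable.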
5. If you wanted to know how happy people were with the outcome of an election, how might you ask them? "On a scale of 1 to 10, how happy are you with the outcome of the election?" That's asking for a score! Score voting simply turns this question into ballot form, so that if people answer honestly, it maximizes the average answer to the happiness question. A criticism of Score voting goes: "if you raise the rating of a less-preferred candidate L on your ballot, you could cause L to beat your preferred candidate P, whom you rated higher". But if this happens because the overall satisfaction of all voters is collectively higher, that's a fine outcome. I'm open to hearing about ways the Score voting system could be gamed, but such gaming is only interesting if other systems are not similarly vulnerable. (Edit: aha, here's the site where I learned about this idea.) The one "game" we can count on is something I'll call "spreading", where we spread out our true opinion across the full range of the ballot: if there are 3 candidates and my happiness would be 4/10 if A wins, 5/10 if B wins and 6/10 if C wins, I will spread this out to 0/10 for A, 5/10 for B and 10/10 for C. But every proposed voting system has something analogous to this.
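The "spreading" strategy described above is just a linear rescaling of one's honest ratings so that the least-liked candidate gets the minimum and the most-liked gets the maximum; a sketch (the function name and dict format are my own, for illustration):

```python
def spread(true_scores, lo=0, hi=10):
    """Rescale honest ratings linearly so the least-liked candidate
    receives `lo` and the most-liked receives `hi`."""
    mn, mx = min(true_scores.values()), max(true_scores.values())
    if mn == mx:  # indifferent between all candidates
        return {c: hi for c in true_scores}
    return {c: round((s - mn) / (mx - mn) * (hi - lo) + lo)
            for c, s in true_scores.items()}

# The 4/10, 5/10, 6/10 example from above becomes 0, 5, 10
print(spread({"A": 4, "B": 5, "C": 6}))  # {'A': 0, 'B': 5, 'C': 10}
```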
Approval voting is technically a version of Score voting that gathers relatively little information from voters: each candidate is scored either 0 or 1. Its virtues are that it is extremely simple and easy to implement, and I'm persuaded of its value on that basis. I also suspect that, statistically, across a large number of voters, Approval won't perform much worse than Score voting in practice. My intuition is this: consider one hundred voters who partially approve of a candidate C; they would like to rate this person 5/10, but an Approval ballot doesn't allow that. I suspect that roughly half of them will "approve" of this person, so that overall the results are similar to what Score voting would produce.
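That intuition is easy to check with a toy Monte Carlo run; here I assume (my assumption, purely for illustration) that a voter whose honest rating is s/10 approves with probability s/10:

```python
import random

random.seed(0)  # deterministic toy run

def approval_share(true_score, n_voters=100_000):
    """Fraction of voters who approve a candidate they would honestly
    rate true_score/10, if each voter approves independently with
    probability true_score/10."""
    approvals = sum(random.random() < true_score / 10 for _ in range(n_voters))
    return approvals / n_voters

# A 5/10 candidate gets ~50% approval, mirroring their mean Score rating
print(round(approval_share(5), 2))
```

Under this assumption the approval fraction converges on the mean Score rating (divided by 10) as the electorate grows, which is the sense in which the two systems should agree in practice.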
P(simulation | seems like HoH) >> P(not-simulation | seems like HoH)
Disagree: as a software engineer, my prior on the simulation hypothesis is extraordinarily low, because common sense and the laws of physics indicate convincingly that we don't live in a simulation. (The only plausible exception is if I am the only person in the simulation.)
I like Toby's point: it seems like the prior on "one person's influence over the future" should decrease over time, and the point that a significant fraction of all cognitively modern humans who have ever lived are alive today is well taken.
Meanwhile, on the topic of "having the prerequisite knowledge necessary to positively impact the long-term future": that quantity has been increasing over time, particularly in the last century, given developments in science, philosophy, rationality, etc., and it will surely keep increasing in the coming centuries provided civilization survives that long. Therefore, considering how society has neglected X-risks and civilization-destroying risks, this point in time seems very hingey, in the sense that we can probably already take actions that predictably and non-negligibly reduce cataclysmic risk. These actions may determine whether society survives long enough to reach a future time when our cluelessness is reduced and our knowledge and values are improved.
Something I didn't see mentioned in the above discussion is the idea that hingeyness may be unclear even in hindsight. Certainly before the 19th century there is an argument to be made that one could have little impact on the future unless one was, say, Isaac Newton, and even then one's impact was perhaps just to bring science to people a little earlier than would have happened otherwise. But what's more hingey, the 19th or 20th century? Well, when it comes to X-risks, there was no atomic bomb until after modern physics was discovered in the early 20th century, and therefore no MAD cold war... no risk of superbugs until modern medicine, etc. When it comes to risk against civilization, the 20th century seems more hingey than the 19th, but on other topics (like when the best time to be a scientist or engineer is) it is less obvious.
Certain early choices had a lot of impact. A classic example is the Qwerty keyboard; on the other hand, that layout was the choice of just one or two people, a choice no one else could have influenced. This reminds me of a general problem with the 19th century: opportunities to have an impact were rare, because there was, for example, no government funding for science. Note that a successor keyboard like Dvorak could have been designed by vastly more people, so I wonder if things could have gone differently: what if someone had gone with the flow, as I did with my own keyboard design? Would it have sold better? What if it had been sold in the 1920s instead of the 1930s? Or consider Esperanto: almost anyone could design a language. I've heard that Esperanto was largely forgotten when WWI happened, but what if an Allied commander had known about it, and observed that troops could communicate better if they had a common language? If we had a common language today, surely the world would be different; it's hard to be sure it would be better, but today many people have to spend vast amounts of time learning English before they can meaningfully affect the course of history.
So I'd say overall that the 20th century was much more hingey, though it's hard to see how to assign credit: do we credit scientists for what they discovered, politicians for the policies they instituted that created funding for science, public servants for how they ran new institutions, lawyers for the important cases they argued, activists for helping influence elections that led to policy, engineers for what they created, or companies that funded engineers? And what if communist China ultimately has the greatest impact, either by precipitating another world war, or by overturning democracy and free speech in favor of an authoritarian global regime in which the definition of truth can be chosen by the leadership?
So generally I think the knowledge we gather in the future will be crucial for our long-term future, but the things we do today will lay the foundation for that future, and perhaps this is the best thing to focus on: laying down a good foundation.
Each of us can contribute in our own way. As a software engineering veteran, I hope to contribute by designing foundational software, which could potentially act as an accelerator that brings benefits of the future to the present more quickly (my impact is no doubt eclipsed, however, by Steve Krause of Future of Coding, who succeeded where I failed in building a community, or by Bret Victor, who inspired countless people). If you work in medicine you might work on containing the risk of superbugs; if in politics, there are any number of causes that might help build a stable and prosperous world... we may be clueless now, but there are things we know, like: stability and prosperity good, war and catastrophe bad. And while rationalism is in its infancy, I think we have enough epistemological tools to point us in the right directions (my life might have gone quite differently if I had discovered rationalism and EA and left my religion fifteen years earlier!)
In any case, I'm not sure why we should be concerned with how hingey this century is: at least it's probably more hingey than the last century, and in any case we have to play the hand we're dealt. We are clueless about a great many things, but not about everything, which suggests a two-pronged course of action: first, work on reducing cluelessness (and on figuring out how to act in the face of cluelessness); second, help the future in ways we can understand, such as by reducing catastrophic risks.
When a post like this on cause prioritization appears on EA forums, I expect it to go something like this: "the marginal costs of the best interventions are lower in category A than category B, under the following assumptions (though if we tweak the assumptions in a certain way, B looks better)... so it's likely that we should fund those best interventions in category A rather than B."
This post, however, doesn't appear to be based on current EA thinking about the marginally-best interventions for climate change or global development. On climate, the last thing I heard was that clean energy R&D is most sorely needed... which agreed with my prior that molten salt reactors are awesome, scalable and sorely needed, plus the potential of things like enhanced geothermal energy to provide a transition path for the oil & gas industry, and then there are alternative fusion technologies like DPF and EMC2's Polywell (whose value to society is either zero or astronomical). Yet I don't see these on the chart of interventions. I don't recall what the best available interventions for global development are thought to be, but I thought I heard someone say that cash transfers likely weren't it.
And although we should be skeptical of "free lunch" interventions with negative costs, that doesn't mean they aren't real. Some of them could be great investments, and if so, should we not identify which ones are realistically negative-cost and invest in them?
(I only read the first half; sorry if I missed something important.)
I'm strongly inclined to support this, but the abstract doesn't say what the money would be spent on, or explain how this can lead to more spending on previously neglected R&D. Care to comment before I read the entire document?
Also, the very first graph says "CO2 emissions by region in the NPS", but what's the NPS?
Also, what is your relationship to the stated authors Hart & Cunliff [edit: I see they are not the authors, rather they are evaluated by the document], and how does Bill Gates fit in?
Well, I've worked on "non-EA projects" and I've "accrued career capital" (in the software industry), but I don't think I could just flip a switch and start working on EA projects with other EA people. At present it's easier to get into the EA hotel than to get a grant from an EA org, which in turn is probably easier than getting a job at an EA org. And note that if I got a grant I would still be isolated from other EAs, as I don't live near an EA hub; the EA hotel solves the "loneliness" problem.
Of note: this newer post argues persuasively for the hotel in a different way than OP or me.
Even addressed problems can be addressed inefficiently
This is a good generalization to make from the climate change post last month. I argued in a comment that while climate action is well-funded as a category, I knew of a specific intervention that seems important, tractable, and neglected. We can probably find similar niches in other well-funded areas.
I was increasingly seeing a movement in Foundation World towards better frameworks around understanding and reporting on net impact. While EA takes this idea to an extreme I didn't understand why this community needed to be so removed from the conversations (and access to capital) that were simultaneously happening in other parts of the social sector.
I suppose EA grew from a different group of people with a different mindset than traditional charities, so I wouldn't expect connections to exist between EA and other nonprofits (assuming this is a synonym for "the social sector") until people step forward to create them. Might we need liaisons focused full-time on bridging this gap?
At the beginning of the curve, down and to the left, we see that there is a smaller amount of capital circulating through approaches that aren't that effective.
On the far left, interventions can have negative value.
Thanks. Today I saw somebody point to Peter Singer and Toby Ord as the origin of EA, so I Googled around. I found that the term itself was chosen by 80,000 Hours and GWWC in 2012.
In turn, GWWC was founded by Toby Ord and William MacAskill (both at Oxford), and 80,000 Hours was founded by William MacAskill and Benjamin Todd.
(Incidentally, though, Eliezer Yudkowsky had used "effective" as an adjective on "altruist" back in 2007, and someone called Anand had made an "EffectiveAltruism" page in 2003 on the SL4 wiki; note that Yudkowsky started SL4 and LessWrong, and with Robin Hanson et al. started OvercomingBias.)
I thought surely there was some further connection between EA and LessWrong/rationalism (otherwise where did my belief come from?) so I looked further. This history of LessWrong page lists EA as a "prominent idea" to have grown out of LessWrong but offers no explanation or evidence. LessWrong doesn't seem to publish the join-date of its members but it seems to report that the earliest posts of "wdmacaskill" and "Benjamin_Todd" are "7y" ago (the "Load More" command has no effect beyond that date), while "Toby_Ord" goes back "10y" (so roughly 2009). From his messages I can see that Toby was also a member of Overcoming Bias. So Toby's thinking would have been influenced by LessWrong/Yudkowskian rationalism, while for the others the connection isn't clear.
that there should be a less painful way
as the least painful method of killing
It's a risk, to be sure, that the aggregate suffering of insects could exceed that of the same weight of cattle; however, it's probably uncommon to expect that, so I, like nshepperd, am curious where your expectation comes from. (Which reminds me, I'm sure glad somebody has ideas about how to do consciousness research; I couldn't possibly!)