All of james.lucassen's Comments + Replies

Hm. I think I agree with the point you're making, but not the language it's expressed in? I notice that your suggestion is a change in endorsed moral principles, but you make an instrumental argument, not a moral one. To me, the core of the issue is here:

If EA becomes a very big movement, I predict that individuals on the fringes of the movement will commit theft with the goal of donating more to charity, and violence against individuals and organisations who pose x-risks.

This seems to me more of a matter of high-fidelity communication than a matt... (read more)

if the AI is scheming against us, reading those posts won’t be very helpful to it, because those ideas have evidently already failed.

Pulling this sentence out for emphasis because it seems like the crux to me.

2
Peter S. Park
1y
Thanks so much for pointing this out, James! I must have missed it, and it is indeed an important crux. One threat model that implies a higher probability of failure rather than an unchanged probability of failure is that goal preservation by an agentic AI against a given SGD-based plan may be strictly easier with prior knowledge of what that plan is. If true, then the fixation probability of a misaligned AGI that can successfully preserve its misaligned goal could increase. A more general point is that this situation-specific analysis (of which AI safety plans could lose their value by being posted on the Internet, and which don't, or lose less) is difficult to do a priori. Reforming AI safety research norms to be more broadly pro-security-mindset might capture most of the benefits, even if it's a blunt instrument.

saving money while searching for the maximum seems bad

In the sense of "maximizing" you're using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc. 

However, I think the sense of "maximizing" used in the post you're responding to, and more broadly in EA when people talk about "maximizing ethics", is quite different. I understand it to mean something more like "doing the most g... (read more)

5
Davidmanheim
2y
I think you're right that there are two meanings, and I'm primarily pointing to the failures on the more obviously bad level. But your view - that no given level is good enough, and we need to do marginally more - is still not equivalent to the maximizing view that I see and worry about. The view I'm talking about is an imperative to only do the best thing, not to do lesser good things. And I think that the conception of binary effectiveness usually leads to the failure modes I pointed out. Unless and until the first half of Will's Effective Altruism is complete - an impossible goal, in my view - we need to ensure that we're doing more good at each step, not trying to ensure we do the most good, and nothing less.

Like the idea of having the place in the name, but I think we can keep that while also making the name cool/fun? 

Personally I wouldn't be opposed to calling EA spaces "constellations" in general, and just calling this one the "Harvard Constellation" or something. This is mostly because I think Constellation is an extraordinarily good name - it's when a bunch of stars get together to create something bigger that'll shine light into the darkness :)

Alternatively, "Harvard Hub" is both easy and very punchy.

I'm broadly on board with the points made here, but I would prefer to frame this as an addition to the pitch playbook, not a tweak to "the pitch".

Different people do need to hear different things. Some people probably do have the intuition that we should care about future people, and would react negatively to something like MacAskill's bottle example. But personally, I find that lots of people do react to longtermism with something like "why worry about the future when there are so many problems now?", and I think the bottle example might be a helpful intuition pump for those people.

The more I think about EA pitches the more I wonder if anyone has just done focus group testing or something...

1
TheOtherHannah
2y
Agree :)

yup, sounds like we're on the same page - I think I steelmanned a little too hard. I agree that the people making these criticisms probably do in fact think that being shot by robots or something would be bad.

I propose we Taboo the phrase "most important", and agree that it's quite vague. The claim I read Karnofsky as making, phrased more precisely, is something like:

In approximately this century, it seems likely that humanity will be exposed to a high level of X-risk, while also developing technology capable of eliminating almost all known X-risks.

This is the Precipice view of things - we're in a brief dangerous bottleneck, after which it seems like things will be much safer. I agree it takes a leap to forecast that no further X-risks will arise in the trillio... (read more)

tldr: I think this argument is in danger of begging the question, and rejecting criticisms that implicitly just say "EA isn't that important" by asserting "EA is important!"

There’s an analogy I think is instructive here

I think the fireman analogy is really fun, but I do have a problem with it. The analogy is built around the mapping "fire = EA cause areas", and gets almost all of its mileage out of the implicit assumption that fires are important and need to be put out.

This is why the first class of critics in the analogy look reasonable, and the sec... (read more)

6
Nick Whitaker
2y
So I definitely think I'm making a rhetorical argument there to make a point. But I don't think the problem is quite as bad as you imply here: I'm mostly using fire to mean "existential risks in our lifetimes," and I don't think almost any EA critic (save a few) thinks that would be fine. Maybe I should've stated it more clearly, but this is something most ethical systems (and common sense ethics) seem to agree upon. So my point is that critics do agree dying of an existential risk would be bad, but are unfortunately falling into a bit of parochial discourse rather than thinking about how they can build bridges to solve this. To the people who are actually fine with dying from x-risk ("fires aren't as important as you think they are"), I agree my argument has no force, but I just hope that they are clear about their beliefs, as you say.

Agree that the impactfulness of working on better government is an important claim, and one you don't provide much evidence for. In the interest of avoiding an asymmetric burden of proof, I want to note that I personally don't have strong evidence against this claim either. I would love to see it further investigated and/or tried out more.

1
mlsbt
2y
I don’t think asymmetric burden of proof applies when one side is making a positive claim against the current weight of evidence. But I fully agree that more research would be worthwhile.

All else equal I definitely like the idea to popularize some sort of longtermist sentiment. I'm still unsure about the usefulness - I have some doubts about the paths to impact proposed. Personally, I think that a world with a mass-appeal version of longtermism would be a lot more pleasant for me to live in, but not necessarily much better off on the metrics that matter.

  • Climate is a very democratically legitimate issue. It's discussed all the time, lots of people are very passionate about it, and it can probably move some pretty hefty voting blocs. I think
... (read more)
1
Jonathan Rystrom
2y
Thank you for your excellent points, James! Before responding to your points in turn, I do agree that a significant part of the appeal of my proposal is to make it nicer for EAs. Whether that is worth investing in is not clear to me either - there are definitely more cost-effective ways of doing that. Now to your points:
  • I think I define democratic legitimacy slightly differently. Instead of viewing it as putting pressure on politicians through them knowing that everyone cares about the long term, I see it as moving long-term policies within the Overton window, so to speak, by making them legitimate. Thus, it acts as a multiplier for EA policy work.
  • Wrt the talent pool, I think it depends on how tractable it is to "predict" the impact of a given individual. I would guess that mass appeal works better if it is harder to predict a priori the impact of a given person / group - then it becomes more of a numbers game of getting as many interested people as possible thinking about these issues. I am quite uncertain about whether this is the case, and I imagine there are many other constraints (e.g. the hiring capacity of EA orgs).
  • I fully agree that this is more of a "nice to have" than a huge value proposition. I'd never heard of the 14 words, but I do agree that the similarity is unfortunate. The slogan was also meant more as an illustration than a fully fledged proposal - luckily, it facilitates discussions like these!

Thanks for this post - dealing with this phenomenon seems pretty important for the future of epistemics vs dogma in EA. I want to do some serious thinking about ways to reduce infatuation, accelerate doubt, and/or get feedback from distancing. Hopefully that'll become a post sometime in the near-ish future.

So, pulling out this sentence, because it feels like it's by far the most important and not that well highlighted by the format of the post:

what is desired is a superficial critique that stays within and affirms the EA paradigm while it also checks off the boxes of what ‘good criticism’ looks like and it also tells a story of a concrete win that justifies the prize award. Then everyone can feel good about the whole thing, and affirm that EA is seeking out criticism.

This reminds me a lot of a point mentioned in Bad Omens, about a certain aspect of EA which ... (read more)

4
Guy Raveh
2y
Alternatively, for some of the goals and assumptions of EA which are subjective in nature or haven't been (or cannot be) assessed very well, e.g.:
  • "One ought to do the maximal possible amount of good"
  • "Donating to charity is a good way to improve the world"
  • "Representation of the recipients of aid is good, but optional"
...there's value in getting critique that takes those as granted and tries to improve impact, but there may be even greater value in critiques about why the goals and assumptions themselves are bad or lacking.

This seems great! I really like the list of perspectives, it gave me good labels for some rough concepts I had floating around, and listed plenty of approaches I hadn't given much thought. Two bits of feedback:

  • Editing nitpick: I think the perspective called "adaptation-enabling" in the list is instead called "capability-scalable" in the table.
  • The table format worries me. It frames the content as something like "if you have X level of tech optimism and Y level of gov optimism, perspective Z is the strategic view implied by those beliefs". I don't think this
... (read more)
1
MMMaas
2y
Thanks for the catch on the table, I've corrected it! And yeah, there's a lot of drawbacks to the table format -- and a scatterplot would be much better (though unfortunately I'm not so good with editing tools, would appreciate recommendations for any). In the meantime, I'll add in your disclaimer for the table. I'm aiming to restart posting on the sequence later this month, would appreciate feedback and comments.

Personal check-for-understanding: would this be a fair bullet-point summary?

  • Enthusiastically engaging with EA in college != actually having an impact
  • Quantifying the value of an additional high-impact EA is hard
  • Counterfactual impact of a community-builder is unclear, and plausibly negative
  • Assorted optics concerns: lack of rigor, self-aggrandizement, elitism, impersonality
1
Jelle Donders
2y
To add to this, I would like to emphasize the lack of reasoning transparency in the current estimates as one of our main concerns - not just the estimates of the value of additional high-impact EAs, but especially those of the value of community building roles at (top) universities and of the degree to which university groups 'create' these high-impact EAs (which becomes even more dubious when HEAs are used as a proxy metric for impact, for reasons similar to your first bullet point). We originally had these estimates in mind as the main red teaming topic, but we soon figured out there wasn't enough substance to turn this topic into a red team by itself, as the estimates mainly seemed to stem from guesstimates.
2
Kaleem
2y
Yep, that seems right

Yup, existing EAs do not disappear if we go bust in this way. But I'm pretty convinced that it would still be very bad. Roughly, the community dies, even if the people making it up don't vanish. Trust/discussion/reputation dry up, the cluster of people who consider themselves "EA" is now very different from the current thing, and that cluster kinda starts doing different stuff on its own. Further community-building efforts just grow the new thing, not "real" EA.

I think in this scenario the best thing to do is for the core of old-fashioned EAs to basically dissociate from this new thing, come up with a different name/brand, and start the community-building project over again.

But I am also afraid that ... we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them

 

I've had a model of community building at the back of my mind for a while that's something like this:

"New folks come in, and pick up knowledge/epistemics/heuristics/culture/aesthetics from the existing group, for as long as their "state" (wrapping all these things up in one number for simplicity) is "less than the community average". But this is essentially a one way diffusion sort of dynamic, which m... (read more)

6
Arden Koehler
2y
Why would the community average dropping mean we go bust? I'd think our success is more related to the community total. Yes, there are some costs to having more people around who don't know as much, but it's a further claim that these would outweigh the benefits.
7
James Brooks
2y
Related to that is "eternal September" https://en.wikipedia.org/wiki/Eternal_September. Each September, when new students joined, there was a period where the new users had not yet learnt the culture and norms, but since new users were the minority, they did learn the norms and integrate. Around 1993 a flood of new users overwhelmed the existing culture of online forums and the ability to enforce existing norms, and because of the massive and constant influx the norms and culture were permanently changed.

I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.

This is a great sentence, I will be stealing it :)

However, I think "having good legible epistemics" being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.

I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I've fou... (read more)

Hey, I really like this re-framing! I'm not sure what you meant to say in the second and third sentences tho :/

Question for anyone who has interest/means/time to look into it: which topics on the EA Forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) in various cause areas. Mostly interested in the situation now, but viewing changes over time would be great!

My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:

I expect longtermism to be WILDLY, like 20x, overrepresented. If this is the case I think it may be responsible for a lot of the recent angst about the relationship between longtermism and EA more broadly, and would point to some concrete actions to take.

2
DavidNash
2y
There was a post on this recently.
1
Kevin Lacker
2y
Even a brief glance through posts indicates that there is relatively little discussion about global health issues like malaria nets, vitamin A deficiency, and parasitic worms, even though those are among the top EA priorities.

This is something that I, and a lot of other organizers I've talked to, have really struggled with. My pet theory that I'll eventually write up and post (I really will, I promise!) is that you need Alignment, Agency, and Ability to have a high impact. Would definitely be interested in actual research on this.

1
Abby Hoskin
2y
Sounds really cool! Would love to hear more when you're ready :)

Nice work! Lots of interesting results in here that I think lead to concrete strategy insights.

only 7.4% of New York University students knew what effective altruism (EA) is. At the same time, 8.8% were extremely sympathetic to EA ... Interestingly, these EA-sympathetic students were largely ignorant about EA; only 14.5% knew about it before the survey.

This is a great core finding! I think I got a couple important lessons from these three numbers alone. Outreach could probably be a few times bigger without the proportion of EA students who know about it ge... (read more)

I'm unsure if I agree or not. I think this could benefit from a bit of clarification on the "why this needs to be retired" parts.

For the first slogan, it seems like you're saying that this is not a complete argument for longtermism - just because the future is big doesn't mean it's tractable, or neglected, or valuable at the margin. I agree that it's not a complete argument, and if I saw someone framing it that way I would object. But I don't think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It'... (read more)

6
Michael_Wiebe
2y
I do often see it used as an argument for longtermism, without reference to tractability. So: "What matters most about our actions is their very long term effects, but this is hard so in practice we focus on lock-in".  But why bother making the claim about our actions in general? It seems like an attempt to make a grand theory where it's not warranted.

Yes, 100% agree. I'm just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads as implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.

This is good and I want to see explicit discussion of it. One framing that I think might be helpful:

It seems like the cause of a lot of the recent "identity crisis" in EA is that we're violating good heuristics. It seems like if you're trying to do the most good, really a lot of the time that means you should be very frugal, and inclusive, and beware the in-group, and stuff like that.

However, it seems like we might live in a really unusual world. If we are in fact massively talent constrained, and the majority of impact comes from really high-powered talen... (read more)

2
PabloAMC
2y
Hey James! I think there are degrees, like everywhere: we can use our community-building efforts in more elite universities, without rejecting or being dismissive of people from the community on the basis of potential impact.

Talk to u/Infinity, I see them on the EA subreddit every now and then. They singlehandedly provide like 90% of the memes on there, and they're pretty good 👍

2
Linch
2y
Thank you. Can you point them to this post? 

Hi Organizers! The US requires proof of a negative COVID test to enter the country, even for citizens. Will/could you provide some advice or facilities at the conference for getting this? I (and I imagine many others) know literally nothing about the UK health system, am going to have to fly back to the US after the conference, and really don't want to get stuck in airport hell  :/

2
alex lawsen (previously alexrjl)
2y
(not an organiser but live in london) I've been recommended https://www.expresstest.co.uk/, and also biogroup in shoreditch. Both offer same-day results.

Oop, thanks for the correction. To be honest I'm not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/

I'll hop on the "I'd love to see sources" train to a certain extent, but honestly we don't really need them. If this is happening it's super important, and even if it isn't happening right now it'll probably start happening somewhat soon. We should have a plan for this.

Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:

  • Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyon
... (read more)

S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.

Why not?

Suggestion: the Future Fund should take ideas on a rolling basis, and assess them in rounds. EA is the kind of community where potentially good ideas bubble up all the time, and it would be a real shame if those were wasted because the funders only listen during narrow windows. Having an open drop-box to submit ideas costs FF almost nothing, and makes a bias-towards-action and constant passive brainstorming much easier.

Context: this idea

~20, assuming the trend is linear. If it's exponential, god help us all

There's a 5-minute video about this kind of thing from Rob Miles:

I guess the takeaway is something like:

Counter-framing: AI alignment [via ambitious value learning] as analogous to [figuring out how to build a system that won't destroy the world when you try and train it like] raising a child.

If this were fiction that would make Buck your manic-pixie-dream-girl and I find that hilarious.

9
gwern
2y
I humbly request a photo of Buck et al in a van with the caption "Get in loser we're going saving".

+1 to all the other resources in these answers, but never underestimate how useful it is to just get started! I keep this link bookmarked, which shows the currently-open Metaculus questions which will close soonest. Making quick predictions on these questions keeps the feedback loop as tight as possible (although it's still not that tight to be honest).

Also, Superforecasting is great but longer than it needs to be; I've heard that there are good summaries out there but don't personally know where they are.

4
Linch
2y
I like this summary from AI Impacts.

This looks great! I'm concerned that it won't get the traffic it needs to be useful to people. Have you considered/attempted reaching out to 80K to put a link on the job board or something? That's my go-to careers resource, and I think it's the main way I'd learn about the existence of something like this once this post is off the front page.

Anecdotally, I've found that describing EA as "a community of people trying to do as much good as possible with our time and money" gets a good response.

Agree that this is worth a shot, would be Huge if it worked. But it seems like Mr Beast and Mark Rober might be selecting causes to avoid controversy, which would make it hard to get EA through. Both of their platforms are mainly built on mass appeal. Planting trees and cleaning up the oceans are extremely uncontroversial causes - nobody is out there arguing that they do net harm. This is not the case with EA.

That said, if any of you folks went to high school with Mark Rober or something, I would still be extremely excited to try this. I have a 3rd or 4th degree connection to him, but that seems a bit too far to do much of anything.

1
Alex Barnes
2y
Color the Spectrum was controversial in the autistic community; I'm eager to see if Mark Rober and Mr Beast learn from the feedback and add more Autistic Self-Advocacy for 2022!
3
Heye Groß
2y
That's why I would suggest something like GiveDirectly or AMF, those seem uncontroversial (but probably a lot more effective than planting trees)

Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I'm confused by the line "I could create just a little more hedonium". My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?

I ended up interpreting things as if "hedonium" was meant to mean "utility", and the narrator is deciding what their last thought should be - how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly - or if I was incorrect, I hope this feedback is helpful  :)

3
Visa Om
3y
As Roland Barthes said, 'the author is dead' - but, in my book, your interpretation is right on the money. I liked your interpretation of how to create hedonium in such a circumstance!

...it was beautiful. And that is good.

~fin

Bro this is really scary. Well done.

Observation: prion-catalysis or not, any vaccine-evasion measures at all seem extraordinarily dangerous. For a highly infectious threat, the fastest response we have right now is mass vaccine manufacture, and that seems just barely fast enough. But our vaccine tech is public knowledge, and an apocalyptic actor can take all the time they want to design a countermeasure. 

Once a threat with any sort of countermeasure is released, we first have to go through a vaccine development cycle to find that out in the first plac... (read more)

I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable. 

What I am concerned might be intractable is deploying results. Public health (and all health-relevant products) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by food, exercise, sexual health, self-help, and other industries.

There's so much noise out there, it seems unlikely that a few EAs will be able to get a word in edgewise.

1
Michael_2358
3y
I agree on the challenges of deploying results. I think the primary value in public health research is empowering individuals to make good decisions for themselves. For example, sites like WebMD and Healthline add a lot of value for individuals trying to improve their families' health. I don't think the answer is already out there on obesity and many other chronic diseases. If it is, I would appreciate someone directing me to it. :)

Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).

I agree with most of your assessment here. But I think rather than "simple altruism", it would be better to focus on "altruistic intent". Making this substitution doesn't change much; the major differences are just that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically doing non-EA things.

That said, increasing altruistic intent... (read more)

1
LiaH
3y
I agree! With both your points on renaming it "altruistic intent" and the reasons behind it. I thought perhaps improving altruistic intent must be somewhere on the EA radar, but in the very superficial reading I have done to date, I had not found it. I will look more specifically now at broad longtermism. To be honest, I was also hoping the EA community had more skills in persuasion and politics, and was already working on it. Finally, thank you for acknowledging my neophyte attempt at a front page post. It took a lot of internal debate and self-talk to write it ;)

I think this definition of "cause area" is roughly how the EA community uses the term in practice, and explains a lot of why/how it's useful. It helps facilitate good discussion by pointing towards the best people to talk to, since others in my cause area will have common knowledge and interests with myself and each other. On this view, "cause area" is just EA-speak for a subcommunity.

That makes it a bit hard to justify the common EA practice of "cause prioritization" though, since causes aren't really particularly homogeneous with regard to their impact. I think doing "intervention prioritization" would be a lot more useful, even though there's way more interventions than causes.

Is there some kind of up-to-date dashboard or central source for GiveWell's main "cost-per-expected-life" figure? 

  • The Metaculus question mentioned in this post cites values like $890 in 2016, $823 in 2017, $617 in 2018, and $592 in 2019, and I can't find the field they refer to in the resolve condition (?!)
  • This 80K article lists the value as $2300 in 2020.
  • This GiveWell summary sheet from 2016 has a minimum value of $901
  • GiveWell's Top Charities page lists $3000-$5000 to save a life for Malaria Consortium, Against Malaria Foundation, New Incentives
... (read more)
5
WilliamKiely
3y
(1) The Metaculus question adjusts numbers for inflation to 2015 dollars, so they wouldn't appear explicitly in GiveWell's spreadsheets.

(2) Note that there's a distinction between "outcome as good as saving a life" and "cost per life saved". The $890 number is (GiveWell's 2016 estimate of) the former, while the $3,000-$5,000 is the latter. The former includes good done by reducing the probability that people die as well as good done by raising people's incomes, which at some point is equivalently good to averting a death. Pablo's comment here says: "As far as I can tell, the 2020 version of GiveWell's cost-effectiveness analysis no longer employs the category of 'outcome as good as saving a life'." I haven't been keeping up with GiveWell's updates in the last year or two and am merely speculating, but perhaps GiveWell no longer employs the metric "outcome as good as saving a life" (??). Hopefully someone else can answer this with confidence. I assume your citation of GiveWell's Top Charities page listing $3,000-$5,000 to save a life is the closest they have to an up-to-date dashboard or central source, and that they're just choosing to advertise that number (a cost to save a life) rather than a cost to produce an "outcome as good as saving a life".

I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I'm not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions that would conflict with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to pr... (read more)

1
[anonymous]
3y
Another incentive system/component I have seen is that forums allow users not only to upvote but to give other incentives to good answers: Stack Overflow has bounties, and Reddit has coins.
6
Matthew_Barnett
3y
Good ideas. I have a few more:
  • Have a feature that allows people to charge fees to people who submit work. This would potentially compensate the arbitrator who would have to review the work, and would discourage people from submitting bad work in the hopes that they can fool people into awarding them the bounty.
  • Instead of awarding the bounty to whoever gives a summary/investigation, award the bounty to the person who provides the best summary/investigation at the end of some time period. That way, if someone thinks that the current submissions are omitting important information, or are badly written, then they can take the prize for themselves by submitting a better one.
  • Similar to your first suggestion: have a feature that restricts people from submitting answers unless they pass certain basic criteria, e.g. "You aren't eligible unless you are verified to have at least 50 karma on the Effective Altruism Forum or LessWrong." This would ensure that only people from within the community can contribute to certain questions.
  • Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind on the decision they have made.

If it costs $4000 to prevent a death from malaria, malaria deaths happen at age 20 on average, and life expectancy in Africa is 62 years, then the cost per hour of life saved is $0.0109.

If you earn the average US income of $15.35/hour, this means that every marginal hour you work to donate can be expected to save 1,412 hours of life, if you take the very thoroughly researched, very scalable, low-risk baseline option. If you can only donate 10% of your income, then your leverage is reduced to a mere 141.2. Just by virtue of having been born in a deve... (read more)
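Sanity-checking that arithmetic with a quick script - the $4000, age-20, 62-year, and $15.35/hour figures are just the ones quoted above:

```python
HOURS_PER_YEAR = 365 * 24

cost_per_death_averted = 4000   # $ per malaria death prevented
years_of_life_saved = 62 - 20   # life expectancy minus average age at death
hourly_wage = 15.35             # average US hourly income used above

cost_per_hour_of_life = cost_per_death_averted / (years_of_life_saved * HOURS_PER_YEAR)
leverage = hourly_wage / cost_per_hour_of_life

print(f"${cost_per_hour_of_life:.4f} per hour of life saved")  # ~$0.0109
print(f"{leverage:.0f} hours of life saved per hour worked")   # ~1412
print(f"{leverage * 0.10:.0f} if donating 10% of income")      # ~141
```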