This is just a brief reminder that CEARCH is running a small cause exploration contest. The original announcement can be found here; in short:

  • We invite people to suggest potential cause areas, providing a short justification if you find it useful (e.g. briefly covering why the issue is important/tractable/neglected), or none at all if not (e.g. if the idea simply seems novel or interesting to you). All ideas are welcome, and even causes that do not appear intuitively impactful can turn out to be fairly cost-effective upon deeper research.
  • People are also welcome to suggest potential search methodologies for finding causes (e.g. consulting weird philosophy, or looking up death certificates).

Prizes will be awarded in the following way:

  • USD 300 for what the CEARCH team judges to be the most plausibly cost-effective and/or novel cause idea (and that is not already on our public longlist of causes).
  • USD 700 for what the CEARCH team judges to be the most useful and/or novel search methodology idea (and that is not already listed in our public search methodology document).

Entries may be made here. The contest will run for another 2 weeks, until 31st July 2023 (23:59, GMT-12). Multiple entries are allowed (n.b. do make separate individual submissions). The detailed rules, for those who are interested, are available here.

Comments

Any other points worth highlighting from the 10-page-long rules? I find them confusing. Is this normal for legalspeak? The requirements include, and I quote:

  • All information provided in the Entry must be true, accurate, and correct in all respects. [oops, excludes nearly all possible utterances I could say]
  • The Contest is open to any natural person who meets all of the following eligibility requirements:
    • [Resides in a place where the Contest is not prohibited by law]
    • The entrant is at least eighteen (18) years old at the time of entry.
    • The entrant has access to the internet. [What?]

Hi Rime,

We based the requirements on the Open Philanthropy Cause Exploration Prize's official rules - see the full legal terms linked here (https://www.causeexplorationprizes.com/rules-faqs) - and changed them only where necessary. They were then vetted by the lawyer at CEARCH's fiscal sponsor.

I can't speak for the lawyers, but my presumption as a non-expert is that there are good legal reasons for the various clauses. For example, the prohibited-by-law requirement is obvious enough; and I imagine the access-to-the-internet clause is there to ensure no administrative difficulties in contacting winners after the fact and getting the details needed to wire them their money.
