New & upvoted

25 · New? Start here! (Useful links) · Lizka · 1y ago · 2m read
24 · Open Thread: April — June 2023 · Lizka · 2mo ago · 1m read

Posts tagged community

Shortform

34
1d
Reddit user blueshoesrcool [https://old.reddit.com/user/blueshoesrcool] discovered [https://old.reddit.com/r/SneerClub/comments/13t23ti/effective_ventures_misses_reporting_deadline/] that Effective Ventures [https://ev.org/] (the umbrella organization for the Centre for Effective Altruism, 80,000 Hours, GWWC, etc.) has missed its charity reporting deadline by 27 days [https://register-of-charities.charitycommission.gov.uk/charity-search/-/charity-details/5026843/accounts-and-annual-returns]. Given that there's already a regulatory inquiry into Effective Ventures Foundation [https://forum.effectivealtruism.org/posts/C89mZ5T5MTYBu8ZFR/regulatory-inquiry-into-effective-ventures-foundation-uk], maybe someone should look into this.
3
3h
One of my current favorite substacks [https://weibo.substack.com/]: this author just takes a random selection of Weibo posts every day and translates them to English, including providing copies of all the videos. Weibo is sort of like "Chinese Twitter". One of my most consistently read newsletters! H/T to @JS Denain [https://forum.effectivealtruism.org/users/js-denain-1?mention=user] for recommending this newsletter to me a while ago :)
46
4d
1
Protesting at leading AI labs may be significantly more effective than most protests, even ignoring the object-level arguments for the importance of AI safety as a cause area. The impact per protester is likely unusually big, since early protests involve only a handful of people and impact probably scales sublinearly with size. And very early protests are unprecedented and hence more likely (for their size) to attract attention, shape future protests, and have other effects that boost their impact.
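As a toy illustration of the sublinear-scaling point (the square-root functional form and all numbers here are assumptions, not estimates):

```python
# Toy model: total protest impact grows like N**alpha with alpha < 1.
# Per-protester impact is then N**(alpha - 1), which falls as N grows.

def per_protester_impact(n: int, alpha: float = 0.5) -> float:
    """Impact per protester under an assumed N**alpha total-impact curve."""
    return n ** alpha / n

# Under a square-root curve, each of 10 protesters at an early protest
# contributes ~10x what each of 1,000 protesters contributes later on.
print(per_protester_impact(10) / per_protester_impact(1_000))  # ~10.0
```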
8
2d
9
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion, because it would imply they were less good at their job. I don't think they were involved in hiring, but I don't think anyone should hold this view. Here is why:

* As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, the interview process should provide much more information than such a prior. If that's not the case, get a better interview process; don't start being prejudiced!
* People don't mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard-working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences between groups so that people don't have to worry about this. People are very sensitive to this. Let's agree not to defect: we judge on our best guess of your performance, not on appearances.
* I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn't hire poly people because of the increased drama, or men because of the increased likelihood of sexual scandal? No! We already treat some information as irrelevant/inadmissible as a prior in hiring, because we are glad of people's right to be different and to be themselves. To me, race and religion clearly fall in this space. I want people to feel they can be human and still have a chance at a job.
* I wouldn't be surprised if this cashed out to "I hire people like me". In this example, was the individual really hiring on the basis of merit, or did they just find certain religious people hard to deal with? We are not a social club; we are trying to do the most good. We want the best,
7
3d
4
Contra claims like here [https://forum.effectivealtruism.org/posts/Q6PSmdkuMaxosxAS5/strong-evidence-is-common] and here [https://forum.effectivealtruism.org/posts/N6hcw8CxK7D3FCD5v/existential-risk-pessimism-and-the-time-of-perils-4?commentId=yoDrBcyLNnwCqF8hb], I think extraordinary evidence is rare for probabilities that are quasi-rational and mostly unbiased, and it should be quite shocking when you see it. I'd be interested in writing an argument for why you should be somewhat surprised to see what I consider extraordinary evidence[1]. However, I don't think I understand the "for" case (the case that extraordinary evidence is common[2]), so I can't present the best "against" case.

[1] Operationalized e.g. as a 1000x or 10000x odds update on a question that you've considered for at least an hour beforehand, sans a few broad categories of cases.

[2] Fittingly, this argument is internally consistent: from my perspective, "extraordinary evidence is common" is itself an extraordinary claim that demands extraordinary evidence before it's plausible, whereas I presume proponents of that proposition don't believe this.
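For readers unfamiliar with the arithmetic in footnote [1], here is a minimal sketch of what a 1000x or 10000x odds update does to a prior (the 1% prior is an arbitrary number chosen for illustration):

```python
def posterior_prob(prior_prob: float, bayes_factor: float) -> float:
    """Convert a prior probability to odds, apply a Bayes-factor update,
    and convert back to a probability."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# A 1000x odds update moves a 1% prior to ~91%; a 10000x update to ~99%.
print(posterior_prob(0.01, 1_000))   # ~0.910
print(posterior_prob(0.01, 10_000))  # ~0.990
```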

Recent discussion

Question

Do you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?

Context

I stopped eating animals 4 years ago, mostly to decrease the suffering of farmed animals[1]. I am glad I did that based on the information I had at the time, and I published a series of 3 articles in my former university's online journals under the title "Why we should decrease the consumption of animals?". However, I am no longer confident about whether decreasing the consumption of animals is good or bad. It has many effects:

  • Decreasing the number of factory-farmed animals.
    • I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as
...
3
Vasco Grilo
1h
My understanding was that, in the thought experiment you described, "the victim" would have a good/positive life. By definition, having a good/positive life means that the life being lived is better than it not existing, all else equal. So there is a sense in which I would be happy to add positive lives to the universe regardless of what they involve, as long as the expected total hedonistic utility of the rest of the universe did not decrease (or decreased by less than the extra utility coming from the added life). In practice, though, it is highly implausible that the situation you described would be good.
8
Jason
1h
I wonder how the magnitude of these various effects would scale between 1 additional percent of the population going vegan vs. 10 percent vs. 100 percent. Specifically, at least in the medium run, it seems that many of the inputs of producing crops are sunk costs, and that a reduction in the demand for farmed-animal feed would cause agricultural producers to keep growing many crops but try to sell them as biofuel material. At least here in the US, Big Corn is powerful and could probably get Congress to mandate / subsidize more biofuel use up to a certain point. Or they could probably find some other way to maintain a crop market and keep the farmers / ag lobby content, again up to a certain point. So I am not confident a small-to-medium reduction in animal feed demand would ultimately impact crop acreage that much. I don't know about other markets, though!

Great point, Jason!

So I am not confident a small-to-medium reduction in animal feed demand would ultimately impact crop acreage that much.

I agree confidence is not warranted.

I don't know about other markets though!

It looks like there is significant variation across countries (the relevant metric is per capita production, but I did not immediately find it).

Jason
1h · 20

There's also the fact that, as a society and subject to certain exceptions, we've decided that employers shouldn't be using an employee's religious beliefs or lack thereof as an assessment factor in hiring. I think that's a good rule from a rule-utilitarian framework. And we can't allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down.

The exceptions generally revolve around personal/family autonomy or expressive association, which don't seem to be in play in the situation you describe.

OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity Twitter patterns to financial asset prices.

We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. We’ll be hosting tournaments in Boston, London, and San Francisco in the fall — see our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!

 

What happened at the competition?

Attendance

114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but...

4
Harrison Durland
17h
I was also going to recommend this, but I’ll just add an implementation idea (which IDK if I fully endorse): you could try to recruit a few superforecasters or subject-matter experts (SMEs) in a given field to provide forecasts on the questions at the same time, then have a reciprocal scoring element (i.e., who came closest to the superforecasters’/SMEs’ forecasts). This is basically what was done in the 2022 Existential Risk Persuasion/Forecasting Tournament (XPT), which Philip Tetlock ran (and I participated in). IDK when the study results for that tournament will be out, and maybe it won’t recommend reciprocal scoring, but it definitely seems worth considering.

A separate idea (which again IDK if I fully endorse but was also in the XPT): have people provide dense rationales for a few big forecasts, then rate them on the merits of their rationales. (Yes, this involves subjectivity, but it’s not very different from speech and debate tournaments; the bigger problem could be the time required to review the rationales, but even this definitely seems manageable, especially if you provide a clear rubric, as is common in some competitive speech leagues.)
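A minimal sketch of what the reciprocal-scoring idea could look like in practice (my illustration, not the XPT's actual scoring rule; all numbers are made up):

```python
from statistics import median

def reciprocal_score(team: list[float], experts: list[list[float]]) -> float:
    """Mean absolute distance between a team's probabilities and the
    expert (superforecaster/SME) median on each question; lower is better."""
    expert_medians = [median(forecasts) for forecasts in experts]
    return sum(abs(t - m) for t, m in zip(team, expert_medians)) / len(team)

# Two questions; three experts each give a probability per question.
experts = [[0.60, 0.70, 0.65], [0.10, 0.20, 0.15]]
print(reciprocal_score([0.62, 0.18], experts))  # 0.03 -- close to the experts
print(reciprocal_score([0.90, 0.50], experts))  # 0.30 -- far from them
```

One appeal of a distance-based rule like this is that it can be computed the same day, without waiting for the questions to resolve.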
Jason
1h · 20

A trial of #2 would have some information value -- you could discern how strong the correlation was between the rationale scores and final standings to decide if rationales were a good way to produce a same-week result.

Maybe you could also use idea #1 with only the top-scoring teams making it to the rationale round, to cut down on time spent scoring rationales?


Dear EA Forum readers,

The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!

As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability.  We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.  

Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.  

The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.  

Our lawsuit attacks the notion that Big Ag is above the law.  We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty. 

Case Farms...

alene
2h · 10

Thank you Fai!!!!

1
alene
2h
Thank you so much Constance!


[This post was written in a purely personal capacity, etc.]

I[1] recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.

Unfortunately, it seems to have been mostly the latter. Where my mind has changed, the changes have been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.

I set out below what I think are some relevant questions that Russia's invasion of Ukraine could change your mind about, along with some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.

High-level questions

Likelihood of great power conflict

It seems like the Metaculus forecasting community is now...

Conditional on Russia losing, is the world a safer place?
I think maybe a bit, in a general “don’t reward conquest” sort of way


I would like to add another reason in favor: Russia broke the Budapest Memorandum (https://en.wikipedia.org/wiki/Budapest_Memorandum), in which it and other states provided security guarantees to post-Soviet states (including Ukraine) in exchange for those states handing over their nuclear weapons. If Russia wins this war, it clearly sends the message that one should never give up nukes, since doing so increases the risk of invasion. I mean, it has already sent these signals since th... (read more)

1
Jamie Elsey
8h
I'm not sure why this is deeply confusing. I don't think we should be assessing whether authoritarian regimes are bad based on measures of life satisfaction, and if that is what one wants to do, then certainly not via a 1v1 comparison of just two countries.

Is the claim that they are not that different on this metric even true? Where is the source for this, and how many alternative sources or similar metrics are there? If true, are all the things that feed into people's responses to a life-satisfaction survey the same in these different places (how confident are respondents that they can give their true opinions, and how low have their aspirations or capacity to contemplate a flourishing life become)? And are the measures representative of the actual population experience within those countries (what about the satisfaction of people in encampments in China that help sustain the regime and quash dissent)?

Even granting that the ratings really reflect the same processes in each country and are representative, Taiwan lives under threat of occupation and invasion, and there are many other differences between the two countries. The case is then just a confounded comparison of 1 country vs. 1 other, which is not an especially good test of whether the one variable chosen to define those countries makes a difference.
2
Paul Currion
9h
Has this line of thinking led you to consider whether it's a good use of anybody's time to pay attention to geopolitical events unless they are directly connected to their life in some way, through family or work, or (at a stretch) through participation in a forecasting tournament? A minimal level of engagement is warranted simply because we want to be citizens of the world, but diminishing returns appear to set in incredibly quickly. It seems to be, as you imply, an inefficient use of time that can actually distract us from more important activities.

The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.)[1] We're presumably missing out on a lot of talent.

I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's...

I would also strongly recommend having a version of the fellowship that aligns with US university schedules, unlike the current Summer fellowship!

2
Harrison Durland
4h
Given the (accusations of) conflicts of interest in OpenAI’s calls for regulation of AI, I would be quite averse to relying on OpenAI for funding for AI governance.
4
Harrison Durland
4h
This situation was somewhat predictable and avoidable, in my view. I’ve lamented the early-career problem in the past [https://forum.effectivealtruism.org/posts/HZacQkvLLeLKT3a6j/how-might-a-herd-of-interns-help-with-ai-or-biosecurity] but did not get many ideas for how to solve it. My impression has been that many mid-career people in relevant organizations put really high premiums on “mentorship,” to the point that they are dismissive of proposals that don’t provide such mentorship. There are merits to emphasizing mentorship, but the fact is that there are major bottlenecks on mentorship capacity, and this does little good for people who are struggling to get good internships. The result for me personally was at least ~4 internships that were not very relevant to AI governance, were not paid, and did not provide substantial career benefits (e.g., mentorship).

In summary, people should not let the perfect be the enemy of the good: I would have gladly taken an internship working on AI governance topics, even if I had almost no mentorship (and even if I had little or no compensation). I also think there are ways of substituting this with peer feedback/engagement.

I have multiple ideas for AI governance projects that are not so mentorship-dependent, including one pilot idea that, if it worked, could scale to >15 interns and entry-level researchers [https://forum.effectivealtruism.org/posts/9RCFq976d9YXBbZyq/research-reality-graphing-to-support-ai-policy-and-more] with <1 FTE experienced researcher in oversight. But I recognize that the ideas may not all be great (or at least their merits are not very legible). Unfortunately, we don’t seem to have a great ecosystem for sharing and discussing project ideas [https://forum.effectivealtruism.org/posts/Zcy8EDfQ9TXFGL75m/platform-for-project-spitballing-e-g-for-ai-field-building], at least if you aren’t well connected with people to provide feedback through your job or through HAIST/MAIA or other university groups.

1/ Introduction

We recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager; the job ad for the latter is coming soon, so for now reach out to sally@founderspledge.com).
 

[xkcd comic]

Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond. 

As discussed in the next section, if you are not interested in climate but are interested in EA grantmaking research in general, we think this might still be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we...

4
Vasco Grilo
12h
Great work! I really appreciate how FP Climate's work is relevant to the broader project of effective altruism and decision-making under uncertainty. Heuristics like FP Climate's impact multipliers can be modelled, and I am glad you are working towards that. I wish Open Philanthropy moved towards your approach, at least in the context of global health and wellbeing, where there is less uncertainty. Open Philanthropy has a much larger team and moves much more money than FP, so I am surprised by the low level of transparency and the lack of rigorous comparative approaches in its grantmaking.
8
jackva
7h
Thanks, Vasco! I think it is hard to judge what exactly OP is doing, given that they do not publish everything and probably (and understandably!) also have a significant backlog. But, directionally, I strongly agree that the lack of comparative methodology in EA is a big problem, and I am currently writing a post on this.

To a first approximation, I perceive the situation as follows:

Top-level / first encountering a cause:
* ITN analysis: inherently comparative and useful when approaching an issue from ignorance (the impact-differentiating features of ITN are very general and make sense as approximations when not knowing much), but often applied in a way below its potential (e.g. non-comparable data, no clear formalization of tractability).

Level above specific CEAs:
* In GHD, stuff like common discounts for generalizability.
* In longtermism, maybe some templates or common criteria.

A large hole in the middle: it is my impression that there is a fairly large space of largely unexplored "mid-level" methodology and comparative concepts that could much improve relative impact estimates across several domains. These could be within a cause (which is what we are trying to do for climate), but also portable and/or across causes, e.g.:
* Breaking down "neglectedness" into constituent elements such as low-hanging fruit already picked, probability of funding additionality, and probability of activity additionality, with different data (or data aggregations) available for each, allowing for more precise estimates relatively cheaply and improving on first-cut neglectedness estimates.
* What is the multiplier from advocacy, and how does this depend on the ratio of philanthropic to societal effort for a problem, the kind of problem (how technical? etc.), and location?
* How do we measure organizational strength, and how important is it compared to other factors?
* What returns should we expect from engaging in different regi
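To make the "mid-level methodology" idea concrete, here is a minimal sketch of how such multipliers might combine into a relative impact estimate; the factor names echo the list above, and all of the numbers are made-up assumptions, not FP's:

```python
# Illustrative only: combine hedged multipliers into a relative
# cost-effectiveness estimate for an advocacy opportunity.
baseline = 1.0  # assumed benchmark, e.g. tons CO2e averted per dollar

multipliers = {
    "funding_additionality": 0.7,   # chance the money wouldn't have come anyway
    "activity_additionality": 0.8,  # chance the work wouldn't have happened anyway
    "advocacy_leverage": 10.0,      # assumed policy effect per unit of effort
}

estimate = baseline
for factor, value in multipliers.items():
    estimate *= value

print(f"~{estimate:.1f}x the baseline")  # ~5.6x
```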

Thanks for sharing your thoughts! They seem right to me. A typical argument against "overall-comparative-methodology-and-estimate building" is that the opportunity cost is high, but it seems worth it on the margin given the large sums of money being granted. However, grantmakers have disagreed with this at least implicitly, in the sense that the estimation infrastructure is apparently not super developed.

Here’s a version of the database that you can filter and sort however you wish, and here’s a version you can add comments to.

Update: I've been slow to properly update the database, but am collecting additional orgs in this thread for now.

Key points

  • I’m addicted to creating collections and have struck once more.

  • The titular database includes >130 organizations that are relevant to people working on longtermism- or existential-risk-related issues, along with info on:

    • The extent to which they’re focused on longtermism/x-risks
    • How involved in the EA community they are
    • Whether they’re still active
    • Whether they aim to make/influence funding, policy, and/or career decisions
    • Whether they produce research
    • What causes/topics they focus on
    • What countries they’re based in
    • How much money they influence per year and how many employees they have[1]
  • I aimed for (but likely missed) comprehensive coverage

...

Confido Institute

Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.

We design tools, workshops and materials to support this mission. This is the first in

... (read more)
2
MichaelA
5h
Epistea [https://forum.effectivealtruism.org/posts/FrshKTu34cFGGsyka/announcing-a-new-organization-epistea]