New & upvoted



Quick takes

tlevin · 2d
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable. I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions.

Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who strongly lean on signals of credibility and consensus when quickly evaluating policy options, than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences.

Would be interested in empirical evidence on this question (ideally actual studies from the psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference class tennis type issues).
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in.

I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
Excerpt from the most recent update from the ALERT team:

"Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%)."

Their estimated 10-year risk is a lot higher than I would have anticipated.
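As a quick consistency check on how those horizons fit together (a back-of-the-envelope sketch of my own, not part of the ALERT update): a single constant annual hazard rate fitted to the 10-year figure of 25% roughly reproduces the 5-year figure of 13%, but implies a 1-year chance of roughly 2.8% rather than 0.9%, which matches the team's statement that they expect the risk itself to rise over the decade.

```python
# Back-of-the-envelope (mine, not ALERT's): fit a constant annual hazard
# rate to the 10-year forecast and see what it implies for shorter horizons.
p10 = 0.25                              # ALERT's 10-year PHEIC chance
annual = 1 - (1 - p10) ** (1 / 10)      # ~2.8% per year
p5 = 1 - (1 - annual) ** 5              # ~13.4%, close to ALERT's 13%
p1 = annual                             # ~2.8%, vs ALERT's 0.9%
print(f"annual ~ {annual:.1%}, 5-year ~ {p5:.1%}, 1-year ~ {p1:.1%}")
# The gap at 1 year is the signature of a hazard the forecasters expect
# to grow over time rather than stay constant.
```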
Is EA as a bait and switch a compelling argument for it being bad? I don't really think so.

1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities - is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date?[1]
2. EA is a big movement composed of different groups.[2] Many describe it differently.
3. EA has done so much global health stuff that I am not sure it can be described as a bait and switch, e.g. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit#gid=9418963
4. EA is way more transparent than any comparable movement. Even if it is a bait and switch, it does much more than comparable movements to make clear where the money goes, e.g. https://openbook.fyi/

On the other hand:

1. I do sometimes see people describing EA too favourably or pushing an inaccurate line.

I think that transparency comes with a feature of allowing anyone to come and say "what's going on there?", which can be very beneficial at avoiding error, but it also makes bad criticism too cheap.

Overall I don't find this line that compelling, and the parts that are seem largely in the past, when EA was smaller (and when it perhaps mattered less). Now that EA is big, it's pretty clear that it cares about many different things. Seems fine.

1. ^ @Richard Y Chappell created the analogy.
2. ^ @Sean_o_h argues that here.
Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to?

(Context for after voting: I'm trying to figure out whether more explainers of this would be helpful. I still feel confused about some of its implications, despite having spent significant time trying to understand it.)
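For anyone who voted ✅ above, here is a minimal sketch (my own illustration, not from the poll author) of the permutation definition: a player's Shapley value is their marginal contribution averaged over every order in which the coalition could assemble. The characteristic function v below is a made-up toy game, not any real cost-effectiveness model.

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over all n! join orders (fine for toy sizes only)."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

# Toy example: two funders and a charity jointly produce 100 units of value;
# any proper subset produces nothing. Symmetry gives each exactly 100/3.
v = lambda s: 100.0 if s == {"funder1", "funder2", "charity"} else 0.0
print(shapley_values(["funder1", "funder2", "charity"], v))
```

The toy game also shows the implication people often find confusing: each actor here is fully necessary (naive counterfactual impact of 100 each), yet Shapley credit splits the 100 three ways so that total credit equals total value.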


Recent discussion

Dan H commented on Introducing AI Lab Watch 34m ago
This is a linkpost for https://ailabwatch.org

I'm launching AI Lab Watch. I collected actions for frontier AI labs to improve AI safety, then evaluated some frontier labs accordingly.

It's a collection of information on what labs should do and what labs are doing. It also has some adjacent resources, including a list...


I mean, Google does basic things like using Yubikeys where other places don't even reliably do that. Unclear what a good checklist would look like, but maybe one could be created.

Linch · 6h
The broader question I'm confused about is how much to update on the local/object-level question of whether the labs are doing "kind of reasonable" stuff, vs what their overall incentives and positions in the ecosystem point them towards doing. E.g. your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where an AI scaling lab is just one revenue source among many, or Meta's alleged "scorched-earth" strategy, where they are trying very hard to commoditize their complement, LLMs.

1. ^ E.g. GDM employees have Google/Alphabet stock; most of the variance in their earnings isn't going to come from AI, at least in the short term.
ixex · 10h
I think getting attention would increase the impact of this project a lot and is probably pretty doable if you are able to find an institutional home for it. I agree with Yanni's sentiment that it is probably better to improve on this project than to wait for another one that is more optimized for public attention to come along (though I am curious why you think the latter is better).

We've merged the newsletters from aisafety.training and aisafety.events to create one clean, comprehensive weekly email covering newly announced events and training programs in the AI safety space.

Events and training programs are important for the ecosystem to grow and mature, so we wanted to make it as easy as possible for people to find and sign up to those relevant to them – both online and in-person. It's the reason we built those two websites in the first place, and we think this combined newsletter will help the information on those sites reach even more people. As a side note, we have also created a merged version at aisafety.com/events-and-training.

The newsletter will typically consist of four sections:

  1. Newly announced events
  2. Newly announced training programs
  3. Any date changes for those previously announced
  4. Open calls

We're aiming to bring a wide selection of AI...


Epoch AI is looking for an Operations Associate to help us grow and support our team. This person will manage our recruiting, onboarding, and offboarding processes, and generally support our staff in various other ways, helping our organization thrive. The successful candidate will report to me, Maria de la Lama, and work closely with the rest of our 14-person team and our fiscal sponsor's operations team.

This role is full-time, fully remote, and we are able to hire in many countries. Apply by May 15!

Key Responsibilities

  • Recruiting and helping our team grow
    • The person in this role would manage our hiring rounds, handle outreach for our open roles, communicate with candidates, and help streamline our hiring processes to enable us to grow sustainably and hire the right staff.
    • They would also coordinate occasional visa sponsorship processes and own onboarding & offboarding processes for contractors.
...

Epoch AI is looking for a Researcher on the Economics of AI to investigate the economic basis of AI deployment and automation. The person in this role will work with the rest of our team to build out and analyse our integrated assessment model for AI automation, research the economics of AI training and inference, and build models to help forecast AI’s development and its impact on the economy. 

The primary activities of this position include building new and updating existing models of AI automation, development and growth, and advancing original research into the economics of AI development and deployment. Over the course of a year, we anticipate this person will have produced 2-4 leading reports on the economics of AI. The successful candidate will report to Tamay Besiroglu, associate director of Epoch AI, and work closely with the rest of Epoch’s research team.

Some examples of projects...


My favorite EA blogger tells the story of an early abolitionist. 

The subtitle, "somewhat in favor of guilt", is better than any summary I'd write.

John Woolman would probably be mad at me for writing a post about his life. He never thought his life mattered.


Partially...
tobytrem · 16h
The book Strangers Drowning is great and includes profiles of a bunch of these people. It's ultimately fairly agnostic about whether they are doing the right thing (it emphasises the extreme upper end of altruists), but there are moments of inspiration in most of the stories for people committed to doing good.

I really liked the book.

NickLaing · 11h
Second recommendation of this in a week; definitely going to read it, thanks so much.

Important note: This post is no longer highlighted on the front page. If you are looking to hire via the EA Forum, consider writing a post giving information about the role. If you are an organisation hiring for several roles, consider hosting an AMA. Contact me if you ...


Epoch AI is looking for an Operations Associate to help us grow and support our team. This person will manage our recruiting, onboarding, and offboarding processes, and support our staff in various other ways to help our organisation thrive. It's a great opportunity for generalists looking to contribute to a mission-driven team working on making sure AI goes well.

Salary: $60,000 to $70,000 pre-tax, depending on previous experience. This role is open to full-time candidates only.

Location: Remote. We are a fully remote organization, and we are able to hire in... (read more)

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...


Thanks for doing this AMA, Lewis! 

What's your take on when a promising intervention seems cost-effective enough to be tried? Do you think we should be using something akin to GiveWell's approach, piloting stuff that's estimated to be, e.g., ~10x more cost-effective than further cage-free campaigns, or...? I realise your opinion on this might not correspond to OP's overall stance, but I'd love to hear your thoughts on such existing and upcoming benchmarks and thresholds within EAA. Thank you!

ixex · 12h
If you had to place the different kinds of work within farmed animal welfare (e.g. corporate pressure campaigns, alternative proteins, persuading people to be vegan, etc) into different tiers based on how optimistic you are about them (e.g. 'very optimistic', 'moderately optimistic', etc) what would they look like?
Vasco Grilo · 15h
To what extent does Open Philanthropy use Rethink Priorities' welfare ranges to compare interventions targeting different species? What else does OP use?

Is EA as a bait and switch a compelling argument for it being bad?

I don't really think so.

  1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities - is it a bait and switch when churches don't discuss their most controversial
...

I think there might be something meaningfully different between wearing nice clothes to a first date (or a job interview) and intentionally not mentioning more controversial/divisive topics to newcomers. There is a difference between putting your best foot forward (dressing nicely, grooming, explaining introductory EA principles articulately with a 'pitch' you have practiced) and intentionally avoiding/occluding information.

For a date, I wouldn't feel deceived/tricked if someone dressed nicely. But I would feel deceived if the person intentionally withheld or hid information that they knew I would care about. (It is almost a joke that some people lie about age, weight, height, employment, and similar traits in dating.)

I have to admit that I was a bit turned off (what word is appropriate for a very weak form of disgusted?) when I learned that there has long been an intentional effort to funnel people from global development to longtermism within EA.

An alternate stance on moderation (from @Habryka).

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here, in that it (I guess) responds to individual posts less often, but more moderated...


Yeah seems fair.

But CEA/EVF have -- rightfully -- mostly disowned any idea that they (or any other specific entity) decide what is or isn't a valid or correct way to practice effective altruism.

Apart from choosing who can attend their conferences (which are the de facto place many community members meet), writing their intro to EA, managing the effective altruism website, and offering criticism of specific members' behaviour.

Seems like they are the de facto people who decide what is or isn't a valid way to practice effective altruism - if anything, more so than the LessWrong team (or maybe rationalists are just inherently unmanageable).

I agree on the ironic point though. You might assume that the EA Forum would moderate more than LW, but that doesn't seem to be the case.


EA is very important to me. I’ve been EtG for 5 years and I spend many hours per week consuming EA content. However, I have zero EA friends (I just have some acquaintances).

(I don't live near a major EA hub. I've attended a few meetups but haven't really connected with ...


In the EA Anywhere Slack (which you can join here), there are semi-regular "random matches" with other members. Not every match will be somebody you click with or have chemistry with, but if you are looking to meet new people and you don't live in a major city, it might be helpful.

It is quite challenging to build friendships without the in-person element. All of the friends I have made are people I either met in person, or people I interacted with somewhat online and later met in person.

EDIT: oh, I just remembered that you could joi... (read more)

Answer by James Herbert · 11h
Perhaps by starting a group? You don’t have to live in a major hub to start a group :) Do you mind sharing approximately where in the world you live?
Answer by Bella · 13h
I made a lot of my early friends in EA through my local group. I'm guessing you don't have one since you said you're not in an EA hub (?) but there's always EA Anywhere. You could also organise an online discussion group yourself — a couple of my closest friends today were people I met because I started an online discussion group on animal welfare during the pandemic. We would discuss an article or paper on animal advocacy for like an hour in the evening, and then some people would stay and chat all evening. It was really nice :)