Quick takes

Should we be making it so difficult for users with an EA Forum account to make updates to the Forum wikis?

I imagine the platform vision for the EA Forum is to be the "Wikipedia for do-gooders": a useful resource for people working out the best ways to do good. For example, when you google "Effective Altruism AI Safety" in incognito mode, the first result is the Forum topic on AI safety: AI safety - EA Forum (effectivealtruism.org)

I was chatting about this with @Rusheb, who has spent the last year upskilling to transition into AI safety from software development. He had some great ideas for links (e.g. new 80k guides, sites with links for newbies or people making the transition from software engineering). Ideally, someone with this experience and opinions on what would be useful on a landing page for AI safety should be able to suggest them on the wiki page (as you can on Wikipedia, with the caveat that you can be overruled). However, he doesn't have the Forum karma to do that, and the tooltip explaining that was unclear on how to get the karma to do it.

I have the Forum karma to do it, but I don't think I should get the credit: I didn't have the AI safety knowledge; he did. In this scenario, the Forum has lost out on some free improvements to its wiki, plus an engaged user who would feel "bought in". Is there a way to "lend him" my karma? I got it from posting about EA Taskmaster, which shouldn't make me an authority on AI safety.
Radar speed signs currently seem like one of the more cost-effective traffic-calming measures, since they don't require roadwork, but they still, surprisingly, cost thousands of dollars. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.
Starting more free parkrun (https://www.parkrun.com/) 5k runs would be a great way to improve public health and connect people interested in improving health (an idea for local EA groups looking for volunteering opportunities).
A brief thought on 'operations' and how the term is used in EA (a topic I find myself occasionally returning to). It struck me that operations work and non-operations work (within the context of EA) map very well onto the concept of staff and line functions. Line functions are those that directly advance an organization's core work, while staff functions are those that do not. Staff functions play advisory and support roles; they help the line functions. Staff functions are generally things like accounting, finance, public relations/communications, legal, and HR. Line functions are generally things like sales, marketing, production, and distribution. The details will vary depending on the nature of the organization, but I find this a somewhat useful framework for bridging concepts between EA and the broader world.

It also helps illustrate how little information is conveyed if I tell someone I work in operations. Imagine 'translating' that into non-EA verbiage as "I work in a staff function". Unless the person I am talking to already has a very good understanding of how my organization works, they won't know what I actually do.
Chevron deference is a legal doctrine that limits the ability of courts to overrule federal agencies. It's increasingly being challenged, and may be narrowed or even overturned this year: https://www.eenews.net/articles/chevron-doctrine-not-dead-yet/

Overturning it would greatly limit the ability of, for example, a new regulatory agency on AI governance to function effectively. More:

  • This piece argues it would lead to regulatory chaos, and not simply deregulation: https://www.nrdc.org/stories/what-happens-if-supreme-court-ends-chevron-deference
  • This piece describes the Koch network's influence on Clarence Thomas; the Kochs are behind the upcoming challenge to Chevron: https://www.propublica.org/article/clarence-thomas-secretly-attended-koch-brothers-donor-events-scotus

Recent discussion

Our team at Our World in Data just launched a new page on animal welfare! There you can find a brand new Animal Welfare Data Explorer, 22 interactive charts, and 4 new articles:

On Our World in Data, we cover many topics related to reducing human suffering: alleviating poverty, reducing child and maternal mortality, curing diseases, and ending hunger.

But if we aim to reduce total suffering, society’s ability to reduce this in other animals – which feel pain, too – also matters.

This is especially true when we look at the numbers: every year, humans slaughter more than 80

...

Great work – this looks really useful! 

Minor comment: A few years ago, I looked into estimates of the ratio of animal lives lost to a kilogram of animal protein. One of the facts that were really striking to me was how much the ratio has changed over time in the US for many animal protein products (e.g., dairy cows produce significantly more milk now than they used to). Given how much the ratio has changed over time, it seems likely that there is also a fair bit of heterogeneity between countries. For the OWID charts that display "Animal lives lost pe... (read more)

From the looks of it, next week might be rough for people who care about Effective Altruism. As CEA acting CEO Ben West pointed out on the forum:

“Sam Bankman-Fried's trial is scheduled to start October 3, 2023, and Michael Lewis’s book about FTX comes out the same day. My hope and expectation is that neither will be focused on EA …

Nonetheless, I think there’s a decent chance that viewing the Forum, Twitter, or news media could become stressful for some people, and you may want to pre-emptively create a plan for engaging with that in a healthy way.”

I really appreciated that comment since I didn’t know that and I’m glad I had time to mentally prepare. As someone who does outward facing voluntary community building at my workplace and...

This was a fantastic post, Gemma; it really resonated with me, and I honestly think it's one of the best things I've read on here this year :)

Some points that spoke to me, along with reflections from my own experience:

I defer a lot less when making career decisions and thinking about cause prioritisation. I’m still not great at it but I’m much less likely to assume something is true just because someone I respected said it. This is probably a good thing - better late than never

Agreed that this is a good thing. I think that coming to the EA community/movement... (read more)

tl;dr: Contribute to aisafety.info by answering questions about AI Safety from October 6th to October 9th. Participation in hackathons is the basis for applying to future fellowships, and there are prizes to be won by the top entrants. Register here and see the participant guide here.

What is the schedule for the event?

The event will run from Friday October 6th, 7am UTC, to Monday October 9th 2023, 7am UTC. See here for the full schedule. You are invited to participate throughout whichever parts of those days fit your schedule. 

What is the format of the event?

Participants will choose questions to answer for aisafety.info, and work on these answers in google docs. Collaboration on the event will take place on Discord as well as on gather.town. I’ll be online for most of those three days to...

The schedule looks like it's all dated for August. Is that the right link?


Introduction

In March 2023, we launched the Open Philanthropy AI Worldviews Contest. The goal of the contest was to surface novel considerations that could affect our views on the timeline to transformative AI and the level of catastrophic risk that transformative AI systems could pose. We received 135 submissions. Today we are excited to share the winners of the contest.

But first: We continue to be interested in challenges to the worldview that informs our AI-related grantmaking. To that end, we are awarding a separate $75,000 prize to the Forecasting Research Institute (FRI) for their recently published writeup of the 2022 Existential Risk Persuasion Tournament (XPT).[1] This award falls outside the confines of the AI Worldviews Contest, but the recognition is motivated by the same principles that motivated the contest. We believe that the results...

Zvi - FWIW, your refutation of the winning essay on AI, interest rates, and the efficient market hypothesis (EMH) seemed very compelling, and I'm surprised that essay was taken seriously by the judges.

Global capital markets don't even seem to have any idea how to value crypto protocols that might be moderately disruptive to fiat currencies and traditional finance institutions. Some traders think about these assets (or securities, or commodities, or whatever the SEC thinks they are, this week), but most don't pay any attention to them. And even if most trad... (read more)

4
Geoffrey Miller
2h
Jason - thanks for the news about the winning essays. If appropriate, I would appreciate any reactions the judges had to my essay about a moral backlash against the AI industry slowing progress towards AGI. I'm working on refining the argument, so any feedback would be useful (even if only communicated privately, e.g. by email).  
14
Ted Sanders
2h
"Refuted" feels overly strong to me. The essay says that market participants don't think TAGI is coming, and those market participants have strong financial incentive to be correct, which feels unambiguously correct to me. So either TAGI isn't coming soon, or else a lot of people with a lot of money on the line are wrong. They might well be wrong, but their stance is certainly some form of evidence, and evidence in the direction of no TAGI. Certainly the evidence isn't bulletproof, condsidering the recent mispricings of NVIDIA and other semi stocks. In my own essay, I elaborated on the same point using prices set by more-informed insiders: e.g., valuations and hiring by Anthropic/DeepMind/etc., which also seem to imply that TAGI isn't coming soon. If they have a 10% chance of capturing 10% of the value for 10 years of doubling the world economy, that's like $10T. And yet investment expenditures and hiring and valuations are nowhere near that scale. The fact that Google has more people working on ads than TAGI implies that they think TAGI is far off. (Or, more accurately, that marginal investments would not accelerate TAGI timelines or market share.)

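For readers who want to sanity-check the $10T back-of-the-envelope in the comment above, here is one way to reconstruct it. This is a sketch only: the ~$100T/year gross world product figure, and reading "doubling the world economy" as adding roughly that much output per year, are my assumptions, not stated in the comment.

$$\underbrace{10\%}_{\text{chance of TAGI}} \times \underbrace{10\%}_{\text{value captured}} \times \underbrace{10~\text{yr}}_{\text{duration}} \times \underbrace{\$100\,\mathrm{T/yr}}_{\text{output added by doubling}} = \$10\,\mathrm{T}$$

That is, 0.1 × 0.1 × 10 × $100T = $10T, which matches the figure in the comment.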
Introduction

The EA Mexico Residency Fellowship marked a significant milestone in bringing together individuals committed to effective altruism (EA) from around the world, with a focus on Spanish speakers and individuals from underrepresented backgrounds. This post serves as an overview of the program's outcomes and areas for improvement. By sharing our experiences, we aim to provide valuable insights for future organizers of similar initiatives.


Quick Facts

  • 102 participants from 25 countries, comprising 11 fellows and 91 visitors. 36% of the fellows identified themselves as female or non-binary. 32% of the fellows were from LATAM.
  • Generated 714 new connections at an average cost of 371 USD per connection.
  • Weekly cost for hosting a participant was approximately 602 USD.
  • 42.5% of participants rated the fellowship 10/10.
  • 30% of respondents valued the fellowship at over $10,000.
  • 40% of participants would consider relocating to Mexico City for an established
...

FWIW: a focus that has helped me in working out whether I think it's worth continuing our program, and whether it's been a success, is what concrete actions resulted from the connections. Do you have a sense of any resulting actions?

16
Yelnats T.J.
6h
I think it's very misleading to equate "would recommend the program to a friend" (>40% gave 10 out of 10) with >40% being 10-out-of-10 satisfied with the program. These are two different things. When I did the survey, I gave it a high mark because which of my friends wouldn't love to spend a month in CDMX with airfare and rent covered? However, this does not mean that my satisfaction was high. I would have given it a lower score for satisfaction, because the program seemed unnecessarily costly by locating both a co-living space and a co-working space in one of the most upscale districts in CDMX.

Also, when I applied to the program, it was billed as a way to help the Mexico EA community and launch CDMX as an EA hub. The program was majority non-Mexican EAs, and many of us had little interaction with the Mexican EAs. Some of this was on the participants (including myself) for not being more proactive in finding out who the Mexican EAs were, but the program should have been more hands-on in fostering this.

Also, the observations about food seem to miss the biggest critique I heard from participants, which was that about half of the co-working-space catered meals sponsored by the residency had meat, and people didn't hear a rationale as to why, given that EA conferences are all plant-based.

In general, my sense from my time there was that participants felt there was more room for improvement than this post indicates (this is my personal perception, so other participants can chime in if they agree or disagree). I think it would be even more helpful to do an assessment of the EA Mexico community, the EA LatAm community, and the prospects of CDMX becoming a hub 8 months after the fact, to see if the residency materially changed those things.

I think EA residencies/fellowships should be subjected to a cost-effectiveness standard of whether they would be better for community building than just spending all that money on community builders. In the case of the CDMX residency that was 330,000 USD, wh
10
Sandra Malagon
5h
Thank you for bringing up the issue with the survey data. I'll review it to determine if it's a translation or interpretation problem.

Regarding your criticisms of the program, I'm surprised, because much of this information was shared with participants during the program, and some of it is also explained in this same post. The reason for canceling all planned community activities in CDMX was the cancellation of all effective altruism outreach activities in much of the world because of the FTX scandal. This was completely beyond the team's control, and it's explained in the post. Since the outreach activities were canceled, we opened the option to conduct activities within the fellowship to improve them, and we had several volunteers.

However, the program's objective was never solely focused on the Mexican community. It was designed to strengthen the relationship between the Latin American and international communities. This information was provided in the application, and fortunately, this did happen, with valuable connections, especially for individuals from mid- and low-income countries who couldn't easily make these connections elsewhere.

Regarding vegan food, it was also explained during the program that some participants requested other options, and we accommodated them because we didn't want to force a specific diet on people who didn't prefer it. I want to emphasize that there were always vegan options available for those who preferred them.

As for the improvements, I agree that communication from the organizing team could have been better, but I believe it's also the responsibility of participants to read the provided information or ask the organizers for clarification. And yes, it was an expensive program, and we made the cost public to evaluate the cost-effectiveness of similar initiatives in the future and to compare with them. However, comparing it to using the money to pay for full-time jobs doesn't seem fair. At the time we received the grant

Confidence level: I’m a computational physicist working on nanoscale simulations, so I have some understanding of most of the things discussed here, but I am not specifically an expert on the topics covered, so I can’t promise perfect accuracy.

I want to give a huge thanks to Professor Philip Moriarty of the University of Nottingham for answering my questions about the experimental side of mechanosynthesis research.

Introduction:

A lot of people are highly concerned that a malevolent AI or insane human will, in the near future, set out to destroy humanity. If such an entity wanted to be absolutely sure they would succeed, what method would they use? Nuclear war? Pandemics?

According to some in the x-risk community, the answer is this: The AI will invent molecular nanotechnology, and then kill...

Fair point, and I rephrased to be clearer about what I meant to say: that the scenario here is mostly science fiction (it's not as if GPT-5 is turned on, diamondoid bacteria appear out of nowhere, and we all drop dead).

1
Muireall
11h
I see, thanks! Section 8.2, "Gray Dust":
2
titotal
12h
I think you've entirely missed my actual complaint here. There would have been nothing wrong with inventing a new term and using it to describe a wide class of structures. The problem is that the term already existed, and has had an accepted scientific definition since the 1960s (adamantane family materials). If a term already has an accepted jargon definition in a scientific field, using the same term to mean something else is just sloppy and confusing.
2
Raemon
4h
Is your concrete suggestion/ask "get rid of the karma requirement"?

Hmmm, I'm not being as prescriptive as that. Maybe there's a better solution to this specific problem: requiring someone with higher karma to confirm the suggestion, for example (with the original person getting the credit)?

2
Gemma Paterson
15h
See also the Payroll Giving (UK) or GAYE - EA Forum (effectivealtruism.org) page, which is the top Google result for "Effective Altruism Payroll Giving". It made sense for me to update it since I am an accountant and have experience trying to get this done at my workplace. Did I need to make a post about something unrelated to do that?

Summary

Shrimp Welfare Project launched in Sep 2021, via the Charity Entrepreneurship Incubation Program. We aim to reduce the suffering of billions of farmed shrimps. This post summarises our work to date and what we plan to work on going forward, and clarifies areas where we're not focusing our attention. This post was written to coincide with the launch of our new (Shr)Impact page on our website.

We have four broad workstreams: corporate engagement, farmer support, research, and raising issue salience. We believe our key achievements to date are:

...

Got it, thanks for the response!! Really appreciate it :)

On shrimp sizes:

Ah, I missed that you were inferring the number of individuals affected based on production tonnage. It sounds like 14g is your estimate for the size of an individual 'headless peeled shrimp'?

If so: I can't quite tell whether all electrically stunned shrimp end up being counted as "production", or if e.g. some are not in good enough condition to be used in production. If the latter is true (if a big portion of electrically stunned shrimp do not end up in production), could you be underco... (read more)

Introduction

This post seeks to estimate how much we should expect a highly cost-effective charity to spend on reducing existential risk by a certain amount. By setting a threshold for cost-effectiveness, we can be selective about which longtermist charities to recommend to donors.

We appreciate feedback. We would like this post to be the first in a sequence about cost-effectiveness thresholds for giving, and your feedback will help us write better posts.

How many beings does extinction destroy?

This chart gives six estimates for the size of the moral universe that would be lost in an extinction event on Earth this century. There is a truly incredible range in the possible size of the moral universe, and the value you see in the future depends on the moral weights you...

The informality of that equation makes it hard for me to know how to reason about it. For example:

  • T, D, and F seem heavily dependent on one another.
  • I'm just not sure how to parse 'computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets'. What does it mean for a year of sentient life to be spent simulating something? Do you think he means what fraction of experienced years exist in ancestor simulations? I'm still confused by this after reading the last paragraph.
  • I'm not sure what the expression's value represents. Are we suppose
... (read more)
4
MichaelStJules
6h
The possibility of us living in a short-lived simulation isn't enough to count much against longtermism, because it's also possible we could live in a long-lived simulation or a long-lived world, and those possibilities will be much higher stakes, so they still dominate expected value calculations unless we assign them tiny probability together. I think the argument crucially depends on the assumption that simulations will be disproportionately short-lived, and that we have acausal influence over agents in other simulations.

If for each long-running world (simulated or otherwise) with moral agents and moral patients, there are N short-lived worlds with (moral) agents and moral patients, and our actions are correlated with those of agents across worlds, then we get to decide for more agents in short-lived worlds than long-lived ones. Basically, acausal influence will boost the expected value of all interventions, but if moral patients are disproportionately in short-lived simulations with agents whose decisions we're correlated with, relative to long-run simulations with agents whose decisions we're correlated with (or more skewed towards the short-lived than it seems for our own world), acausal influence will disproportionately boost the expected value of neartermist interventions relative to longtermist ones.

Also, ~all of the expected value will be acausal if we fully count the value of acausal influence, based on the evidentialist's wager and similar, given the possibility of very large or even infinite numbers of agents with whom we're correlated.
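A toy formalization of the argument above (a sketch; the notation and the simplifying assumptions are mine, not MichaelStJules's): suppose our choice is correlated with agents in one long-lived world, which longtermist interventions can affect with value $V_{\text{long}}$, and in $N$ short-lived worlds, each of which only neartermist interventions can meaningfully affect, with value $V_{\text{near}}$ apiece. Counting acausal influence,

$$\mathrm{EV}_{\text{longtermist}} \approx V_{\text{long}}, \qquad \mathrm{EV}_{\text{neartermist}} \approx (N+1)\,V_{\text{near}},$$

so both are boosted, but for large $N$ (simulations disproportionately short-lived) the neartermist side is boosted by a much larger factor.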
2
Vasco Grilo
5h
Thanks for clarifying, Michael! Yes, the argument depends on Brian's parameter F not being super small. F is the "fraction of all computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets (both the most intelligent and also less intelligent creatures on those planets)". "A non-solipsish simulation is one in which most or all of the people and animals who seem to exist on Earth are actually being simulated to a non-trivial level of detail."

Brian guessed F = 10^-6, but it feels like it should be much smaller to me. If the value of the future is e.g. 10^30 times the value of this century, it is maybe reasonable to assume that the vast majority of computational sent-years are simulations of the far future, as opposed to simulations of almost-space-colonizing ancestral planets.