Quick takes

EA organizations frequently ask people to run criticism by them ahead of time. I've been wary of the push for this norm. My big concerns were that orgs wouldn't comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data. I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn't quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14: 10 had replied by the start of the next day, and more than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive. It's hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn't worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I'd realized ahead of time that I was going to cut an example. Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them); of those, only 80k's mention was negative. I didn't keep as close track of changes, but at a minimum replies led to 2 examples being removed entirely, 2 clarifications, and some additional information that made the post better.

So overall I'm very glad I solicited comments, and found the process easier than expected.
The Animal Welfare Department at Rethink Priorities is recruiting volunteer researchers to support a high-impact project! We're conducting a review of interventions to reduce meat consumption, and we're seeking help checking whether academic studies meet our eligibility criteria. This will involve reviewing the full text of studies, especially methodology sections. We're interested in volunteers who have some experience reading empirical academic literature, especially postgraduates. The role is an unpaid volunteer opportunity. We expect this to be a ten-week project requiring approximately five hours per week, but your time commitment can be flexible depending on your availability. This is an exciting opportunity for graduate students and early-career researchers to gain research experience, learn about an interesting topic, and directly participate in an impactful project. The Animal Welfare Department will provide support and, if desired, letters of experience for volunteers.

If you are interested in volunteering with us, contact Ben Stevenson at bstevenson@rethinkpriorities.org. Please share either your CV or a short statement (~4 sentences) about your experience engaging with empirical academic literature. Candidates will be invited to complete a skills assessment. We are accepting applications on a rolling basis, and will update this listing when we are no longer accepting applications. Please reach out to Ben if you have any questions. If you know anybody who might be interested, please forward this opportunity to them!
I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone. I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.
Emrik
If evolutionary biology metaphors for social epistemology are your cup of tea, you may find this discussion I had with ChatGPT interesting. 🍵 (Also, sorry for not optimizing this; but I rarely find time to write anything publishable, so I thought just sharing as-is was better than not sharing at all. I recommend the footnotes btw!)

Glossary/metaphors

* Howea palm trees ↦ EA community
* Wind-pollination ↦ "panmictic communication"
* Sympatric speciation ↦ horizontal segmentation
* Ecological niches ↦ "epistemic niches"
* Inbreeding depression ↦ echo chambers
* Outbreeding depression (and Baker's law) ↦ "Zollman-like effects"
  * At least sorta. There's a host of mechanisms mostly sharing the same domain and effects with the more precisely-defined Zollman effect, and I'm saying "Zollman-like" to refer to the group of them. Probably I should find a better word.

Background

Once upon a time, the common ancestor of the palm trees Howea forsteriana and Howea belmoreana on Howe Island would pollinate each other more or less uniformly during each flowering cycle. This was "panmictic" because everybody was equally likely to mix with anybody else. Then, on a beautiful sunny morning smack in the middle of New Zealand and Australia, the counterfactual descendants had had enough. Due to varying soil profiles on the island, they all had to compromise between fitness for each soil type—or purely specialize in one and accept the loss of all seeds which landed on the wrong soil.

"This seems inefficient," one of them observed. A few of them nodded in agreement and conspired to gradually desynchronize their flowering intervals from their conspecifics, so that they would primarily pollinate each other rather than having to uniformly mix with everybody. They had created a cline. And a cline, once established, permits the gene pools of the assortatively-pollinating palms to further specialize toward different mesa-niches within their original meta-niche. Given that a crossbreed between palms adapted for different soil types is going to be less adaptive for either niche,[1] you have a positive feedback cycle where they increasingly desynchronize (to minimize crossbreeding) and increasingly specialize. Solve for the general equilibrium and you get sympatric speciation.[2]

Notice that their freedom to specialize toward their respective mesa-niches is proportional to their reproductive isolation (or inversely proportional to the gene flow between them). The more panmictic they are, the more selection-pressure there is on them to retain 1) genetic performance across the population-weighted distribution of all the mesa-niches in the environment, and 2) cross-compatibility with the entire population (since you can't choose your mates if you're a wind-pollinating palm tree).[3] (A toy numerical sketch of this point follows at the end of this quick take.)

From evo bio to socioepistemology

> I love this as a metaphor for social epistemology, and the potential detrimental effects of "panmictic communication". Sorta related to the Zollman effect, but more general. If you have an epistemic community that is trying to grow knowledge about a range of different "epistemic niches", then widespread pollination (communication) is obviously good because it protects against e.g. inbreeding depression of local subgroups (e.g. echo chambers, groupthink, etc.), and because researchers can coordinate to avoid redundant work, and because ideas tend to inspire other ideas; but it can also be detrimental because researchers who try to keep up with the ideas and technical jargon being developed across the community (especially related to everything that becomes a "hot topic") will have less time and relative curiosity to specialize in their focus area ("outbreeding depression").
>
> A particularly good example of this is the effective altruism community. Given that they aspire to prioritize between all the world's problems, and due to the very high-dimensional search space generalized altruism implies, and due to how tight-knit the community's discussion fora are (the EA Forum, LessWrong, EAGs, etc.), they tend to learn an extremely wide range of topics. I think this is awesome, and usually produces better results than narrow academic fields, but nonetheless there's a tradeoff here.
>
> The rather untargeted gene-flow implied by wind-pollination is a good match to the mostly-online meme-flow of the EA community. You might think that EAs will adequately speciate and evolve toward subniches due to the intractability of keeping up with everything, and indeed there are many subcommunities that branch into different focus areas. But if you take cognitive biases into account, and the constant desire people have to be *relevant* to the largest audience they can find (preferential attachment wrt hot topics), plus fear-of-missing-out, and fear of being "caught unaware" of some newly-developed jargon (causing people to spend time learning everything that risks being mentioned in live conversations[4]), it's unlikely that they couldn't benefit from smarter and more fractal ways to specialize their niches. Part of that may involve more "horizontally segmented" communication.

Tagging @Holly_Elmore because evobio metaphors are definitely your cup of tea, and a lot of this is inspired by stuff I first learned from you. Thanks! : )

1. ^ Think of it like... if you're programming something based on the assumption that it will run on Linux xor Windows, it's gonna be much easier to reach a given level of quality compared to if you require it to be cross-compatible.
2. ^ Sympatric speciation is rare because the pressure to be compatible with your conspecifics is usually quite high (Allee effects ↦ network effects). But it is still possible once selection-pressures from "disruptive selection" exceed the "heritage threshold" relative to each mesa-niche.[5]
3. ^ This homogenization of evolutionary selection-pressures is akin to markets converging to an equilibrium price. It too depends on panmixia of customers and sellers for a given product. If customers are able to buy from anybody anywhere, differential pricing (i.e. trying to sell your product at above or below equilibrium price for a subgroup of customers) becomes impossible.
4. ^ This is also known (by me and at least one other person...) as the "jabber loop":
   > This highlights the utter absurdity of being afraid of having our ignorance exposed, and of going 'round judging each other for what we don't know. If we all worry overmuch about what we don't know, we'll all get stuck reading and talking about stuff in the Jabber loop. The more of our collective time we give to the Jabber loop, the more unusual it will be to be ignorant of what's in there, which means the social punishments for Jabber-ignorance will get even harsher.
5. ^ To take this up a notch: sympatric speciation occurs when a cline in the population extends across a separatrix (red) in the dynamic landscape, and the attractors (blue) on each side overpower the cohering forces from Allee effects (orange). This is the doodle I drew on a post-it note to illustrate that pattern in a different context: I dub him the mascot of bullshit-math. Isn't he pretty?
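As a minimal illustration of the gene-flow point above, here is a toy sketch (not from the original discussion; all parameters are made up): two demes are each pulled toward their own optimum while a migration rate m mixes them, and the equilibrium divergence between them shrinks as gene flow increases.

```python
# Toy migration-selection model: two demes (subpopulations) with optima
# +1 and -1. Each generation, selection pulls each deme's mean trait
# toward its local optimum with strength s, then a fraction m of each
# deme is replaced by migrants from the other. Illustrative numbers only.

def equilibrium_divergence(m, s=0.2, generations=2000):
    z1, z2 = 0.0, 0.0        # mean trait values in deme 1 and deme 2
    opt1, opt2 = 1.0, -1.0   # local optima (the two "soil types")
    for _ in range(generations):
        # selection toward the local optimum
        z1_sel = z1 + s * (opt1 - z1)
        z2_sel = z2 + s * (opt2 - z2)
        # migration ("gene flow" / panmictic mixing), applied simultaneously
        z1 = (1 - m) * z1_sel + m * z2_sel
        z2 = (1 - m) * z2_sel + m * z1_sel
    return abs(z1 - z2)

for m in [0.0, 0.01, 0.05, 0.2, 0.5]:
    print(f"gene flow m = {m:.2f} -> trait divergence {equilibrium_divergence(m):.3f}")
```

With m = 0 the demes fully specialize (divergence 2.0); with complete mixing (m = 0.5) they are forced onto the same compromise trait (divergence 0), mirroring the claim that freedom to specialize is inversely proportional to gene flow.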


Recent discussion

Listen to the full podcast

Helen Toner: "For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases, outright lying to the board.

At this point, everyone...


He said this during that initial Senate hearing, iirc, and I think he was saying it frequently around then (I recall a few other instances but don't remember where).

Ulrik Horn
It would be hard to imagine he has no interest; I would say even a simple bonus scheme, whether stock, options, cash, etc., would count as "interest". If the company makes money, then so does he.
Ulrik Horn
I think what would be more helpful for me is knowing what else was discussed in board meetings. Even if ChatGPT wasn't expected to be a big deal, if the board was (hyperbolic example) discussing whether to have a coffee machine at the office, then not mentioning ChatGPT would be striking. On the other hand, if they only met once a year and only discussed e.g. whether they are financially viable, then perhaps not mentioning ChatGPT makes more sense. And maybe even this is not enough: it would also be concerning if some board members wanted more info but did not get it. If a board member requested more info on product development and ChatGPT was not mentioned, that would also look bad. I think the context and the particulars of this board are important.

Epistemic status: somewhat confident; I may have made coding mistakes. The R code is here if you feel like checking.

Introduction: 

In their 2022 article, Matthew Killingsworth and Daniel Kahneman looked to reconcile the results from two of their papers. Kahneman (2010) ...


Seconding. This is one of my favourite kinds of post on the EA forum (and well done on keeping it relatively short!)

One quibble: why present the R code in a Google doc rather than in a GitHub repo?

Ozzie Gooen
Some quick points:

1. Thanks for doing this replication! I find the data pretty interesting.
2. I think my main finding here is that "giving money to those who are the least happy, conditional on being poor" seems much more effective than giving to those who are happier. That is, the 15th-percentile slopes are far higher than the other slopes below 50k, and this seems more likely to be statistically significant than the other outcomes. I'm really curious why this is; the effect seems much larger than I would have imagined. Maybe something is going on like, "These very unhappy poor people had expectations of having more money, so they are both particularly miserable and money is particularly useful to them." In theory there could be policy proposals here, but they do seem tricky. A naive one would be "give money first to the poorest and saddest," but I'm sure you can do better.
3. From quickly looking at these graphs, I'm skeptical of what you can really take away above the £50k mark. There seems to be a lot of randomness here, and the 50k threshold seems arbitrary. I'd also flag that it seems weird to me to extend the red lines so far to the left, when there are so few data points below ~3k. I'm very paranoid about outliers here.
4. Instead of a simple linear interpolation split into two sections, I'd be excited about other statistical approaches. Maybe this could be modeled as a Gaussian process, or estimated using Bayesian techniques. (I realize this could be much more work, though.)
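On point 4, one cheap intermediate option between the two-section linear fit and a full Gaussian process or Bayesian model is a single quantile regression with a hinge (knot) at the £50k threshold, so both slopes and their difference come out of one model. A minimal sketch with synthetic placeholder data and made-up column names (not the post's actual variables or dataset):

```python
# Sketch of the "piecewise slopes by happiness percentile" idea using a
# single quantile regression with a knot at 50k, instead of fitting two
# separate linear sections. Data and column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
income = rng.lognormal(mean=10, sigma=0.7, size=n)          # placeholder incomes
log_inc = np.log(income)
happiness = 3 + 0.4 * log_inc + rng.normal(0, 1.5, size=n)  # placeholder outcome

df = pd.DataFrame({
    "happiness": happiness,
    "log_inc": log_inc,
    # hinge term: zero below 50k, (log income - log 50k) above it
    "hinge": np.maximum(log_inc - np.log(50_000), 0.0),
})

# Slope below 50k is the log_inc coefficient; slope above 50k is
# log_inc + hinge. Fit at several happiness percentiles, as in the post.
for q in [0.15, 0.35, 0.50, 0.70, 0.85]:
    res = smf.quantreg("happiness ~ log_inc + hinge", df).fit(q=q)
    below = res.params["log_inc"]
    above = below + res.params["hinge"]
    print(f"q={q:.2f}: slope below 50k = {below:.3f}, above 50k = {above:.3f}")
```

This keeps the piecewise-linear structure of the original analysis but estimates both segments jointly, which makes the above-50k slope and its uncertainty easier to read off; a spline or Gaussian process would relax the single-knot assumption further.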
SummaryBot
Executive summary: Replicating Killingsworth & Kahneman's 2022 findings using UK health survey data, the author finds similar results suggesting no absolute "happiness ceiling" for income, though extra income has diminishing returns for happiness in an unhappy minority.

Key points:

1. The author replicates Killingsworth & Kahneman's (KK) 2022 findings using 2012 UK health survey data with a different well-being measure.
2. For the median and majority of the population (50th, 70th, 85th percentiles), happiness continues to increase with log income above a high threshold (£50,000 in 2012).
3. For an unhappy minority (5th to 35th percentiles), extra income has no association with happiness above the £50,000 threshold.
4. The replication is notable given differences in the dataset and provides a small update in favor of KK's findings against a "happiness ceiling".
5. The findings suggest continued economic growth may increase happiness for most but also increase happiness inequality.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

TLDR:  To guide our research on new interventions for the animal advocacy movement, we need a framework that allows us to quantify the subjective experiences of animals. For example, if we were comparing two campaigns—say, a) phasing out fast-growing breeds or...


Thanks for your feedback and thoughts :)

Re your questions 1 and 2 - Yep I definitely agree that there are better approaches to moral uncertainty. I indeed chose mine for illustrative purposes, as you point out. Moreover, in our application of this framework, the end-line result of "value weighted by framework" just isn't that important to our decision-making - it's a small piece of information within a framework that we don't weight that strongly. For me, the useful information that arises from the moral uncertainty step is seeing whether particular interv... (read more)
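For readers who haven't seen the original post, the "value weighted by framework" step is essentially a credence-weighted sum across moral frameworks. A minimal sketch of that arithmetic (the framework names, credences, and values below are purely illustrative, not the ones used in the report):

```python
# Illustrative credence-weighted aggregation across moral frameworks.
# All names and numbers below are made up for the sake of the example.

credences = {"hedonism": 0.5, "preference view": 0.3, "objective list": 0.2}

# Estimated value of one intervention under each framework (arbitrary units).
values = {"hedonism": 10.0, "preference view": 4.0, "objective list": 6.0}

weighted_value = sum(credences[f] * values[f] for f in credences)
print(weighted_value)  # 0.5*10 + 0.3*4 + 0.2*6 = 7.4
```

As the comment notes, this end-line number is only a small input into the overall decision-making.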

Ren Ryba
  I have no plans to do so. Generally speaking, I have a preference for users to produce their own spreadsheets, so they can be much more deliberate and conscious about their choices, model details, values for inputs/parameters, etc. This is especially the case for a framework like this, which is naturally speculative and rudimentary.
Ren Ryba
Excellent, thanks. I'd advise any readers to throw my pleasure categories in the dustbin and use those instead. (It's a case in point for my caution that "In fact, this article was written in early 2023 and posted in early 2024, so there might be important, recent developments that are not included in this article."!)

Crossposted from AI Lab Watch. Subscribe on Substack.

Introduction

Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to...


I claim that public information is very consistent with a situation where the investors hold an axe over the Trust; maybe the Trust will cause the Board to be slightly better, or the investors will abrogate the Trust, or the Trustees will loudly resign at some point; regardless, the Trust is very subordinate to the investors and won't be able to do much.

And if so, I think it's reasonable to describe the Trust as "maybe powerless."

Ebenezer Dukakis
It seems valuable to differentiate between "ineffective by design" and "ineffective in practice". Which do you think is more the cause for the trend you're observing?

OP is concerned that Anthropic's governance might fall into the "ineffective by design" category. Like, it's predictable in advance that something could maybe go wrong here. If yours is more of an "ineffective in practice" argument -- that seems especially concerning, if the "ineffective in practice" point applies even when the governance appeared to be effective by design, ex ante.

In any case, I'd really like to see dedicated efforts to argue for ideal AI governance structures and documents. It feels like EA has overweighted the policy side of AI governance and underweighted the organizational founding documents side. Right now we're in the peanut gallery, criticizing how things are going at OpenAI and now Anthropic, without offering much in the way of specific alternatives. Events at OpenAI have shown that this issue deserves a lot more attention, in my opinion. Some ideas:

* A big cash prize for the best AI lab governance structure proposals. (In practice you'd probably want to pick and choose the best ideas across multiple proposals.)
* Subsidizing red-teaming of novel proposals and testing them out in lower-stakes situations, for non-AI organizations. (All else equal, it seems better for AGI to be developed using an institutional template that's battle-tested.) We could dogfood proposals by using them for non-AI EA startups or EA organizations focused on e.g. community-building.
* Governance lit reviews to gather and summarize info, both empirical info and also theoretical models from e.g. economics. Cross-national comparisons might be especially fruitful if we don't think the right structures are battle-tested in a US legal context.

At this point, I'm embarrassed that if someone asked me how to fix OpenAI's governance docs, I wouldn't really
Habryka
I think people should definitely consider and assign non-trivial probability to the LTBT being powerless (probably >10%), which feels like the primary point of the post. Do you disagree with that assessment of probabilities? (If so, I would probably be open to bets.)

We are excited to announce the EA Nigeria Summit, which will take place on September 6th and 7th, 2024, in Abuja, Nigeria. 

The two-night event aims to bring together individuals thinking carefully about some of the world's biggest problems, taking impactful action ...


Would definitely love to be here

Jide
How did I miss this??
Adebayo Mubarak
Yet to happen... The timeline is September and the application is still open. 

In this post, I'll discuss my current understanding of SB-1047, what I think should change about the bill, and what I think about the bill overall (with and without my suggested changes).

Overall, SB-1047 seems pretty good and reasonable. However, I think my suggested changes could substantially improve the bill and there are some key unknowns about how implementation of the bill will go in practice.

The opinions expressed in this post are my own and do not express the views or opinions of my employer.

[This post is the product of about 4 hours of work of reading the bill, writing this post, and editing it. So, I might be missing some stuff.]

[Thanks to various people for commenting.]

My current understanding

(My understanding is based on a combination of reading the bill, reading various summaries of the bill, and getting pushback from commenters.)

The bill places requirements on "covered...


This podcast from the Flourishing Minds fund gives a nice overview of effective mental health organizations. Hopefully, they'll keep posting!


Here’s another one worth a listen: Changing the Game: StrongMinds' Mission to Improve Mental Health Globally on The Giving What We Can Podcast

https://overcast.fm/+y62ecrUYI

I am writing this post in response to a question that was raised by Nick a few days ago,

1) as to whether the white sorghum and cassava that our project aims to process will be used in making alcohol, 2) whether the increase in production of white sorghum and cassava...

roddy
Thanks, that's very useful. I've been looking into this some more, and I have some more questions:

1. About the comparison with sugarcane: from the section of your page (https://www.ugandafarm.org/size-of-long-island/) and the links, I see that sugarcane prices did fall to Ugx 1,000-3,000 per ton at one point in 2021, but the usual range seems to be more like Ugx 60,000-200,000 per ton. With a yield of around 40 tons/acre, even using the lowest price of Ugx 60,000/ton would give an annual revenue of Ugx 2.4m/acre. For your estimates for sorghum of Ugx 1,300/kg and 700 kg/acre with 2 harvests/year, that would only be Ugx 1.82m/acre. I appreciate that avoiding the risks of a sugarcane monoculture is valuable separately from increasing the average income, but I'm still surprised that the average annual income with sorghum could be lower than with sugarcane. Is there something I'm missing about the price of sugarcane or the costs of growing it?
2. About the benefits of the facility versus the current setup of growing sorghum without a plant: do you have an estimate for how much sorghum Uganda Breweries and other companies would be willing to buy in the current setup? That is, how many farmers/acres could you support without the plant?
3. A separate question: could you give me an idea of what proportion of farmers in Busoga grow sugarcane versus being subsistence farmers?

Thanks so much too, Roddy. Here are the answers to these questions:

1). Sugarcane takes two years to mature, and is harvested once in a 24-month cycle. During the same period, a sorghum farmer harvests four times (twice every year).

The other thing with sugarcane is that it is almost exclusively bought by only one category of buyers (sugarcane millers). As a result, whenever it drops to the levels of Ugx 60,000 per ton, it isn't just that farmers suffer a low price. It is that they also often have no buyers (due to oversupply), which is why they then resort ... (read more)
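A quick back-of-the-envelope check of the numbers in this thread (the key assumption, inferred from the reply rather than stated outright, is that the ~40 tons/acre sugarcane yield is per 24-month harvest cycle, so annual revenue is half the per-cycle figure):

```python
# Back-of-the-envelope comparison of annual revenue per acre, using the
# figures quoted in the thread. Key assumption (inferred from the reply,
# not stated outright): the ~40 tons/acre sugarcane yield is per 24-month
# harvest cycle, so annual revenue is half the per-cycle revenue.

# Sugarcane (one harvest per 24-month cycle)
sugarcane_yield_tons_per_acre = 40
sugarcane_price_ugx_per_ton = 60_000  # low end of the usual range
sugarcane_annual_ugx = sugarcane_yield_tons_per_acre * sugarcane_price_ugx_per_ton / 2

# Sorghum (two harvests per year)
sorghum_yield_kg_per_acre = 700
sorghum_price_ugx_per_kg = 1_300
sorghum_annual_ugx = sorghum_yield_kg_per_acre * sorghum_price_ugx_per_kg * 2

print(f"Sugarcane: ~Ugx {sugarcane_annual_ugx:,.0f}/acre/year")  # ~1,200,000
print(f"Sorghum:   ~Ugx {sorghum_annual_ugx:,.0f}/acre/year")    # ~1,820,000
```

Under those assumptions sorghum comes out ahead at the low-end sugarcane price, though at mid-range sugarcane prices the ranking would flip, so the single-buyer and price-crash points in the reply are doing much of the work.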

This is a quickly written opinion piece on what I understand about OpenAI. I first posted it to Facebook, where it had some discussion.

 

Some arguments that OpenAI is making, simultaneously:

  1. OpenAI will likely reach and own transformative AI (useful for attracting
...

This interview seems very relevant.

https://twitter.com/JMannhart/status/1795652390563225782

"But more to that, like, no one person should be trusted here. I don't have super voting shares. Like I don't want them. The board can fire me. I think that's important. I think the board over time needs to get like democratized to all of humanity.There's many ways that could be implemented.

But the reason for our structure and the reason it's so weird and one of the consequences of that weirdness was me ending up with no equity, is we think this technology, the benef... (read more)