RyanCarey

Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai

Comments

The importance of optimizing the first few weeks of uni for EA groups

It happens in Australian universities, and probably anywhere there's a large centralised campus. It wouldn't work as well at Oxbridge, though, because the teaching areas, and even the libraries, are spread across the city.

Guarding Against Pandemics

Important topic, though I find it hard to gauge the project without some basic info:

  1. In what ways is this actually a non-partisan effort (when the funding is going through ActBlue)?
  2. How are you managing risks, including but not limited to polarising EA politics or poisoning political relationships?
  3. To what extent has the project been vetted by funders and experts working in adjacent areas?

The motivated reasoning critique of effective altruism

It might be orthogonal to the point you're making, but do we have much reason to think that the problem with old-CFAR was the content? Or that new-CFAR is effective?

Digital person

Yeah, I haven't analysed Holden's intended meaning at all, but something like what you describe would make much more sense.

Digital person

It can't be right to say that every descendant of a digital person is by definition also a person. A digital person could spawn (by programming, or by any other means) a bot that plays rock-paper-scissors (RPS) randomly, in one line of code. Clearly not a person!
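
For concreteness, here's a minimal sketch of such a bot in Python (my illustration, not code from the original discussion):

```python
import random

# A complete random rock-paper-scissors player in one line: clearly not a person.
play = lambda: random.choice(["rock", "paper", "scissors"])
```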

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

What about the hypothesis that simple animal brains haven't been simulated because they're hard to scan? We lack a functional map of the neurons: which ones excite or inhibit one another, and other such relations.

Is effective altruism growing? An update on the stock of funding vs. people

Agree that we shouldn't expect large productivity/wellbeing changes. Perhaps a ~0.1 SD improvement in wellbeing, and a single-digit percentage improvement in productivity - small relative to the effects on recruitment and retention.

I agree that it's been good overall for EA to appear extremely charitable. It's also had costs, though: it has sometimes encouraged self-neglect, portrayed EA as 'holier than thou', made EA orgs look less productive, and made EA roles look like worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding base, it's been beneficial to de-emphasise sacrifice in order to place more emphasis on effectiveness. That better reflects what we're currently doing and who we want to recruit. So long as we take care to project an image that is coherent and not hypocritical, I don't see a problem with accelerating this pivot. My hunch is that even apart from salaries, it would be good, and I'd be surprised if it were bad enough to be decisive for salaries.

Is effective altruism growing? An update on the stock of funding vs. people

This kind of ambivalent view of salary increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.

If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you'd expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but far fewer do. You'd expect EAs to be nearly indifferent toward pay in job choice, but they're not. You'd expect that when you increase EAs' salaries, they'd simply donate a large portion on to great tax-deductible charities, so that >75% of the salary increase would be refunded on to other effective orgs. But when you say that the spending would be only a tenth as effective (rather than ~four-tenths as effective), clearly you don't expect that.

Although some EAs are insensitive to money in this way, 90% seems too high. Rather, with doubled pay, I think you'd see some quality improvements from an increased applicant pool, and some improvement in workforce size (>10%) and retention. Some would buy themselves some productivity and happiness. And yes, some would donate. I don't think you'd draw too many hard-to-detect "fake EAs" - we haven't seen many so far. On balance, it seems more likely to help quality than to hurt it on the margin.

I don't think the PR risk is so large at <$250k/yr levels. The closest thing I can think of is the commentary regarding folks at OpenAI, but that's a bigger target, with higher pay. If the message gets out that EA employees are not bound to a vow of poverty, and are actually compensated for >10% of the good they're doing, I'd argue that would enlarge and improve the recruitment pool on the margin.

(NB. As an EA worker, I'd stand to gain from increased salaries, as would many in this conversation - though not for the next few years at least, given the policies of my current (university) employer.)

In favor of more anthropics research

I think they believe in Wei Dai's updateless decision theory (UDT), or some variant of it, which is very close to Stuart Armstrong's anthropic decision theory, but you'd have to ask them which published or unpublished version, if any, they find most convincing.
