Owen Cotton-Barratt

Topic Contributions


2-factor voting (karma, agreement) for EA forum?

Yeah, to approximate this maybe I'd imagine something like adding five virtual upvotes and five virtual downvotes to each comment so it starts near 50%; that way it's a strong signal if you see something with an extreme value.

Maybe that's a bad idea, though; it makes it harder (you'd need to hover) to notice when something's controversial.
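The virtual-vote idea above is just pseudocount smoothing. A minimal sketch (the function name and the default of five virtual votes per side are illustrative, not anything the forum actually implements):

```python
def agreement_percent(upvotes: int, downvotes: int, prior: int = 5) -> float:
    """Agreement percentage smoothed with `prior` virtual votes on each side.

    With no real votes the displayed value starts at 50%, and it takes
    a strong real signal to push it toward the extremes.
    """
    agree = upvotes + prior
    total = upvotes + downvotes + 2 * prior
    return 100.0 * agree / total

# A brand-new comment starts neutral:
# agreement_percent(0, 0) -> 50.0
# Even ten unanimous upvotes only move it to 75%:
# agreement_percent(10, 0) -> 75.0
```

This illustrates the trade-off in the comment below: the smoothing keeps extreme displayed values meaningful, but it also mutes genuinely contested scores, so a controversial comment with many votes on both sides looks similar to a lightly voted one.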

2-factor voting (karma, agreement) for EA forum?

I think it would be better if the agreement were expressed as a percentage rather than a score, to make it feel more distinct // easier to remember which of the two was which.

Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins

(I'm a conscientious objector to LinkedIn. I think the business practices of requiring you to have an account to see other people's accounts, and of showing people who pay who's looked at their page, are super obnoxious.)

Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins

I expect people will vary on this. Maybe most people who would be happy filling in the form at all won't mind much about google drive link-sharing. (I imagine a little more nervousness b/c it's easier for people to share a link to their CV than to share e.g. a pdf of it.)

Of possible interest: 2 minutes' reflection from me says that I probably won't get to filling this in b/c "writing a CV" is something I will naturally feel perfectionist about // probably I'd need to spend 1-3 days on it to feel comfortable with it going to this group, and I probably don't want to spend that time. (If someone made a bid that something was really important I could imagine myself pushing through the discomfort and doing something faster, but I'm more interested in myself as a stand-in for other people with the same hangups than in literally getting a submission from me.)

If instead of asking for a CV you just had a series of questions about career history that I could fill in on the form, I'd be decently likely to spend 20-30 minutes doing that. The key difference is that if I'm doing it for a form there's no social expectation that it's the kind of thing people put time into polishing, so I don't feel bad about doing a quick rather than perfectionist version.

On Deference and Yudkowsky's AI Risk Estimates

I really appreciated this update. Mostly it checks out to me, but I wanted to push back on this:

Here’s a dumb thought experiment: Suppose that Yudkowsky wrote all of the same things, but never published them. But suppose, also, that a freak magnetic storm ended up implanting all of the same ideas in his would-be-readers’ brains. Would this absence of a causal effect count against deferring to Yudkowsky? I don’t think so. The only thing that ultimately matters, I think, is his track record of beliefs - and the evidence we currently have about how accurate or justified those beliefs were.

It seems to me that a good part of the beliefs I care about assessing are the beliefs about what is important. When someone has a track record of doing things with big positive impact, that's some real evidence that they have truth-tracking beliefs about what's important. In the hypothetical where Yudkowsky never published his work, I don't get the update that he thought these were important things to publish, so he doesn't get credit for being right about that.

Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins

For the group who have a CV but just don't want it publicly visible, maybe you should have a way of submitting that information that doesn't involve giving a public link?

Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins

I feel like maybe you should say something like "this will be quick if you have an up-to-date LinkedIn or online CV"? (I don't; I guess I'm unusual, but not super-unusual, among the population who would otherwise be happy filling this in. People might either not have gotten around to updating a CV recently, or not be happy having one publicly available.)

Impact markets may incentivize predictably net-negative projects

Nice, that's pretty interesting. (It's hacky, but that seems okay.)

It's easy to see how this works in cases where there's a single known-in-advance funder that people are aiming to get retro funding from (evaluated in five years, say). Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

Impact markets may incentivize predictably net-negative projects

Finally: on a meta level, the amount of risk you're willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment.

I think this is not quite right. It shouldn't be about what we think about existing funding mechanisms, but what we think about the course we're set to be on. I think that ~EA is doing quite a good job of reshaping the funding landscape especially for the highest-priority areas. I certainly think it could be doing better still, and I'm in favour of experiments I expect to see there, but I think that spinning up impact markets right now is more likely to crowd out later better-understood versions than to help them.

Impact markets may incentivize predictably net-negative projects

Additionally - I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?

I didn't follow this; could you elaborate? (/give an example?)
