All of Coafos's Comments + Replies

the additional burden of seeking out the lender for return of the item

That could be a plus. If you're running a local group and lend books at some public event (like tabling), then this will incentivise the takers to attend the next local EA event too, where they can bring the books back.

Teach a man to fish, they'll still starve in the jungle.

Note: I tried to do it on mobile, and it's not working everywhere. I tapped on post karma or question-answer karma, but it did not show the total vote count.

(On my laptop it works.)

3
Sarah Cheng
1y
Yeah, the forum relies a lot on hover effects, which don't work very well on mobile. Avoiding that in this case seems like it would overcomplicate the UI, though, so I'm not sure what an improved UX would look like. I'll add this to our backlog for triage.
Answer by Coafos · Jan 19, 2023
20
15
6

Cap the number of strong votes per week.

Strong votes with large weights have their uses, but those situations are uncommon, so instead of weakening strong votes, make them rarer.

The guideline says to use them only in exceptional cases, but there is no mechanism enforcing it: socially, strong votes are anonymous and look like standard votes; and technically, any number of them could be used. They could make a comment section appear very one-sided, but with rarity, some ideas can be lifted/hidden, and the rest of the section can be more d... (read more)
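A rough sketch of what such a cap could look like in code (the weekly limit, function names, and data shapes here are made up for illustration, not taken from the Forum's actual implementation):

```python
from datetime import datetime, timedelta

# Hypothetical weekly cap; the number 5 and these names are invented for
# illustration, not the Forum's actual implementation.
WEEKLY_STRONG_VOTE_LIMIT = 5

def can_cast_strong_vote(strong_vote_times: list[datetime], now: datetime) -> bool:
    """Allow a strong vote only if the user has cast fewer than the
    weekly limit of strong votes in the last seven days."""
    week_ago = now - timedelta(days=7)
    recent = [t for t in strong_vote_times if t >= week_ago]
    return len(recent) < WEEKLY_STRONG_VOTE_LIMIT
```

If the check fails, the vote could simply be recorded as a normal vote instead of being rejected outright.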

1
Max Clarke
1y
One situation I use strong votes for is whenever I do "upvote/disagree" or "downvote/agree". I do this to offset others who tend not to split their votes.
2
Max Clarke
1y
I think some kind of "strong vote income", perhaps just a daily limit as you say, would work.
2
Nathan Young
1y
I will sort of admit to not being that responsible. I probably use a couple of strong votes a blog, usually when I think something is really underrated. I guess I might be more sparing now.
5
Michael_PJ
1y
I've never noticed this guideline! If this is the case, I would prefer to make it technically harder to do. I've just been doing it if I feel somewhat strongly about the issue...

Note that without restraints this opens up an influence market, which could lead to plutocracy.

I, as an individual, agree with the statement. No one is infallible; every organization has bigger or smaller problems.

On the other hand, idols and community leaders provide an easy point for concentration of force. A few big coalitions have a larger impact than many scattered small groups, and if someone wants to organize a campaign, a few leaders can reach a decision much faster than a large group of individuals.

If no one agrees on what to do, then the movement of the Movement will grind to a halt. That's why there is value in keeping EA high-trust, and so... (read more)

This is my favourite drama. In my interpretation it's more about AI risk (the last idea we need, the invention of all inventions), but Dürrenmatt was limited by the technology of his age. I mean, if you think Solomon is the AI character, then the end of the play is about Solomon escaping the "box" while trapping their creators inside.

I like conspiracy theories, but an economic one is more probable than a political, governmental affair. I think Coinbase and Binance may have done something in the no-law-only-code world of crypto, but the target was FTX, a very visible competitor; the funds for EA activities were just collateral damage. To mitigate risks like that, EA-aligned organizations should not rely on a single source of funding.

1
trevor1
1y
1. This is not a conspiracy theory; it's basic risk management for any large organization that has its own ideas about various nation-state-level status quos such as biosecurity.
2. I stated that it was probably economic, and probably not governmental. This is basic risk management. A 94/6% risk split clearly merits further investigation, given that the 6% chance is covert activity targeting EA.

I agree. The motto is "doing good better", not "doing good the best".

I make a big assumption: that the utility gains are multiplied together. There is some basis to it: if there are independent sources of fatality, the chance of surviving all of them is the product of the survival chances for each fatality source.

If you want to maximise the result of the multiplication, take the logarithm, and it turns into a sum. In that formulation you can see that it's not the absolute change that matters, but the relative one. Here I wanted to show an example of it, like a risky vs safe bet over 1 vs 50 years, but I kinda got stuck and realized I don't really understand it, so I retract, but thanks for the question.
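Still, the basic log identity is easy to check numerically. Here is a small sketch with made-up utility numbers, just to show that maximizing the product is the same as maximizing the sum of logs, and that a fixed relative gain adds the same amount regardless of the starting level:

```python
import math

# Made-up utility factors for independent "survival" sources; the overall
# utility is their product.
utilities = [0.9, 0.8, 0.95]

product_utility = math.prod(utilities)
log_sum = sum(math.log(u) for u in utilities)

# Maximizing the product is the same as maximizing the sum of logs.
assert math.isclose(math.log(product_utility), log_sum)

# A 10% relative improvement adds the same log increment whether the
# factor starts at 0.5 or at 0.9, so only the relative change matters.
print(math.log(0.5 * 1.1) - math.log(0.5))  # ~0.0953
print(math.log(0.9 * 1.1) - math.log(0.9))  # ~0.0953
```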

Could you describe in other words what you mean by "friend group"?

While a group formed around hiking, tabletop games or some fanfic may not solve AI (ok, the fanfic part might), friends with a common interest in ships and trains probably have an above-average shot at solving global logistics problems.

7
mlsbt
2y
I’m using ‘friend group’ as something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers. EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be directly solved by this relatively tiny subset of technically-inclined do-gooders, nice people who like meet-ups and have suspiciously convergent interests outside of AI stuff. EA is a friend group, algebraic geometers are not. Importantly, even if you don’t believe alignment is that difficult, we’d still solve it more quickly without tacking on this whole social framework. It worries me that alignment research isn’t catching on in mainstream academia (like climate change did); this seems to indicate that some factor in the post above (like groupthink) is preventing EAs from either constructing a widely compelling argument for AI safety, or making it compelling for outsiders who aren’t into the whole EA thing. Basically we shouldn’t tie causes unnecessarily to the EA community - which is a great community - unless we have a really good reason.

While I think this post touches on some very important points - EA, as a movement, should be more conscious of its culture - the proposed solution would be terrible in my opinion.

Splitting up EA would mean losing a common ground. Currently, resource allocation for different goals can be made under the "doing good better" principles, whatever that means. Without that, the causes would compete with each other for talent, donors, etc.; networks would fragment, and efficiency would decrease.

However, the EA identifying people should more c... (read more)

Probability-theoretic "better" is intransitive. See non-transitive dice.

Imagine your life is a die, and you have three options:

  • 4 4 4 4 4 1
    • You live a mostly peaceful life, but there is a small chance of doom.
  • 5 5 5 2 2 2
    • You go on a big adventure: either a treasure or a disappointment.
  • 6 3 3 3 3 3
    • You put all your cards into a lottery for an epic win, but if you fail, you will carry that with you.

If we compare them: peace < adventure < lottery < peace, so I would deny transitivity.
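If you want to check the cycle, here is a small script that counts pairwise wins for these three dice; all three have the same mean (3.5), so the comparison is purely about win probabilities:

```python
from itertools import product

# The three "lives" from the comment, written out as six-sided dice.
peace     = [4, 4, 4, 4, 4, 1]
adventure = [5, 5, 5, 2, 2, 2]
lottery   = [6, 3, 3, 3, 3, 3]

def win_probability(a, b):
    """Probability that die a rolls strictly higher than die b."""
    wins = sum(1 for x, y in product(a, b) if x > y)
    return wins / (len(a) * len(b))

print(win_probability(adventure, peace))    # 21/36 ~ 0.58: adventure beats peace
print(win_probability(lottery, adventure))  # 21/36 ~ 0.58: lottery beats adventure
print(win_probability(peace, lottery))      # 25/36 ~ 0.69: peace beats lottery
```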

5
OscarD
2y
The intransitive dice work because we do not care about the margin of victory. In expected value calculations the same trick does not work, so these three lives are all equal, with expected value 7/2.

You say the first throw has an expected value of 693.5 (= 700·215/216 − 700·1/216) QALY, but it is not precise. The first throw has an expected value of 693.5 QALY only if your policy is to stop after the first throw.

If you continue, then the QALYs gained from these new people might decrease, because in the future there is a greater chance that these 10 new people disappear, which decreases the value of creating them.
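A small numerical check of the figure above, plus an illustration of the continuing-policy point. The payoff structure (gain 700 QALY with probability 215/216, lose 700 with probability 1/216) is simply read off the formula quoted above, so this is a sketch rather than the original post's exact setup:

```python
# Single-throw expected value, using the numbers quoted above:
# +700 QALY with probability 215/216, -700 QALY with probability 1/216.
ev_one_throw = 700 * 215 / 216 - 700 * 1 / 216
print(ev_one_throw)  # ~693.5

# The "if you continue" point: repeating the throw keeps re-exposing the
# gains to the 1/216 loss chance, and the chance of surviving n throws
# shrinks toward zero.
for n in (1, 10, 100, 1000):
    print(n, (215 / 216) ** n)
```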