
Mati_Roy's Comments

Mati_Roy's Shortform

Although, maybe the EA Community has a certain prestige that makes it a good position from which to propagate ideas through society. So if, for example, the EA Community broadly acknowledged anti-aging as an important problem, even without working much on it, it might get other people to work on it who would otherwise have worked on something less important. So in that sense it might make sense. But still, I would prefer if it were phrased more explicitly as such, like "The EA Community should acknowledge X as an important problem".

Posted a similar version of this comment here: https://www.facebook.com/groups/effective.altruists/permalink/3166557336733935/?comment_id=3167088476680821&reply_comment_id=3167117343344601

Mati_Roy's Shortform

Every once in a while, I see someone write something like "X is neglected in the EA Community". I dislike that. The part about "in the EA Community" seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

EA Forum feature suggestion thread

Have a nice format for linkposts in shortforms.

With the goal of having the forum fully replace the EA subreddit at some point.

Act utilitarianism: criterion of rightness vs. decision procedure
Newcomb's Trolley Problem
A fortune-teller with a so-far perfect record of predictions has placed either 0 or 100 persons in an opaque box some distance down the track. If the fortune-teller predicted you will pull the lever, killing the 5 people tied to the track, ze will have left the opaque box empty. If the fortune-teller predicted you will NOT pull the lever (avoiding the 5 people tied to the track but still hitting the box), ze will have placed 100 people into the opaque box. Since the fortune-teller has already made zir choice of how many people to put into the opaque box, is it more rational to pull the lever or not?

Accompanying image: https://photos.app.goo.gl/LvaVQye6tJBVqw2k8

Here, the act that fulfils the criterion of rightness is the opposite of the act you will take, whether you pull the lever or not (by the design of the thought experiment).

The decision procedure that maximizes the criterion of rightness is to pull the lever (under a few further assumptions, such as: no quantum mixed strategies, no other superbeings punishing you for having this decision procedure).
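The payoff structure behind this can be sketched in a few lines of Python. The `deaths` helper below is a hypothetical illustration of the thought experiment's setup, not anything from the original post: the box is empty exactly when the fortune-teller predicted you would pull.

```python
# Sketch of the Newcomb's Trolley payoffs (hypothetical helper, for illustration).
def deaths(pull: bool, predicted_pull: bool) -> int:
    # The fortune-teller leaves the box empty iff ze predicted a pull.
    box_occupants = 0 if predicted_pull else 100
    if pull:
        return 5            # lever pulled: the 5 tied to the track die, box untouched
    return box_occupants    # lever not pulled: the trolley hits the box

# With a so-far-perfect predictor, prediction matches action, so each
# *decision procedure* yields a fixed outcome:
policy_outcomes = {pull: deaths(pull, predicted_pull=pull) for pull in (True, False)}
# pulling -> 5 deaths, not pulling -> 100 deaths
```

Note how this separates the two notions in the post title: holding the prediction fixed, the opposite act is always better (`deaths(False, True)` is 0 versus 5, and `deaths(True, False)` is 5 versus 100), yet the *procedure* of pulling dominates once the prediction correlates with the action.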

What do you think about bets of the form "I bet you will switch to using this product after trying it"?

I just tried Roam for a few minutes. I also noticed I had tried it already in December 2019.

My current favorite note taking apps are Gdoc and https://coda.io/

What do you think about bets of the form "I bet you will switch to using this product after trying it"?

I love it! I've been thinking about this for years, and I hope more people try this. The bet would act as insurance for the time I put into exploring the product.

Can I archive the EA forum on the Wayback Machine (Internet Archive, archive.org)?

AFAIK, JP Addison is the only dev for the EA Forum, and the main code base is maintained by the LW team: Jim Babcock, Oliver Habryka, Ruby Bloom, Raymond Arnold.
