All of 𝕮𝖎𝖓𝖊𝖗𝖆's Comments + Replies

Refuting longtermism with Fermat's Last Theorem

I disagree that the unknowns cannot be reasoned about.

There are known unknowns and unknown unknowns, and we can quantify both with "uncertainty".

You can say: "here's this thing I know exists, but I have no measure of it. I estimate it at x".

You can also quantify "unknown unknowns". You can say "there are things that I don't know, and I'm not even aware of them". You can make estimates about this as well.

You can go even further. When considering your model, you can have uncertainty about its accuracy. You can quantify your uncertainty about y... (read more)
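(A minimal sketch of what I mean, using the law of total probability for some event E of interest; the probabilities below are purely hypothetical illustrations, not estimates I'm defending:)

$$P(E) = P(E \mid \text{model correct})\,P(\text{model correct}) + P(E \mid \text{model wrong})\,P(\text{model wrong})$$

With hypothetical numbers, e.g. $P(E) = 0.9 \times 0.7 + 0.3 \times 0.3 = 0.72$: even a model I trust quite a lot has its within-model confidence of 0.9 diluted to 0.72 once I account for the chance that the model itself is wrong.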

1 · astupple · 3d
Making an estimate about something you're unaware of is like guessing the likelihood of the discovery of nuclear energy in 1850. I can put a number on the likelihood of discovering something totally novel, but applying a number doesn't mean it's meaningful. A psychic could make quantified guesses and tell us about the factors involved in that assessment, but that doesn't make it meaningful.
So, I Want to Be a "Thinkfluencer"

> May I ask why you started by learning category theory?

I started it on a whim (someone linked a LW post on it on Twitter) and found it engaging enough to stick with. I don't want to quit because I'm trying to break my habit of abandoning projects I start. It's also not the case that I'm finding it too difficult to progress. I do think my progress is slow, but a more mathematically literate friend disagreed (from their perspective, I was going pretty fast), so I think I should stick with the project, so that I can form a habit of following my projects t... (read more)

So, I Want to Be a "Thinkfluencer"

> Why would posting mainly in these tiny communities be the best approach? First, I think these communities are already far more familiar with the topics you plan to publish on than the average reader. Second, they are, as I said, tiny.

The reports are for my "learning about the world" phase, not attempts at becoming a public intellectual. 

As for why LW/EAF:

  • Feedback from my own communities does more to sustain my motivation than feedback from randoms
  • I'm more likely to get valuable feedback from these communities than others especiall
... (read more)
So, I Want to Be a "Thinkfluencer"

Is promotion to the Frontpage automated? Do mods approve by default and then demote posts they think are not worth promoting?

I'm used to the LessWrong approach, where posts are personal blog posts by default and mods manually approve promotion to the front page. I'm not sure I want to decide for the community whether my post should be on the front page, but I do want it to be promoted to the front page if the mods approve it.

I feel like frontpage-by-default will disincentivise demoting a high-engagement post to a personal blog post, irrespective of whether it's otherwise frontpage material.

2 · Larks · 4d
As I understand it, posts are frontpage by default unless you or a mod decide otherwise.
War Between the US and China: A case study for epistemic challenges around China-related catastrophic risk

Thanks for making this case. More investment in China studies seems straightforwardly valuable.

(Still reading.)

1 · Jordan_Schneider · 7d
thanks for reading!
Are "Bad People" Really Unwelcome in EA?

That sounds fair.

"You shouldn't fund/patronise me or support my research" is probably a recommendation I'd be loathe to make. (Excluding cases where I'm already funded well enough that marginal funding is not that helpful.)

Selflessly rejecting all funding because I'm not the best bet for this particular project is probably something that I'd be unwilling to do.

(But in practice, I expect that probabilistic reasoning would recommend funding me anyway. I think it's unlikely that anyone would have enough confidence to justify not funding a plausible pathway until it's too late.)

But yeah, I think this is an example of where selfishness would be an issue.

Thanks for the reply!

2 · Nathan Ashby · 10d
In all fairness, I expect most people would be very reluctant to recommend that resources be directed away from the causes or organisations that give them status. Having an aversion to "selfishness" might overcome this, but more likely it would just make them invent reasons why their organisation/area really is very important.
Are "Bad People" Really Unwelcome in EA?

I plan to seek status/glory through making the world a better place.

That is, my desire for status/prestige/impact/glory is interpreted through an effective-altruism-like framework.

"I want to move the world" transformed into "I want to make the world much better".

"I want to have a large impact" became "I want to have a large impact on creating a brighter future".

I joined the rationalist community at a really impressionable stage. My desire for impact/prestige/status, etc. persisted, but it was directed at making the world better.

I think the question det

... (read more)
Are "Bad People" Really Unwelcome in EA?

I now think it was a mistake/misunderstanding to describe myself as non-altruistic; I believe I was using an unusually high standard.

(That said, when I started the 10% thing, I did so under the impression that it was the sacrifice I needed to make to gain acceptance in EA. Churches advocate a 10% tithe as well [which I didn't pay because I wasn't actually a Christian (I deconverted at 17, and open atheism is not safe, so I've hidden [and still hide] it)], but it did make me predisposed to putting up with that level of sacrifice [I'd faced a lot o... (read more)

3 · Daniel_Eth · 6d
"That said, when I started the 10% thing, I did so under the impression that it was what the sacrifice I needed to make to gain acceptance in EA" If this sentiment is at all widespread among people on the periphery of EA or who might become EA at some point, then I find that VERY concerning. We'd lose a lot of great people if everyone assumed they couldn't join without making that kind of sacrifice.
Are "Bad People" Really Unwelcome in EA?

I think I've been defining "altruism" in an overly strict sense.

Rather than say I'm not altruistic, I mostly mean that:

  • I'm not impartial to my own welfare/wellbeing/flourishing
  • I'm much less willing to undertake personal hardship (frugality, donating the majority of my income, etc.) and I think this is fine

10% is not that big an ask (I can sacrifice that much personal comfort), but donating 50% or forgoing significant material comfort would be steps I would be unwilling to take.

(Reorienting my career doesn't feel like a sacrifice because I'll be able to have a larger positive impact through the career switch.)

5 · Lorenzo · 10d
To me, those are very different claims! That's very relative! It's more than what the median EA gives, it's way more than what the median non-EA gives. When I talk to non-EA friends/relatives about giving, the thought of giving any% is seen as unimaginably altruistic. Even people donating 50% are not donating 80%, and some would say it's not that big of an ask. IMHO, claiming that only people making huge sacrifices and valuing their own wellbeing at 0 can be considered "altruists" is a very strong claim that doesn't match how the word is used in practice. As Wikipedia says [https://en.wikipedia.org/wiki/Altruism]:
Are "Bad People" Really Unwelcome in EA?

I have a significantly consequentialist world view.

I am motivated by the vision of a much better world.

I am trying to create such a better world. I want to devote my career to that project.

I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".

The ways I perceive that I differ from EAs are:

  • Embracing my desire for status/prestige/glory/honour
  • I'm not impartial to my own welfare/wellbeing/flourishing
  • I'm much less willing to undertake pers
... (read more)
4 · iporphyry · 10d
If this is true, then I think you would be an EA. But from what you wrote, it seems that you have a relatively large term in your philosophical objective function (as opposed to your revealed objective function, which for most people gets corrupted by personal stuff) on status/glory. I think the question determining your core philosophy would be which term you consider primary. For example, if you view them as a means to an end of helping people and are willing to reject seeking them if someone convinces you they are significantly reducing your EV, then that would reconcile the "A" part of EA. A piece of advice I think younger people tend to need to hear is that you should be more willing to accept that "X is something I like and admire, and I am also not X" without having to then worry about your exact relationship to X or redefining X to include yourself (or looking for a different label Y). You are allowed to be aligned with EA but not be an EA, and you might find this idea freeing (or I might be fighting the wrong fight here).
Are "Bad People" Really Unwelcome in EA?

I'm a rationalist.

I take scope sensitivity very seriously.

Impartiality. Maybe I'm more biased towards rats/EAs, but not in ways that seem likely to be decision relevant?

You could construct thought experiments in which I wouldn't behave in an ideal utilitarian way, but for scenarios that actually manifest in the real world, I think I can be approximated as following some strain of preference utilitarianism?

3 · Linch · 10d
I'm trying to question this in the abstract, rather than talking about you specifically.
Are "Bad People" Really Unwelcome in EA?

Personal impact on a brighter world.

I'm not a grant maker and don't want to be.

I am not aware of any realistic scenario where I would act differently from someone who straightforwardly wanted to improve the world altruistically.

(The scenarios in which I would act differently seem very contrived and unlikely to manifest in the real world.)

Could you describe a realistic scenario in which you think I'd act meaningfully differently from an altruistic person, in a way that would make me a worse employee/coworker?

5 · NunoSempere · 10d
So the problem with this is that I don't know you. That said, here is my best shot: In this example, as perhaps in others, capabilities really matter. For example, people have previously mentioned offering Terence Tao a few million to work on AI alignment, and his motivations there presumably wouldn't matter, just the results.
Are "Bad People" Really Unwelcome in EA?

But that's mostly relevant in small-scale altruism? Like, I wouldn't give to beggars on the street. And I wouldn't make great personal sacrifices (e.g. frugal living, donating the majority of my income to charity [I was donating 10% to GiveWell's Maximum Impact Fund until a few months ago (Forex issues [I'm in Nigeria]; now I'm unemployed)]) to improve the lives of others.

But I would (and did!) reorient my career to work on the most pressing challenges confronting humanity, given my current/accessible skill set. I quit my job as a web developer; I'm going b... (read more)

3 · Lorenzo · 10d
I think this is very admirable and wish you success! If indeed you're acting exactly like someone who straightforwardly wanted to improve the world altruistically, that's what matters :) Edit: oh I see you were also donating 10%, that's also very altruistic! (At least from an outside view, I trust you on your motivations)
Are "Bad People" Really Unwelcome in EA?

Huh. If I had a bright idea for AI Safety, I'd share it and expect to get status/credit for doing so.

The idea of hiding any bright alignment research ideas I came up with didn't occur to me.

I'm under the impression that, because of common-sense morals (i.e. I wouldn't deliberately sabotage things just to get the chance to play hero), selfishly motivated EAs like me don't behave particularly differently in common scenarios.

There are scenarios where my selfishness will be highlighted, but they're very, very narrow states and unlikely to materialise in the real world (highl... (read more)

2 · Max Clarke · 10d
Yeah, the example above with choosing not to get promoted or not to receive funding is a more realistic scenario. I agree these situations are somewhat rare in practice. Re. AI Safety, my point was that these situations are especially rare there (among people who agree it's a problem, which is about states of knowledge anyway, not about goals). Thanks for this post, I think it's a good discussion.
Are "Bad People" Really Unwelcome in EA?

I think there are many EAs with "pure" motivations. I don't know what the distribution of motivational purity is, but I don't expect to be a modal EA.

I came via osmosis from the rat community (partly due to EA caring about AI safety and x-risk). I was never an altruistic person (I'm still not).

I wouldn't have joined a movement focusing on improving lives for the global poor (I have donated to GiveWell's Maximum Impact Fund, but that's due to value drift after joining EA).

This is to say that I think that pure EAs exist, and I think that's fine, and I think ... (read more)

On Deference and Yudkowsky's AI Risk Estimates

(I hadn't seen this reply when I made my other reply).

What do you think of legitimising, for the future, behaviour that challenges the overall credibility of other community members?

I am worried about displacing concrete object-level arguments as the sole domain of engagement: a culture in which arguments are not allowed to stand by themselves, in which people have to be concerned about prior credibility, track record, and legitimacy when formulating their arguments...

It feels like a worse epistemic culture.

3 · Karthik Tadepalli · 2mo
Expert opinion has always been a substitute for object-level arguments because of deference culture. Nobody has object-level arguments for why x-risk in the 21st century is around 1/6: we just think it might be because Toby Ord says so and he is very credible. Is this ideal? No. But we do it because expert priors are the second-best alternative when there is no data to base our judgments on. Given this, I think criticizing an expert's priors is functionally an object-level argument, since the expert's prior is so often used as a substitute for object-level analysis. I agree that a slippery slope endpoint would be bad, but I do not think criticizing expert priors takes us there.
On Deference and Yudkowsky's AI Risk Estimates

To expand on my complaints in the above comment.

I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments.

I think that's unhealthy and contrary to collaborative knowledge growing.

Yudkowsky has laid out his arguments for doom at length. I don't fully agree with those arguments (I believe he's mistaken in 2-3 serious and important ways), but he has laid them out, and I can disagree with him on the object level because of that.

Given that the explicit argument... (read more)

6 · Guy Raveh · 2mo
I don't think this is realistic. There is much more important knowledge than one can engage with in a lifetime. The only way of forming views about many things is to somehow decide who to listen to, or at least how to aggregate relevant, more strongly based opinions (so, who to count as an expert and who not to, and with what weight).

> I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments.

I think it's fair to talk about a person's lifetime performance when we are talking about forecasting. When we don't have the expertise ourselves, all we have to go on is what little we understand and the track records of the experts we defer to. Many people defer to Eliezer so I think it's a service to lay out his track record so that we can know how meaningful his levels of confidence and special insights into this kind of problem are. 

> I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments.

I think I roughly agree with you on this point, although I would guess I have at least a somewhat weaker version of your view. If discourse about people's track records or reliability starts taking up (e.g.) more than a fifth of the space that object-level argument does, within the most engaged core of people, then I do think that will tend to suggest an unhealthy or at least not-very-intellectuall... (read more)

On Deference and Yudkowsky's AI Risk Estimates

I prefer to just analyse and refute his concrete arguments on the object level.

I'm not a fan of engaging with the person of the arguer instead of their arguments.

Granted, I don't practice epistemic deference with regard to AI risk (so I'm not the target audience here), but I'm really not a fan of this kind of post. It rubs me the wrong way.

Challenging someone's overall credibility instead of their concrete arguments feels like bad form and [logical rudeness](https://www.lesswrong.com/posts/srge9MCLHSiwzaX6r/logical-rudeness).

I wish EAs did not engage in such be... (read more)

3๐•ฎ๐–Ž๐–“๐–Š๐–—๐–†2mo
To expand on my complaints in the above comment. I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments. I think that's unhealthy and contrary to collaborative knowledge growing. Yudkowsky has laid out his arguments for doom at length. I don't fully agree with those arguments (I believe he's mistaken in 2-3 serious and important ways), but he has laid them out, and I can disagree with him on the object level because of that. Given that the explicit arguments are present, I would prefer posts that engaged with and directly refuted the arguments if you found them flawed in some way. I don't like this direction of attacking his overall credibility. Attacking someone's credibility in lieu of their arguments feels like a severe epistemic transgression. I am not convinced that the community is better for a norm that accepts such epistemic call-out posts.

> I prefer to just analyse and refute his concrete arguments on the object level.

I agree that work analyzing specific arguments is, overall, more useful than work analyzing individual people's track records. Personally, partly for that reason, I've actually done a decent amount of public argument analysis (e.g. here, here, and most recently here) but never written a post like this before.

Still, I think people do in practice tend to engage in epistemic deference. (I think that even people who don't consciously practice epistemic deference tend to be influ... (read more)

Dragon God's Shortform

There is an ongoing "friend matching" campaign for GiveWell.

Anyone who donates through a friend link will have their donation matched up to $250. Please donate.

My friend link.

5 · Kirsten · 2y
I don't totally understand what's going on here. If I used your link to donate to AMF, where would the match be coming from? (a) Other, unrestricted GiveWell donations, (b) a donor who specifically wanted to match first-time donors, or (c) someone else...?