
Timothy Chan

614 karma · Joined Aug 2021

Bio

Aspiring suffering-reducer and empirical AGI safety researcher from Hong Kong. Often daydreaming about consciousness. Some preference for traditional liberalism.

I like to learn. My main intellectual influences (name / year of 'first contact' / rough impact on me): Tomasik/2016/90%, Haidt/2022/70%, Schopenhauer/2015/40%, Dawkins/2010-11/40%, Jesus/1999/10%

Comments (61)

FWIW, Brian Tomasik does a fuzzies/utilons split thing too. One justification is that it helps avoid cognitive dissonance between near-term causes and, in his mind, more effective longtermist causes.

My position, in contrast, is that I acknowledge the epistemic force of far-future arguments but maintain some commitment to short-term helping as an intrinsic spiritual impulse. Along the lines of Occam's imaginary razor, this allows me to avoid distorting my beliefs about the far-future question based on emotional pulls to stop torture-level suffering in the present. In the face of emotion-based cognitive dissonance, it's often better to change your values than to change your beliefs.

It might be overly confusing to call it "changing [my ideal] values". It's more that I have preferences of both kinds: some that seem like ones I would ideally like to keep (minimizing suffering in expectation), and some that, for better or worse, I have as a human (drives to reduce suffering in front of me, sticking to certain principles...).

If a split in donations/personal focus makes me more effective at the far-future work that I think matters more for utilons - enough that expected utilons go up overall - then the split seems worth its price.

Yeah, in a scenario with "nation-controlled" AGI, it's hard to see people from the non-victor sides not ending up (at least) as second-class citizens - for a long time. The fear of ending up like this, and the lack of any guarantee against it, makes cooperation on safety more difficult - and the fear also kind of makes sense? It would be great if governance people manage to find a way to alleviate that fear - if that's even possible. Heck, even allies of the leading state might be worried - it doesn't feel too good to end up as a vassal state. (Added later (2023-06-02): This may be a question that comes up as AGI discussions become mainstream.)

I wouldn't rule out both Americans and Chinese outside of their respective allied territories being caught in the crossfire of a US-China AI race.

Political polarization on both sides in the US is also very scary.

I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of "those holding views popular in some context abuse power -> those who dislike that abuse retaliate in other contexts -> holding the original views then gets people harmed by power-abusers in those other contexts".

Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of the variance seems under-appreciated in general. E.g., if you've attended multiple universities, you might notice a lot of overlap in people's "impressiveness" despite differences in official university rankings. People could be less confused by thinking more in terms of means/medians, variances, and full distributions of ability/traits, rather than comparing groups by point estimates alone - as the sketch below illustrates.
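A minimal sketch of that point, with made-up numbers (two groups sharing a spread of 15 but differing in mean by 5):

```python
import random

# Hypothetical numbers: two universities whose students differ in mean
# "impressiveness" but share substantial within-group variance.
random.seed(0)

def sample(mean, sd, n=100_000):
    return [random.gauss(mean, sd) for _ in range(n)]

higher_ranked = sample(mean=105, sd=15)
lower_ranked = sample(mean=100, sd=15)

# How often is a random student from the lower-ranked university more
# "impressive" than a random student from the higher-ranked one?
p = sum(a > b for a, b in zip(lower_ranked, higher_ranked)) / len(lower_ranked)
print(f"P(lower-ranked > higher-ranked) ~ {p:.2f}")  # ~0.41 with these numbers
```

Comparing only the point estimates (105 vs. 100) suggests a clean ranking; the distributions say that an individual from the "worse" group comes out ahead roughly 41% of the time.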

Some counter-considerations:

  • Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance - especially in EA - and you can't easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or from some religion to none. Whether someone has gone through that process might give you valuable information about how they think about/reflect on things, and whether you consider that to be good thinking/reflection.
    • For example, from an irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or thinks that we're less likely to be in a simulation because a benevolent, omnipotent being wouldn't allow that to happen to us.
    • Anecdotally, on average, I find people who have gone through the process of abandoning the religion they were raised with, especially at a young age, to be more truth-seeking and less influenced by popular, but not necessarily true, views.
  • Religion seems to cover too much. Some forms of it seem to offer immunity for acting in certain ways, and the opportunity to cheaply attack others who disagree with it. In other communities, religion might be used to justify poor material/physical treatment of some groups of people, e.g. women and gay people. While I don't think accepting those religions will change the EA community too much, it does send a signal to - and could negatively affect - the wider world if there's sufficient buy-in/enough of an alliance/enough comfort with them.

But yeah, generally, sticking to the Schelling point of "don't discriminate by religion (or lack thereof)" seems good. Also, if someone is religious and in EA (i.e., in an environment without many people who think like them), that's probably good evidence that they really want to do good and are willing to cooperate with others to do so, despite differing in important ways. It seems a shame to lose them.

I've been doing a 1-year "conversion master's" in CS (I previously studied biochemistry). I've taken as many AI/ML electives as I'm permitted to/can handle, but I missed out on an intro-to-RL course. I'm planning to take some time to (semi-independently) up-skill in AI safety after graduating. This might involve some projects and some self-study.

It seems like a good idea to be somewhat knowledgeable on RL basics going forward. I've taken (paid) accredited, distance/online courses (with exams etc.) concurrently with my main degree and found them to be higher quality than common perception suggests - although it does feel slightly distracting to have more on my plate.

Is it worth doing a distance/online course in RL (e.g. https://online.stanford.edu/courses/xcs234-reinforcement-learning ) as one part of the up-skilling period following graduation? Besides the Stanford online one that I've linked, are there any others that might be high quality and worth looking into? Otherwise, are there other resources that might be good alternatives?

So in my comment I was only trying to say that the comment you responded to seemed to point to something true about the preferences of women in general vs. the preferences of women who are "highly educated urban professional-managerial class liberals in the developed world".

Such perspectives seem easy to miss for people (in general/of all genders, not just women) belonging to the elite U.S./U.S.-adjacent progressive class - a class that has disproportionate influence over other cultures, societies etc., which makes it seem worthwhile to discuss in spaces where many belong to this class.

About your other point, I guess I don't have much of an opinion on it (yet), but my initial impression is that openness comes in degrees. Compared to other movements, I also rarely observe 'EA' openly declaring itself hostile to something (e.g. "fraud is unacceptable" exists, but there aren't really statements on socialism, conservatism, religions, culture...).

There might be differences between identifying with feminism and 'being open to scholars of feminism, queer studies and gender studies', though. Most Americans probably aren't familiar enough with academia to know its latest thinking.

And just as different people have different notions of what counts as discriminatory, racist, or sexist (and what doesn't), it's possible that different people have different notions of what 'feminism' means. (Some might consider it a position supporting equal rights between the sexes - others, a position supporting women's rights. They might be thinking of the second, third, or fourth wave, etc.)

The supplementary document containing the survey questions suggests the question asked was "How well, if at all, do each of the following describe you?" followed by "Environmentalist", "Feminist" and "A supporter of gun rights" (in random order), which doesn't seem to specify one specific notion of 'feminist' for survey participants to consider. 

Although, to be fair, maybe there's actually more agreement among Americans on the definition of feminist (in the year of the survey, 2020) than I'm expecting.

In any case, I expect the differences in preferences of elite Anglosphere/U.S. women, and not-necessarily-elite, non-Anglosphere/non-U.S. women in general (e.g., in Europe, Asia, South America) would still be quite large.

(Presumably if coding can be done faster, AI can be created more quickly too)

Wait, which mechanisms did you have in mind? 

AI -> software coded up faster -> more software people go into AI -> AI becomes more popular?

AI -> coding for AI research is easier -> more AI research

AI -> code to implement neural networks written faster -> AI implemented more quickly (afaik not too big a factor? I might be wrong though)

AI -> code that writes e.g. symbolic AI from scratch -> AI?
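To make the compounding worry concrete, here's a toy sketch - purely illustrative, with made-up parameters (the `feedback` coefficient and the 0.1 base progress rate) - of how a loop where AI speeds up AI research differs from one where it doesn't:

```python
# Toy model of the feedback loop: AI capability speeds up AI research,
# which in turn raises capability. All numbers are invented.
def simulate(feedback: float, steps: int = 10) -> float:
    capability = 1.0
    for _ in range(steps):
        research_speed = 1.0 + feedback * capability  # AI assists coding/research
        capability += 0.1 * research_speed            # progress made this step
    return capability

for f in (0.0, 0.5, 1.0):
    print(f"feedback={f}: capability after 10 steps = {simulate(f):.2f}")
```

With zero feedback, growth is linear; with a positive feedback term, the same baseline progress rate compounds - which is the sense in which faster coding could mean AI being "created more quickly".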

Answer by Timothy Chan · Apr 13, 2023

Yeah, there might be no end to how much you can understand about EA (or, more generally, stuff about the world that's relevant to altruism).

I certainly have my own blindspots, but when talking to many other EAs I do notice that there are a lot of topics they seem unfamiliar with:

  • The extent of the uncertainty we have about the philosophy of mind
  • Chances of being in a simulation/percentage of copies in a simulation and how that affects the expected value of various actions
  • Philosophy of science/core assumptions behind how one thinks the world works
  • Views of people in other parts of the world/society
  • Relatedly, how futures led by influential people in other countries might compare to futures led by influential people in their own countries.
  • Reasons to think that quite a lot of things have low tractability
  • Noticing how they came to be who they are/their place in history
  • Impact of various activities on wild animal suffering

I don't claim to know everything about the above. And of course, others who know more about other things might notice that there are a lot of topics I'm unfamiliar with. Some topics I haven't really thought about much relative to a lot of people working at EA organizations:

  • Anthropics
  • Alien civilizations, how that affects priorities in longtermism
  • Ethical theories that are super formalized (my moral anti-realism doesn't make me that motivated to look into them)
  • Acausal interactions and decision theory

Note that I was specifically talking about people (of all genders/in general) in parts of the Anglosphere being "sensitive". I'll quote myself.

"In parts of the Anglosphere, people seem more sensitive to an extent that in some cases I would consider them to be overreaching."

Of course, it's also influencing much of the world outside of it.

Although, there does seem to be a phenomenon where the combination of being young, female, and politically liberal makes someone particularly vulnerable to anxiety and depression. This also seems to have increased in recent years in the U.S.: https://jonathanhaidt.substack.com/p/mental-health-liberal-girls I would prefer that such trends be reversed.

EDIT: Apart from quoting part of my previous comment and stating a preference for there to be less anxiety and depression, everything in this comment is purely descriptive. Are people strong-downvoting out of offense at that? That's really not a good sign of community epistemic health.

If you do want my (normative) opinions on all this: I think it's beneficial and possible for the subset of people in the Anglosphere whom I was referring to, to reverse recent trends and become more resilient. There is currently a combination of high false-positive rates in perceiving malice and harm, plus expanded notions of both, which isn't very good for your democratic societies, in my opinion.
