All of TyQ's Comments + Replies

We need 40,000h or maybe even 20,000h

Glad to hear that you found this useful!

 Do you know of any companies that are hiring HRI designers?

Sorry, I know nothing about the HRI space :(

We need 40,000h or maybe even 20,000h

Hi Martyna, this post and its comments might interest you.

Also, something else that comes to mind: Andrew Critch thinks that working on Human-Robot Interaction may be very useful to AI Safety. Note that he isn't talking solely about robots, but also about human-machine interaction in general (that's how I interpret it; I may well be wrong):

HRI research is concerned with designing and optimizing patterns of interaction between humans and machines—usually actual physical robots, but not always.

Not sure whether other AI Safety researchers would agree on t... (read more)

1 · martyna · 3mo
Thank you so much TyQ! I'll reach out to Lotte next week; it seems like we will have a lot to discuss! Human-robot interaction is something I never considered, but it sounds very interesting. HMI is basically the foundation of my work, but it is applied very widely, from physical design (elevators, printers, cars) to SW design, and I'm in the second sector atm. But boy, do I dream of getting to the first one. Do you know of any companies that are hiring HRI designers?
The Case for Rare Chinese Tofus

Thanks for the post, it's really exciting!

One very minor point:

In China, tofu is a symbol of poverty—a relic from when ordinary people couldn’t afford meat. As such, ordering tofu for guests is often seen as cheap and disrespectful.

I agree that this is somewhat true, but stating it like this seems a bit unfair. Ordering tofu for guests seems fine to me; it only gets problematic when you order way too much of it - in the same way that ordering nothing but rice for guests is extremely disrespectful. (Conflict of interest: I'm a tofu lover!)

Anyway, I really like your idea! Good luck :)

7 · George Stiffman · 4mo
Fair enough! I definitely stated that point too strongly; it's more of a "if you just order tofu for guests, without much meat/seafood, it could come across as rude". Thanks for the pointer! And glad to meet another tofu lover :)
Open Thread: Spring 2022

Thanks for the suggestion, but I'm currently in college, so it's impossible for me to move :)

Open Thread: Spring 2022

Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs that we claim to hold, and explicitly include non-humans in our considerations.

On your specific argument that longtermist work doesn't affect non-humans:

  • X-risks aren't the sole focus of longtermism. IMO work in the S-risk space takes non-humans (including digital minds) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
  • I think X-risk work does affect non-humans. Linch's comment mentions one possi
... (read more)
Open Thread: Spring 2022

From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether doing this will increase or decrease your speciesist bias, and whether doing this makes you uncomfortable), instead of the amount of suffering they directly produce or reduce. After all, your major impact on the world is from your words and actions, not what you eat.

That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what ar... (read more)

1 · Lucas Lewit-Mendes · 4mo
Thanks TianyiQ, these are really interesting and useful thoughts!
1 · utilitarian01 · 4mo
Might be irrelevant, but have you considered moving to the US for the increased salary?
Open Thread: Spring 2022

Currently, EA resources are not gained gradually year by year; instead, they arrive in big leaps (think of Open Phil and FTX). Therefore, it might not make sense to accumulate resources for several years and give them out all at once.

In fact, there is a call for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are expected to be funded not by accumulating resources over the years, but by directly deploying existing resources). I'm not sure I understand your second point, though.

A guided cause prioritisation flowchart

Thanks for the reply, your points make sense! There is certainly a problem of "degree" to each of the concerns I wrote about in the comment, so arguments both for and against each of them should be taken into account. (To be clear, I wasn't raising my points to dismiss your approach; instead, they're things that I think need to be taken care of, if we're to take such an approach.)

I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

Caveat: I haven't spent mu... (read more)

A guided cause prioritisation flowchart

Interesting idea, thanks for doing this! I agree it's good to have more approachable cause prioritization models, but there are also associated risks to be careful about:

  • A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in graphic form can spread really fast, and once it's out on the Internet it can't be taken back.
  • A model made by a few individuals or some central organisation may run the risk of deviating from the views of the majority of EAs; in
... (read more)
5 · Jack Malde · 5mo
Thanks for this, you raise a number of useful points.

I guess this risk could be mitigated by ensuring the model is frequently updated and includes disclaimers. I think this risk is faced by many EA orgs, for example 80,000 Hours, but that doesn't stop them from publishing advice which they regularly update.

I like that idea, and I certainly don't think my model is anywhere near final (it was just my preliminary attempt with no outside help!). There could be a process of engagement with prominent EAs to finalise a model.

Also fair. However, it seems that certain EA orgs such as 80,000 Hours do adopt certain views, naturally excluding other views (for which they have been criticised). Maybe it would make more sense for such a model to be owned by an org like 80,000 Hours, which is open about their longtermist focus for example, rather than CEA, which is supposed to represent EA as a whole.

As I said to alexjrl, my idea for a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple. I don't think a flowchart can be 100% prescriptive and final; there are too many nuances to consider. I just want it to raise key considerations for EAs to consider. For example, I think it would be fine for an EA to end up at a certain point in the flowchart and then think to themselves that they should actually choose a different cause area, because there is some nuance the flowchart didn't consider that means they ended up in the wrong place. That's fine - but it would still be good to have a systematic process, in my opinion, that ensures EAs consider some really key considerations. Feedback like this is useful and could lead to updating the flowchart itself.

I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

Fair point. As
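
To make the "guided flowchart" idea concrete, here is a minimal sketch of one way it could be represented: a tree of yes/no questions whose leaves are cause areas, with each question pointing into the accompanying guidance where the nuances live. All questions, guidance paths, and cause areas below are hypothetical placeholders of mine, not the actual flowchart from the post.

```python
# A minimal, hypothetical "guided flowchart": internal nodes are questions,
# each pointing into the accompanying guidance document; leaves are suggested
# cause areas. Everything here is a placeholder for illustration only.

FLOWCHART = {
    "question": "Do you give significant weight to future generations?",
    "guidance": "guide.md#longtermism",  # hypothetical guidance anchor
    "yes": {
        "question": "Do you prioritise reducing suffering over creating value?",
        "guidance": "guide.md#s-risks",
        "yes": "s-risk reduction",
        "no": "existential risk reduction",
    },
    "no": "near-term causes (e.g. global health, animal welfare)",
}

def walk(node, answers):
    """Walk the flowchart with a list of 'yes'/'no' answers; leaves are strings."""
    for a in answers:
        if not isinstance(node, dict):
            break
        node = node[a]
    return node

print(walk(FLOWCHART, ["yes", "no"]))  # -> existential risk reduction
```

Keeping the guidance pointers out of the tree itself matches the idea above: the flowchart stays high-level and simple, the nuances live in the accompanying document, and updating either one is cheap.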
Monitoring Wild Animal Welfare via Vocalizations

While, to my knowledge, an artificial neural network has not been used to distinguish between large numbers of species (the most I found was fourteen, by Ruff et al., 2021)

Here is one study distinguishing between 24 species using bioacoustic data. I stumbled upon this study totally by coincidence, and I don't know whether there are other, larger-scale studies.

The study was carried out by the bioacoustics lab at MSR. It seems like some of their other projects might also be relevant to what we're discussing here (low confidence, just speculating).
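
As an aside, the kind of classifier under discussion is straightforward to sketch. Below is a toy illustration of my own (not the pipeline from the linked study or MSR's lab): log-mel spectrograms fed into a small convolutional network with one output per species. It assumes PyTorch and torchaudio; the 24-class setup mirrors the linked study, and all layer sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn
import torchaudio

N_SPECIES = 24       # mirrors the 24-species study linked above
SAMPLE_RATE = 22050  # assumed sample rate of the recordings

# Waveform -> log-mel spectrogram, a standard front end in bioacoustics.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class SpeciesCNN(nn.Module):
    """Toy CNN mapping a raw waveform to one logit per species."""
    def __init__(self, n_classes: int = N_SPECIES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # collapse to a fixed-size embedding
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> spectrogram: (batch, 1, mels, frames)
        spec = to_db(mel(waveform)).unsqueeze(1)
        return self.classifier(self.features(spec).flatten(1))

# One forward pass on two dummy 5-second clips, just to show the shapes.
model = SpeciesCNN()
logits = model(torch.randn(2, SAMPLE_RATE * 5))
print(logits.shape)  # torch.Size([2, 24]) -- one score per species
```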

Exposure to 3m Pointless viewers- what to promote?

Maybe it would be better to say less about "do good with your money" and more about "do good with your time"? (To counter the misconception that EA is all about E2G.)

Also, agreed that the message should be short and simple.

Which World Gets Saved

Closely related, and also important, is the question of "which world gets precluded". Different possibilities include:

  1. By reducing extinction risk from a (hypothetical) scenario in which Earth explodes and falls into pieces, we preclude a world in which there's no life (and therefore no powerful agent) on what previously was Earth.
  2. By reducing extinction risk from pandemics, we preclude a world in which there are no humans on Earth, but possibly other intelligent species that have evolved to fill the niche previously occupied by humans.
  3. By reducing extinction ri
... (read more)
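
A toy calculation can make the point above concrete (my own framing, with made-up placeholder values, nothing from the original post): the counterfactual value of preventing a given extinction scenario is the value of the surviving world minus the value of the world that scenario would otherwise have left behind.

```python
# Toy numbers only: how "which world gets precluded" changes the
# counterfactual value of reducing a given extinction risk.
VALUE_IF_WE_SURVIVE = 1.0  # normalized value of the world where humanity survives

# Assumed values of the worlds each scenario would leave behind (placeholders).
precluded_world_value = {
    "Earth explodes":      0.0,  # scenario 1: no life, no powerful agents remain
    "extinction pandemic": 0.4,  # scenario 2: other intelligent species may evolve
}

for scenario, v in precluded_world_value.items():
    print(f"preventing '{scenario}': counterfactual value = {VALUE_IF_WE_SURVIVE - v:.1f}")

# preventing 'Earth explodes': counterfactual value = 1.0
# preventing 'extinction pandemic': counterfactual value = 0.6
```

Under these placeholder numbers, the two risk reductions differ in value even if they are equally likely to succeed, which is the sense in which the precluded world matters.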
Shortform on Superrationality

After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)

However:

  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness)
... (read more)
Shortform on Superrationality

One doubt about superrationality:

(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)

First I present an inaccurate summary of what I want to say, to give a rough idea:

  • The claim that "if I choose to do X, then my identical counterpart will also do X" seems to imply (though not necessarily; see the example for details) that there is no free will. But if we indeed assume determinism, then no decision theory is practically meaningful.

Then I shall e... (read more)

1 · TyQ · 6mo
After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)

However:

  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness) could be correlated.
  • The extent to which we update our prior (on the likelihood of correlated inputs) might be small, especially if we consider it unlikely that inputs could be correlated. This may lead to a much smaller weight of superrational considerations in our decision-making.
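
The size of that update can be made concrete with a toy Bayesian calculation (my own illustration, with placeholder likelihoods; "correlated" here bundles both hypotheses above, similar source code and correlated sources of randomness):

```python
def posterior_correlated(prior: float, n_agreements: int,
                         p_agree_if_correlated: float = 0.9,
                         p_agree_if_independent: float = 0.5) -> float:
    """Update P(Alice and Bob are correlated) after observing n past
    decisions on which they agreed. Each agreement is treated as
    independent evidence -- a simplifying assumption."""
    like_corr = prior * p_agree_if_correlated ** n_agreements
    like_indep = (1 - prior) * p_agree_if_independent ** n_agreements
    return like_corr / (like_corr + like_indep)

# A mildly skeptical prior moves a lot after ten observed agreements...
print(posterior_correlated(0.01, 10))   # ~0.78
# ...but a near-zero prior barely moves, matching the last point above:
print(posterior_correlated(1e-6, 10))   # ~0.0004
```

So how much weight superrational considerations get ends up hinging on the prior over correlated inputs, exactly as the second bullet suggests.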
Artificial Suffering and Pascal's Mugging: What to think?

Thanks for the answers, they all make sense, and I upvoted all of them :)

So for a brief summary:

  • The action that I described in the question is far from optimal under the EV framework (CarlShulman & Brian_Tomasik; see the toy comparison below), and
  • Even if it is optimal, a utilitarian may still have ethical reasons to reject it, if he or she:
    • endorses some kind of non-traditional utilitarianism, most notably SFE (TimothyChan); or
    • considers the uncertainty involved to be moral (instead of factual) uncertainty (Brian_Tomasik).
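
For the first bullet, a toy comparison (placeholder numbers of my own, not from the answers) shows how a mugging-style action can be dominated under a plain expected-value calculation:

```python
# (probability of success, value if it succeeds) -- all numbers made up
actions = {
    "mugging-style action": (1e-12, 1e9),  # tiny chance of an astronomical payoff
    "conventional action":  (0.10,  1e4),  # modest chance of a modest payoff
}

for name, (p, value) in actions.items():
    print(f"{name}: EV = {p * value:g}")

# mugging-style action: EV = 0.001
# conventional action: EV = 1000   -> dominates; the mugging is far from optimal
```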
Consciousness research as a cause? [asking for advice]

Building conscious AI (in the form of brain emulations or other architectures) could possibly help us create a large number of valuable artificial beings. Wildly speculative indulgence: being able to simulate humans and their descendants could be a great way to make the human species more robust to most existing existential risks (if it is easy to create artificial humans that can live in simulations, then humanity could become much more resilient)

That would pose a huge risk of creating astronomical suffering too. For example, if someone decided to run a conscious simulation of natural history on Earth, that would be a nightmare for those who work on reducing s-risks.

Why doesn't EA Fund support Paypal?

Thanks for the detailed answer!

Why doesn't EA Fund support Paypal?

Good idea, I'll consider that. Thanks!