All of aaron_mai's Comments + Replies

1
Yanni Kyriacos
14d
Good to know :) would you mind providing an example or hypothetical of the type of feedback?
2
NunoSempere
3mo
To elaborate a bit on the offer in case other people search the forum for printing to pdfs, this happens to be a pet issue. See here for a way to compile a document like this into a pdf like this one. I am very keen on the method. However, it requires people to be on Linux, which is a nontrivial difficulty. Hence the offer.
1
rileyharris
3mo
I really like this! 

Hey! I applied at the end of April and haven't received any notification like this, nor a rejection, and I'm not sure what this means for the status of my application. I emailed twice over the past 4 months but haven't received a reply :/

5
calebp
7mo
I'm sorry there was such a long delay and we missed your emails. I think you should have received an email just now explaining the situation with your application. If you didn't receive an email, please dm me on the forum and I'll chase that up.

Most of the researchers at GPI are pretty sceptical of AI x-risk.


Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people, like Paul Christiano, have such different risk estimates! Can someone facilitate and record a conversation?

David Thorstad, who worked at GPI, blogs about the reasons for his AI skepticism (and other EA critiques) here: https://ineffectivealtruismblog.com/

The object-level reasons are probably the most interesting and fruitful, but for a complete understanding of how the differences might arise, it's probably also valuable to consider:

  • sociological reasons
  • meta-level incentive reasons 
  • selection effects

An interesting exercise could be to go through these categories and elucidate 1-3 reasons in each for why AI alignment people might believe X and cause prioritisation people might believe not-X.

4
kokotajlod
1y
Alright, let's make it happen! I'll DM you + Timothy + anyone else who replies to this comment in the next few days, and we can arrange something.

I find it remarkable how little the people who most express worries about advanced AI say about the concrete mechanisms by which it would destroy the world. Am I right in thinking that? And if so, is this mostly because they are worried about infohazards and therefore don't share the concrete mechanisms they are worried about?

I personally find it pretty hard to imagine ways that AI would e.g. cause human extinction that feel remotely plausible (although I can well imagine that there are plausible pathways I haven't thought of!)

Relatedly, I wonde... (read more)

2
StevenKaas
1y
We tried to write a related answer on Stampy's AI Safety Info: How could a superintelligent AI use the internet to take over the physical world? We're interested in any feedback on improving it, since this is a question a lot of people ask. For example, are there major gaps in the argument that could be addressed without giving useful information to bad actors?

It seems a lot of people are interested in this one! For my part, the answer is "Infohazards kinda, but mostly it's just that I haven't gotten around to it yet." I was going to do it two years ago but never finished the story.

If there's enough interest, perhaps we should just have a group video call sometime and talk it over? That would be easier for me than writing up a post, and plus, I have no idea what kinds of things you find plausible and implausible, so it'll be valuable data for me to hear these things from you.

2
Esben Kran
1y
FLI's focus on lethal autonomous weapons systems (LAWS) generally seems like a good and obvious framing for a concrete extinction scenario. A world war today would without a doubt use semi-autonomous drones, with the possibility of a near-extinction risk from nuclear weapons. A similar war in 2050 seems very likely to use fully autonomous weapons developed under a race dynamic, leading to bad deployment practices and developmental secrecy (absent international treaties). With these types of "slaughterbots", there is the chance of dysfunction (e.g. misalignment) leading to full eradication. Besides this, cyberwarfare between agentic AIs might lead to broad-scale structural damage, and for that matter, the risk of nuclear war brought about through simple orders given to artificial superintelligences.

The main risks in the other scenarios mentioned in the replies here stem from the fact that we are creating something extremely powerful. One mishap with a nuke or a car can be extremely damaging, so one mishap (e.g. goal misalignment) with an even more powerful technology can lead to even more unbounded (to humanity) damage. And then there are the differences between nuclear and AI technologies that make the probability of this happening significantly higher. See Yudkowsky's list.
5
harfe
1y
Indeed, the specifics of killing all humans don't receive that much attention. I think this is partially because the concrete way of killing (or disempowering) all humans does not matter that much for practical purposes: once we have an AI that is smarter than all of humanity combined, wants to kill all humans, and is widely deployed and used, we are in an extremely bad situation, and clearly we should not build such a thing (for example, if you solve alignment, then you can build the AI without it wanting to kill all humans). Since the AI is smarter than humanity, it can come up with plans that humans do not consider, and I think there are multiple ways for a superintelligent AI to kill all humans. Jakub Kraus mentions some ingredients in his answer.

As for public communication, a downside of telling a story about a concrete scenario is that it might give people a false sense of security. For example, if the story involves the AI hacking into a lot of servers, then people might think the solution would be as easy as replacing all software in the world with formally verified and secure software. While such a defense might buy us some time, a superintelligent AI would probably find another way (e.g. earning money and buying servers instead of hacking into them).
8
JakubK
1y
Note that GPT-4 can already come up with plenty of concrete takeover mechanisms:

This 80k article is pretty good, as is this Cold Takes post. Here are some ways an AI system could gain power over humans:

  • Hack into software systems
  • Manipulate humans
  • Get money
  • Empower destabilising politicians, terrorists, etc
  • Build advanced technologies
  • Self improve
  • Monitor humans with surveillance
  • Gain control over lethal autonomous weapons
  • Ruin the water / food / oxygen supply
  • Build or acquire WMDs
1
Riccardo
1y
@aaron_mai @RachelM  I agree that we should come up with a few ways to make the dangers / advantages of AI very clear to people so we can communicate more effectively. You can make a much stronger point if you have a concrete scenario to point to as an example that feels relatable. I'll list a few I thought of at the end.

But the problem I see is that this space is evolving so quickly that things change all the time. Scenarios I can imagine being plausible right now might seem unlikely as we learn more about the possibilities and limitations. So just because in the coming months some of the examples I give below might become unlikely doesn't necessarily mean that therefore the risks / advantages of AI have also become more limited. That also makes communication more difficult, because if you use an "outdated" example, people might dismiss your point prematurely.

One other aspect is that we're at human-level intelligence and are limited in our reasoning compared to a smarter-than-human AI; this quote puts it quite nicely:

> "There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious." - Yudkowsky, Staring into the Singularity.

Two examples I can see as possible within the next few iterations of something like GPT-4:

  • malware that causes very bad things to happen (you can read up on Stuxnet to see what humans were already capable of 15 years ago, or if you don't like to read Wikipedia there is a great podcast episode about it)
    • detonate nuclear bombs
    • destroy the electrical grid
  • get access to genetic engineering like CRISPR and then
    • engineer a virus way worse than Covid
      • this virus doesn't even have to be deadly; imagine it causes sterilization of humans

Both of the above seem very scary to me because the

I agree, and I actually have the same question about the benefits of AI. It all seems a bit hand-wavy, like 'stuff will be better and we'll definitely solve climate change'. More specifics in both directions would be helpful.

8
Ian Turner
1y
@EliezerYudkowsky has suggested nanobots and I could think of some other possibilities but I think they're infohazards so I'm not going to share them. More broadly, my expectation is that a superintelligent AI would be able to do anything that a large group of intelligent and motivated humans could do, and that includes causing human extinction.

I wonder to what extent people take the alignment problem to be (i) the problem of creating an AI system that reliably does, or tries to do, what its operators want it to do, as opposed to (ii) the problem of creating an AI system that does, or tries to do, what is best "aligned with human values" (whatever this precisely means).

I see both definitions being used, and they feel importantly different to me: if we solve the problem of aligning an AI with some operator, this still seems far away from safe AI. In fact, when I try to imagine how an AI might cause a catastr... (read more)

3
Jörn Stöhler
1y
I mostly back-chain from a goal that I'd call "make the future go well". This usually maps to value-aligning AI with broad human values, so that the future is full of human goodness and not tainted by my own personal fingerprints.

Actually, ideally we would first build an AI that we have enough control over that the operators can make it do something less drastic than determining the entire future of humanity, e.g. slowing AI progress to a halt until humanity pulls itself together and figures out safer alignment techniques. That usually means making it corrigible or tool-like, instead of letting it maximize its aligned values.

So I guess I ultimately want (ii) but really hope we can get a form of (i) as an intermediate step. When I talk about the "alignment problem" I usually refer to the problem that by default we get neither (i) nor (ii).

I'm pretty late to the party (perhaps even so late that people forgot that there was a party), but just in case someone is still reading this, I'll leave my 2 cents on this post. 

[Context: A few days ago, I released a post that distils a paper by Kenny Easwaran and others, in which they propose a rule for updating on the credences of others. In a (tiny) nutshell, this rule, "Upco", asks you to update on someone's credence in proposition A by multiplying your odds with their odds; a small worked sketch follows below.]

 1. Using Upco suggests some version of strong epistemic modesty: wh... (read more)
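For readers who want to see the arithmetic, here is a minimal sketch of the rule as described above (the function names are mine, and this ignores the paper's treatment of priors and independence assumptions):

```python
def to_odds(p: float) -> float:
    """Convert a probability to odds in favour."""
    return p / (1 - p)

def to_prob(odds: float) -> float:
    """Convert odds in favour back to a probability."""
    return odds / (1 + odds)

def upco(my_credence: float, peer_credences: list[float]) -> float:
    """Upco as described above: multiply your odds by each peer's odds."""
    odds = to_odds(my_credence)
    for c in peer_credences:
        odds *= to_odds(c)
    return to_prob(odds)

# Example: I think A is 60% likely; a peer thinks 70%.
# Odds: 1.5 * (7/3) = 3.5, i.e. a posterior of about 0.778.
print(upco(0.6, [0.7]))  # ~0.778
```

Note that, on this reading, your own credence enters the product on exactly the same footing as every peer's, and each additional peer can simply be multiplied in, which is where the strong-modesty flavour mentioned in point 1 comes from.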

Thanks, this seems useful! :) One suggestion: if there are similar estimates available for other causes, could you add at least one to the post as a comparison? I think this would make your numbers more easily interpretable.

Hey Daniel, 

thanks for engaging with this! :)

You might be right that the geometric mean of odds performs better than Upco as an updating rule, although I'm still unsure exactly how you would implement it. If you used the geometric mean of odds as an updating rule for a first person and then learn the credence of another person, would you then change the weight (in the exponent) you gave the first peer to 1/2 and sort of update as though you had just learnt the first and second person's credences? That seems pretty cumbersome, as you'd have to keep track o... (read more)
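To illustrate the bookkeeping worry raised above, here is a small sketch (my own construction, not from the thread) of equal-weight geometric-mean-of-odds pooling over all credences, your own included. Adding a second peer changes every exponent from 1/2 to 1/3, so you have to re-pool from the original credences rather than fold the newcomer in incrementally the way Upco allows:

```python
from math import prod

def to_odds(p: float) -> float:
    return p / (1 - p)

def to_prob(odds: float) -> float:
    return odds / (1 + odds)

def geo_mean_pool(credences: list[float]) -> float:
    """Equal-weight geometric mean of odds over all credences (own + peers)."""
    odds = [to_odds(c) for c in credences]
    pooled = prod(odds) ** (1 / len(odds))
    return to_prob(pooled)

# With one peer, each credence gets weight 1/2:
print(geo_mean_pool([0.6, 0.7]))       # ~0.65
# When a second peer arrives, the weights all change to 1/3,
# so the pooling has to be redone from scratch:
print(geo_mean_pool([0.6, 0.7, 0.9]))  # ~0.76
```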

Thanks a lot for the update!  I feel excited about this project and grateful that it exists!

As someone who stayed at CEEALAR for ~6 months over the last year, I thought I'd share some reflections that might help people decide whether going to the EA Hotel is a good decision for them. I'm sure experiences vary a lot, so, general disclaimer, this is just my personal data point and not some broad impression of the typical experience.

Some of the best things that happened as a result of my stay:

  1.  I made at least three close friends I'm still in re
... (read more)

Yes, that would be awesome!

Why is there so much more talk about the existential risk from AI as opposed to the amount by which individuals (e.g. researchers) should expect to reduce these risks through their work? 

The second number seems much more decision-guiding for individuals than the first. Is the main reason that it's much harder to estimate? If so, why?

5
Greg_Colbourn
1y
Here is an attempt by Jordan Taylor: Expected ethical value of a career in AI safety (which you can plug your own numbers into).

(Hastily written, sry)

I would love to see more of the theories of change that researchers in EA have for their own careers! I'm particularly interested to see them in Global Priorities Research as it's done at GPI (because I find that both extremely interesting and I'm very uncertain how useful it is apart from field-building).

Two main reasons: 

  1. It's not easy at all (in my experience) to figure out which claims are actually decision relevant in major ways. Seeing these theories of change might make it much easier for junior researchers to develop a "tast
... (read more)

How do EA grantmakers take expert or peer opinions on decision-relevant claims into account? More precisely, if there's some claim X that's crucial to an EA grantmaker's decision and probabilistic judgements from others are available on X (e.g. from experts), how do EA grantmakers tend to update on those judgements?

Motivation: I suspect that in these situations it's common to just take some weighted average of the various credences and use that as one's new probability estimate. I have some strong reasons to think that this is incompatible with bayesian updating (post coming soon).

2
Lorenzo Buonanno
1y
Do you have a specific example in mind? From what I see, there are many different kinds of EA grantmakers, and they seem to be using different processes, especially in the longtermist vs neartermist space. I don't think there's a single general answer to "how do grantmakers update on expert judgment".

I wonder if it would be good to create another survey to get some data not only on who people update on but also on how they update on others (regarding AGI timelines or something else). I was thinking of running a survey where I ask EAs about their prior on different claims (perhaps related to AGI development), present them with someone's probability judgements and then ask them about their posterior.  That someone could be a domain expert, non-domain expert (e.g., professor in a different field) or layperson (inside or outside EA). 

At least if ... (read more)

Cool idea to run this survey and I agree with many of your points on the dangers of faulty deference.

A few thoughts:

(Edit: I think my characterisation of what deference means in formal epistemology is wrong. After a few minutes of checking this, I think what I described is a somewhat common way of modelling how we ought to respond to experts)

  1. The use of the concept of deference within the EA community is unclear to me. When I encountered the concept in formal epistemology I remember "deference to someone on claim X" literally meaning (a) that you adopt t

... (read more)
2
Evan_Gaensbauer
1y
In addition to the EA Forum topic entry, there is a forum post from a few months ago by Owen Cotton-Barratt, a researcher on topics related to epistemology in EA, reviewing a taxonomy of common types of deference in EA and open issues with them, which I found informative: https://forum.effectivealtruism.org/posts/LKdhv9a478o9ngbcY/deferring I wrote another comment below that touched on deference, though I wrote it more quickly than carefully and I might have used the concept in a confused way, as I don't have much formal understanding of deference outside of EA, so don't take my word for it. How deference as a concept has been used differently in EA over the last year has seemed ambiguous to me, so I'm inclined to agree that progress in EA's understanding of deference could be made through your challenge to the current understanding of the subject.
2
Sam Clarke
1y
Thanks for your comment! I agree that the concept of deference used in this community is somewhat unclear, and a separate comment exchange on this post further convinced me of this. It's interesting to know how the word is used in formal epistemology. Here is the EA Forum topic entry on epistemic deference. I think it most closely resembles your (c). I agree there's the complicated question of what your priors should be, before you do any deference, which leads to the (b) / (c) distinction.

However, even if we'd show that the repugnance of the repugnant conclusion is influenced in these ways or even rendered unreliable, I doubt the same would be true for the "very repugnant conclusion":

for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast amount of mildly-satisfied lizards and billions of suffering people, such that Z+ is better than A.

(Credit to Joe Carlsmith, who mentioned this on some podcast)

2
Luca Stocco
2y
You raised some interesting points! It seems plausible that the framing effect could be at play here and that different people would draw the line between a life that's worth living and one that's not at different points. I don't know of any literature on this, but maybe I'd take a look at the Happier Lives Institute's work. And I'll need to think more seriously about the very repugnant conclusion. That's a tough one!

Thanks for the post!

I'm particularly interested in the third objection you present - that the value of "lives barely worth living" may be underrated.

I wonder to what extent the intuition that world Z is bad compared to A is influenced by framing effects. For instance, if I think of "lives net positive but not by much", or something similar, this seems much more valuable than "lives barely worth living", although it means the same in population ethics (as I understand it).

I'm also sympathetic to the claim that one's response to world Z may be affected by o... (read more)

4
aaron_mai
2y
However, even if we'd show that the repugnance of the repugnant conclusion is influenced in these ways or even rendered unreliable, I doubt the same would be true for the "very repugnant conclusion": for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast amount of mildly-satisfied lizards and billions of suffering people, such that Z+ is better than A. (Credit to Joe Carlsmith, who mentioned this on some podcast)

I agree that it seems like a good idea to get somewhat familiar with that literature if we want to translate "longtermism" well.

I think I wouldn't use "Langzeitethik", as this suggests, as you say, that longtermism is a field of research. In my mind, "longtermism" typically refers to a set of ethical views or a group of people/institutions. Probably people sometimes use the term to refer to a research field, but my impression is that this is rather rare. Is that correct? :)

Also, I think that a new term - like "Befürworter der Langzeitverantwortung" - which is ... (read more)

1
constructive
2y
I agree. I think it's interesting that the field of "Zukunftsethik" exists, but I wouldn't use the term as a name for a movement.

Out of curiosity: how do you adjust for karma inflation? 

2
JP Addison
2y
Relative to the average of roughly the past month. It's not based on standard deviation, just karma / (average karma).
1
Karthik Tadepalli
2y
I would imagine the forum has a record of the total voting power of all users, which is increasing over time, and the karma can be downscaled by this total.
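For concreteness, a tiny sketch of the two normalisations discussed in these replies (variable names are mine; this only illustrates the arithmetic, not the Forum's actual implementation):

```python
def inflation_adjusted_karma(karma: float, avg_karma_this_month: float) -> float:
    """JP's description: divide a post's karma by the average karma of ~that month."""
    return karma / avg_karma_this_month

def power_scaled_karma(karma: float, total_voting_power: float) -> float:
    """Karthik's suggestion: downscale by the (growing) total voting power of all users."""
    return karma / total_voting_power
```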

This seems a bit inaccurate to me in a few ways, but I'm unsure how accurate we want to be here.

First, when the entry talks about "consequentialism", it seems to identify it with a decision procedure: "Consequentialists are supposed to estimate all of the effects of their actions, and then add them up appropriately". In the literature, there is usually a distinction between consequentialism as a criterion of rightness and as a decision procedure, and it seems to me like many endorse the former and not the latter.

Secondly, it seems to identify ... (read more)

Red team: is it actually rational to have imprecise credences in the possible long-run/indirect effects of our actions, rather than precise ones?

Why: my understanding from Greaves (2016) and Mogensen (2020) is that imprecise credences have been necessary to argue for the cluelessness worry.

2
MichaelStJules
2y
This came up here. This paper was mentioned. Imo, there are more important things than ensuring you can't be Dutch booked, like having justified beliefs and avoiding fanaticism. Also, Dutch books are hard to guarantee against with unbounded preferences anyway.

Thanks! :) And great to hear that you are working on a documentary film for EA, excited to see that!

Re: EA-aligned Movies and Documentaries 

I happen to know a well-established documentary filmmaker whose areas of interest overlap with EA topics. I want to pitch him to work on a movie about x-risks. Do you have any further info about the kinds of documentaries you'd like to fund? Anything that's not obvious from the website.

2
Scott Mortensen
2y
Wish you guys luck! I am a filmmaker as well, working on Web3  and a documentary film for EA. Hope we somehow connect down the line. Serious serendipity going on right now with networking and resources. Time to influence the culture for the better for sure.

Hey! I wonder how flexible the starting date is. My semester ends mid-July, so I couldn't start before. This is probably the case for most students from Germany. Is that too late?

2
Chi
2y
Thanks for asking! We would definitely consider later starts if people aren't available earlier, and I would be surprised if we rejected a strong candidate just on the basis that they are only available a month later. There's some chance we would shorten the default fellowship length for them (not necessarily by the same number of weeks that they would start later), but we would discuss this with them first. I think if they would only accept the fellowship if it starts later and is the original 9 weeks long, this would increase the threshold for accepting them somewhat, but again, I would be surprised if we rejected a very strong candidate just on that basis. (I think it would only matter for edge cases.) It also depends a bit on what other applications we get: e.g. if we get many strong applications from Germans who can only start later, we would probably be much happier to accommodate all of them.

Thanks for the post!

Does this apply at all to undergrads or graduate students who haven't  published any research yet?

 

2
FJehn
2y
This is a bit harder, as awards are usually given for a specific piece of research, and as long as you haven't produced anything, you cannot get an award. However, there are some opportunities. For example, at conferences there are often things like poster awards for work-in-progress research you can participate in.



There is a German EA Podcast that Lia Rodehorst and I created, called "Gutes Einfach Tun". 
Here is the link.

Also, Sarah Emminghaus recently launched a German EA Podcast called "WirklichGut" (link here).

Hey Pablo,

Thanks a lot for the answer, I appreciate you taking the time! I think I now have a much better idea of how these calculations work (and I'm much more skeptical, tbh, because there are so many effects that are not captured in the expected value calculations that might make a big difference).

Also thanks for the link to Holdens post!

2
pmelchor
2y
There is no perfect calculation of all the effects of a program but I think GiveWell's effort is impressive (and, as far as I can tell, unmatched in terms of rigor). I think the highest value is in the ability to differentiate top programs from the rest, even if the figures are imperfect.

Hi Johannes!

I appreciate you taking the time.

"Linch's comment on FP funding is roughly right, for FP it is more that a lot of FP members do not have liquidity yet"

I see, my mistake! But is my estimate sufficiently off to overturn my conclusion?

" There were also lots of other external experts consulted." 

Great! Do you agree that it would be useful to make this public? 

"There isn't, as of now, an agreed-to-methodology on how to evaluate advocacy charities, you can't hire an expert for this." 

And the same is true for evaluating cost-effectiven... (read more)

"The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or see if they had been interpreted correctly. "

>> Thanks for clarifying! I wonder if it would be even better if the review was done by people outside the EA community. Maybe the sympathy of belonging to the same social group and shared, distinctive assumptions (assuming they exist) make people less likely to spot errors? This is pret

... (read more)
2
MichaelPlant
2y
I can't immediately remember where I've seen this discussed before, but a concern I've heard raised is that it's quite hard to find people who (1) know enough about what you're doing to evaluate your work but (2) are not already in the EA world.

Hmm. Well, I think you'd have to be quite a big and well-funded organisation to do that. It would be a lot of management time to set up and run a competition, one which wouldn't obviously be that useful (in terms of the value of information, such a competition is more valuable the worse you think your research is). I can see organisations quite reasonably thinking this wouldn't be a good staff priority vs other things. I'd be interested to know if this has happened elsewhere and how impactful it had been.

That's right. People who were suspicious of your research would be unlikely to have much confidence in the assessment of someone you paid.

Hi Michael!

"You only mention Founders Pledge, which, to me, implies you think Founders Pledge don't get external reviews but other EA orgs do."

> No, I don't think this, but I should have made it clearer. I focused on FP because I happened to know that they didn't have an external, expert review of one of their main climate-charity recommendations, CATF, and because I couldn't find any report on their website about an external, expert review.
I think my argument here holds for any other similar organisation. 

"This doesn't seem right, because ... (read more)

2
MichaelPlant
2y
Gotcha. I mean, how long is a piece of string? :) The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or check whether they had been interpreted correctly.

Re making things public, that's a bit trickier than it sounds. Usually I'd leave a bunch of comments in a Google Doc as I went, which wouldn't be that easy for a reader to follow. You could ask someone to write a prose evaluation, basically like an academic journal review report, but that's quite a lot more effort and not something I've been asked to do. In HLI, we have asked external academics to do that for us for a couple of pieces of work, and we recognise it's quite a big ask vs just leaving gdoc comments. The people we asked were gracious enough to do it, but they were basically doing us a favour and it's not something we could keep doing (at least with those individuals).

I guess one could make them public - we've offered to share ours with donors, but none have asked to see them - but there's something a bit weird about it: it's like you're sending the message "you shouldn't take our word for it, but there's this academic who we've chosen and paid to evaluate us - take their word for it".

I'm not sure, but according to Wikipedia, in total ~3 billion dollars have been pledged via Founders Pledge. Even if that doesn't increase and only 5% of that money is donated according to their recommendations, we are still in the ballpark of around a hundred million USD, right?

On the last question I can only guess as well. So far around 500 million USD have been donated via Founders Pledge. Founders Pledge has existed for around 6 years, so that's roughly 83 million USD per year on average since it started. It seems likely to me that at least 5% have been allocated... (read more)
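For what it's worth, the ballpark arithmetic behind the two estimates above, using the figures quoted in the comments (which may be out of date):

$$0.05 \times \$3\,\text{B} = \$150\,\text{M}, \qquad \frac{\$500\,\text{M}}{6\ \text{years}} \approx \$83\,\text{M per year}.$$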

2
Linch
2y
I'd be interested in having someone from Founders Pledge comment. Many EA orgs are in a position where there are a lot of dollars committed, but people don't know where to give, so they hold off; hence why the EA movement as a whole has double-digit billions of dollars but only gave ~$400M last year.

I actually think there is more needed. 

If “it's a mistake not to do X” means “it's in alignment with the person's goal to do X”, then I think there are a few ways in which the claim could be false.

I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:

  1. you are already close to optimal effectiveness and the increase in effectiveness by some additional research in EA is so small that you would be maximizing by just using that time to earn money and donate it or have
... (read more)

I'd say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C >= A, then pursuing the project of EA wouldn't be worth it. If, however, C < A, then pursuing the project of EA would be worth it, right?

To be more concrete let us say that the difference in value between the commonsense distribution of resources to do good and th... (read more)

5
Benjamin_Todd
3y
I like the idea of thinking about it quantitatively like this. I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread. The importance of EA is proportional to the product of the degrees to which the three premises hold.

Do you still recommend these approaches, or has your thinking shifted on any of them? Personally, I'd be especially interested in whether you still recommend "Produce a shallow review of a career path few people are informed about, using the 80,000 Hours framework."

Hey, thank you very much for the summary!

I have two questions:

(1) How should one select which moral theories to use in one's evaluation of the expected choice-worthiness of a given action?

"All" seems impossible, supposing the set of moral theories is indeed infinite; "whatever you like" seems to justify basically any act by just selecting or inventing the right subset of moral theories; "take the popular ones" seems very limited (admittedly, I dont have an argument against that option, but is there a positive one for it?)

(2)... (read more)

3
MichaelA
4y
(I'll again just provide some thoughts rather than actual, direct answers.)

Here I'd again say that I think an analogous question can be asked in the empirical context, and I think it's decently thorny in that context too. In practice, I think we often do a decent job of assigning probabilities to many empirical claims. But I don't know if we have a rigorous theoretical understanding of how we do that, or of why that's reasonable, or at least of how to do it in general. (I'm not an expert there, though.) And I think there are some types of empirical claims where it's pretty hard to say how we should do this.[1] For some examples I discussed in another post:

  • What are the odds that “an all-powerful god” exists?
  • What are the odds that “ghosts” exist?
  • What are the odds that “magic” exists?

What process do we use to assign probabilities to these claims? Is it a reasonable process, with good outputs? (I do think we can use a decent process here, as I discuss in that post; I'm just saying it doesn't seem immediately obvious how one does this.)

I do think this is all harder in the moral context, but some of the same basic principles may still apply. In practice, I think people often do something like arriving at an intuitive sense of the likelihood of the different theories (or maybe how appealing they are). And this in turn may be based on reading, discussion, and reflection. People also sometimes/often update on what other people believe. I'm not sure if this is how one should do it, but I think it's a common approach, and it's roughly what I've done myself.

[1] People sometimes use terms like Knightian uncertainty, uncertainty as opposed to risk, or deep uncertainty for those sorts of cases. My independent impression is that those terms often imply a sharp binary where reality is more continuous, and it's better to instead talk about degrees of robustness/resilience/trustworthiness of one's probabilities. Very rough sketch: sometimes I might be very
3
MichaelA
4y
Glad you found the post useful :)

Yeah, I think those are both very thorny and important questions. I'd guess that no one would have amazing answers to them, but that various other EAs would have somewhat better answers than me. So I'll just make a couple quick comments.

I think we could ask an analogous question about how to select which hypotheses about the world/future to use in one's evaluation of the expected value of a given action, or just in evaluating what will happen in future in general. (I.e., in the empirical context, rather than the moral/normative context.) For example, if I want to predict the expected number of readers of an article, I could think about how many readers it'll get if X happens and how many it'll get if Y happens, and then think about how likely X and Y seem. X and Y could be things like "Some unrelated major news event happens to happen on the day of publication, drawing readers away", or "Some major news event that's somewhat related to the topic of the article happens soon-ish after publication, boosting attention", or "The article is featured in some newsletter/roundup."

But how many hypotheses should I consider? What about pretty unlikely stuff, like Obama mentioning the article on TV? What about really outlandish stuff that we still can't really assign a probability of precisely 0, like a new religion forming with that article as one of its sacred texts?

Now, that response doesn't actually answer the question at all! I don't know how this problem is addressed in the empirical context. But I imagine people have written and thought a bunch about it in that context, and that what they've said could probably be ported over into the moral context. (It's also possible that the analogy breaks down for some reason I haven't considered.)