All of Siao Si's Comments + Replies

I sometimes downvote comments and posts mostly because I think they have "too much" karma - comments and posts I might upvote or not vote on if they had less karma. As I look at the comment now it has 2 karma with 11 votes - maybe at some point it had more and people voted it back to 2?

I would have downvoted this comment if it had more karma because I think Deborah's comment can be read as antagonistic: "utterly blind", "dire state", "for heaven's sake!", calling people ignorant. In this context I didn't read it this way, but I often vote based on "what would the forum be like if all comments were more like this" rather than "what intentions do I think this person has".

1
Deborah W.A. Foulkes
12d
Thank you for your feedback, text has been revised.

Hi Deborah, I also disagree with this comment (and have disagree voted but not downvoted it). Here are some of my reasons:

  • Without getting too much into it, I think the concerns with the population growth/technological change trend are somewhat distinct from problems relating to the current population size of the earth. One can be concerned that the population replacement rate is dropping too fast while also thinking that the current global population is too large.
  • I think that, while the summarised breakdown you have under the overpopulation project link yo
... (read more)
2
Deborah W.A. Foulkes
12d
Hi Siao Si, thanks for your detailed response. I'll try to address some of your points, though not in the order you state them. Firstly, it is necessary to treat the issues of carrying capacity and optimum human population differently; they are not the same. It is also incorrect to say that many agree that 10 billion is the carrying capacity. The estimate of how many humans earth can support is in flux: on the one hand, technological developments e.g. to improve distribution of resources could extend carrying capacity; on the other hand, the accelerating ecological degradation of the planet (including but not solely due to the climate crisis) is resulting in a shrinkage of the land area able to support crop production and we are currently on a trajectory of collapse in ocean fisheries due to unsustainable fishing practices. Secondly, there is huge variation in experts' estimates of the optimum human population, enabling abundance and flourishing for all - some go as low as only 100,000 humans. See here for different scenarios: https://populationmatters.org/news/2023/05/sustainable-population-the-earth4all-approach/

I think having a separate section for community posts has greatly improved my experience of the forum. However, I think there are still quite a lot of posts that stay on the front page for a long time for similar reasons to why community posts did - because they '[interest] everyone at least a little bit' and/or are 'accessible to everyone, or on topics where everyone has an opinion'.

I want to see posts that do things like present the results of significant work get more attention, and to a lesser extent posts that are topical - i.e. announcements abo... (read more)

I imagine there could be a useful office in a city with ~20 people using it regularly and ~100 people interested enough in EA to come to some events, and I wouldn't think of that city as an "EA hub".

I also think e.g. a London office has much more value than e.g. an Oxford or Cambridge office (although I understand all three to be hubs), even though Oxford and Cambridge have a higher EA density.

located in an existing hub so that program participants have plenty of people outside the program to interact with

I don't understand this consideration. It seems to me that people located in a place with a more robust existing community are the people that would counterfactually benefit the least from a place to interact with other EAs, because they have plenty of opportunities to do so already.

I'm assuming by "hub" you mean "EA hub", but if by "hub" you mean "a place with high population density/otherwise a lot of people to talk to", then this makes sense... (read more)

2
calebp
3mo
I agree that people in existing EA hubs are more likely to come across others doing high value work than people located outside of hubs. That said, on the current margin, I still think many counterfactual connections happen at office spaces in existing EA hubs. In the context of non-residential spaces, I'm not really sure who would use an EA office space outside existing EA hubs, so I'm finding the comparison between an office in a hub vs. an office outside a hub a little confusing (whereas with CEEALAR I understand who would use it).

Can you say more precisely what it means for a fund to be recommended? For instance, how should a donor compare giving to one of the "recommended funds" to giving to a specific charity or project directly? (and by extension one of GWWC's new funds over a specific charity)

4
Sjir Hoeijmakers
5mo
We explain how we view funds vs. charities more generally here. And for the GWWC cause area funds, we answer your question for each individual fund on their page, e.g. here for the Global Health and Wellbeing Fund, under "How does donating to this fund compare to similar giving opportunities?".

How did you choose the set of evaluators to evaluate -- for instance, why evaluate LTFF and LLF over FP's GCR fund? Were there other evaluators considered for the process but not evaluated?

4
Sjir Hoeijmakers
5mo
Thanks for your question! We explain the general principles we used to choose which evaluator to investigate here, and go into our specific considerations for each evaluator in their evaluation reports. For FP's GCR Fund compared to LTFF and LLF specifically, some of the main considerations were (1) our donors had so far been donating most to the LTFF, so the stakes were higher there, and (2) Longview was one of the most-named options by other effective giving organisations as an evaluator they weren't relying on yet but were interested in learning more about. And yes there are other evaluators we've considered and are considering for future evaluations, some of which we mention throughout the reports. See here for an overview of the impact-focused evaluators making publicly available recommendations that we are currently aware of, and which we may consider in our next iterations of this project.

It's kind of jarring to read that someone has been banned for "violating a norm" - that word to me implies that norms are informal agreements among community members. Why not call them "rules"?

tabforacause - a browser extension which shows you ads and directs ad revenue to charity - has launched a way to set GiveDirectly as the charity you want to direct ad revenue to. 

It doesn't raise a lot of money per tab opened, obviously, but I'm not using my new-tab page for anything else and find the advertising unobtrusive - it's in the corner, not taking up the whole screen - if you're like me in these respects it could be something to add.

4
Benjamin M.
6mo
Thanks for pointing this out; I'll note that Partners in Health is also available, and GiveWell seems to like them but doesn't think that they beat the GiveWell charity bar, at least when this was written (https://www.givewell.org/international/charities/PIH#:~:text=Partners%20in%20Health%20provides%20comprehensive,network%20of%20community%20health%20workers.). I'd be interested in seeing anything about whether Partners in Health is a better option than GiveDirectly.

How many evaluators typically rate each grant application?

3
Linch
7mo
Right now, ~2-3 

Is there a place to donate to the operations / running of LTFF or the funds in general?

4
Linch
8mo
Not a specific place yet! In the past we've asked specific large donors to cover our costs (both paid grantmaker time and operational expenses). Going forward, we'd like to move towards a model where all donors pay a small percentage, but this is not yet enacted. In the meantime, you can make a donation to EA Funds and email us to say you want the donation to be earmarked for operational expenses. :)

I'd think the article you're referencing (link) basically argues against considering "daode" to mean "morality" and vice-versa. 

The abstract: "In contemporary Western moral philosophy literature that discusses the Chinese ethical tradition, it is a commonplace practice to use the Chinese term daode 道德 as a technical translation of the English term moral. The present study provides some empirical evidence showing a discrepancy between the terms moral and daode."

3
Joseph Lemien
8mo
Yes. The idea of English immoral and Chinese bu daode not being quite the same is a big part of the paper.

Hm, I think Hamish's estimate of the cost included a bunch of tinkering with the settings; I can see it going either way. Another thing I think is more important is the flexibility to make code changes and iteratively improve - how do you feel Notion would do with that? I'm curious to see what you managed to get on Notion if you're willing to talk through it with us.

(BTW, are you the Patrick Liu who participated in the Stampy hackathon this past weekend?)

Answer by Siao Si, Aug 25, 2023

I see you already volunteer on aisafety.info! From working on that project, these are some areas I think could benefit from being made more accessible (on our platform or otherwise - we're working on these but definitely could use the help + I would be really happy to see them worked on anywhere):

  • The research agendas and strategies of various alignment orgs and independent researchers
  • AI policy: The proposals that have been made and the content of active and forthcoming policy documents
  • Forecasting: The predictions that have been made and the methods by which
... (read more)

Thanks for pointing this out! I've fixed it in this post and we'll look into checking it automatically

Yes, there's one this weekend. Thanks for flagging the link, I've fixed it now.

Oh I see, thanks! - I didn't realise this because the statement that appears after indicating you've been personally referred is: "Since you were referred to this position, the rest of the application is optional" which makes it sound like it wouldn't be optional if you weren't referred.

I just looked at the application for the role of content specialist for CEA, which seems to involve a lot of working on this forum. 

I noticed that if one indicates they have been personally referred by someone 'involved in effective altruism', one is given the option to skip 'the rest of the application' - which seems like the majority of the substantive information one is asked to give. 

This seems overtly nepotistic, and I can't think of a good reason for it - can anyone give one?

6
David M
9mo
Some reasons could be:
a) The purpose of the rest of the questions is to inform the initial sift, and not later stages of the application, and if you have been referred by a trusted colleague, then there is no further use of the optional questions to the initial sift, so it would be a waste of applicants' time.
b) Saving applicants' time on the initial application makes you likely to receive more applications to choose from.
However, these referrals could indeed have a nepotistic effect by allowing networking to have more of an influence on the ease of getting to stage 2. I was referred to apply to this job by someone who was close to another hiring round I was in (where I reached the final stage but didn't get an offer).
3
Jonathan_Michel
9mo
I can see that this does not feel great from a nepotism angle. However, as Weaver mentions, the initial application is only a very rough pre-screening, and for that, a recommendation might tip the scales (and that might be fine).

Reasons why this is not a problem:

First, expanding on Weaver's argument: if the application process is similar to other jobs in the EA world, it will probably involve 2-4 work trials, 1-2 interviews, and potentially an on-site work trial before the final offer is made. The reference maybe gets an applicant over the hurdle of the first written application, but won't be a factor in the evaluation of the work trials and interviews. So it really does not influence their chances too much.

Secondly, speaking of how I update on referrals: I don't think most referrals are super strong endorsements by the person referring, and one should not update on them too much. I.e. most referrals are not of the type "I have thought about this for a couple of hours, worked with the person a lot in the last year, and think they will be an excellent fit for this role", but rather "I had a chat with this person, or I know them from somewhere, and thought they might have a 5%-10% chance of getting the job, so I recommended they apply".

Other reasons why this could be bad:

1. The hiring manager might be slightly biased and keep them in the process longer than they ought to (however, I do not think this would be enough to turn a "not above the bar for hiring" person into a "top three candidate" person). Note that this is also bad for the applicant, as they will spend more time on the application process than they should.
2. The applicant might rely too much on the name they put down and half-ass the rest of the application, but in case the hiring manager does not know the reference, they might be rejected, although their non-half-assed application would have been good.

The rest of the application also seems to be optional if one indicates that they have not been personally referred by someone. Do you get something different?

https://www.loom.com/share/c0ef87a96a1c4d28bfc0df2e48d7662b 

8
Weaver
9mo
I think that the shorthand of "this person vouches for this other person" is a good enough basis for a lot of pre-screening criteria. Not that it makes the person a shoo-in for the job, but it's enough to say that you can get by on a referral.

You might say this is a strange way to pick people, but this is how governments interview people for national security roles. They check references. They ask questions. I imagine more questions would be asked of the third party who is 'personally referring' the applicant, leading to a slightly different series of interviews anyway.

In my experience, people have to work a lot harder to get a job than to keep one. I know that it's true with everyone that referred me to just about every position. Then if I perform badly it looks poorly on them, but after a certain time, I'm the one referring people onwards, so I have to make my own assessment of whether I'm willing to put my reputation on the line.

Should be fixed now, thanks for highlighting.

There's EA VR - they're listed as inactive but I think there's some activity in their Discord. Look forward to seeing you around, and feel free to ping anyone with 'EAGather Steward' in their name for a tour :)

2
Ozzie Gooen
1y
Good to know, thanks!

I'm not sure about this particular case, but I don't think the value of the property increasing over time is a generally good argument for why investments need not be publicly discussed. A lot of potential altruistic spending has benefits that accrue over time, where the benefits of money spent earlier outweigh the benefits of money spent later - as has been discussed extensively when comparing giving now vs. giving later.

The whole premise of EA is that resources should be spent in effective ways, and potential altruistic benefits are no excuse for spending money ineffectively.

It seems like setting ourselves up for selection bias if we listen only to people with experience of "how bad journalism gets". We also want to get advice from people with good experiences with journalism, because they might be doing things that make them more likely to have good experiences, and presumably know how to continue having good experiences, having gotten them.

There may be some parts of EA where the media don't start out nicely inclined to the area at hand, but I think on many topics we might care to engage with the... (read more)

I think it would be better if agree/disagree voting didn't follow the typical karma rules where different users' votes carry different weights. As it stands I often don't know how many people expressed agreement vs. disagreement, which feels like the information I actually want, and it doesn't make intuitive sense that one forum user might be able to "agree twice as much" as another with a comment.

3
Sharang Phadke
1y
Thanks for the feedback. The tradeoff I see is that it could be valuable for folks to be able to express a strong vs weak opinion. Perhaps what we need is to give a better breakdown of how the votes went?
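To make the suggested breakdown concrete, here is a minimal sketch (my own illustration with hypothetical vote weights, not how the Forum actually implements voting) of a tally that keeps weighted agree/disagree scores but can also report the raw headcount the parent comment asks for:

```python
# Sketch only: hypothetical weights, not the Forum's real implementation.
from dataclasses import dataclass, field

@dataclass
class AgreementTally:
    votes: dict = field(default_factory=dict)  # user_id -> signed vote weight

    def vote(self, user_id: str, agrees: bool, strong: bool = False) -> None:
        weight = 3 if strong else 1  # hypothetical strong/weak weights
        self.votes[user_id] = weight if agrees else -weight

    def weighted_score(self) -> int:
        # The single number currently displayed.
        return sum(self.votes.values())

    def breakdown(self) -> tuple:
        # Raw headcount: (number who agreed, number who disagreed).
        agreed = sum(1 for w in self.votes.values() if w > 0)
        disagreed = sum(1 for w in self.votes.values() if w < 0)
        return agreed, disagreed

tally = AgreementTally()
tally.vote("a", agrees=True, strong=True)
tally.vote("b", agrees=False)
tally.vote("c", agrees=False)
print(tally.weighted_score(), tally.breakdown())  # 1 (1, 2)
```

Showing both numbers side by side would preserve strong vs. weak expression while still answering "how many people agreed vs. disagreed".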

Perhaps you've seen these things already if you're thinking about having kids, but Julia Wise and Jeff Kaufman have written extensively about their decision to be parents and their experiences parenting. The stuff I could find that addresses the question of making the decision:

Thanks for writing this post! I think promoting diversity in EA is incredibly important and I appreciate your contribution to it.

However, I get a feeling here that you've started with an underlying assumption that "EA should cater to women", which I don't see the argument for. Certainly, if there's a stark lack of women throughout EA, I'd feel that there's a problem that needs to be specifically addressed - but I don't think this is the case. 

You present information about the academic fields that correlate with participation in EA, and note that there... (read more)

5
[anonymous]
2y
Hi smallsilo,

Thank you for your feedback, it is appreciated. It is fair to say that a key assumption of the article is that EA should cater to everyone, and therefore it should also cater to women.

My central argument is not that there is a stark lack of women throughout EA (conversely, I recognise for example that CEA notes that from 2017-2020, their staff gender balance has been roughly equal between women and men). However, there does appear to be a stark lack of women at the front and centre of EA who e.g. write key books. It is also clear that EAs are still disproportionately male.

The main point that I'm hoping to convey is that there are women in EA, but that they are not necessarily being catered for. That is to say, in theory you could have an EA community comprised 100% of women, but if the content is not cognisant of their needs (i.e., advice is not tailored to them, or research does not consider them, when it ought to), then that in itself is not a good thing. If you do not agree with the assumption that women (and other groups) have specific needs/considerations, then perhaps that is where our values & assumptions differ.

I agree that the 80k framework isn't suitable for everyone, but in that case I argue that this should be made explicit. I also agree that some of the issues presented are partially the result of broader, more complicated dynamics elsewhere - but I don't think that is an excuse not to consider or address them.

Finally, I fully concur that the demographic data I present on other minorities seems like a significant issue (although I would not argue that it is more so). It would be great to see some further writing on those issues. Thank you for letting me know about magnifymentoring; I hadn't come across them in my searches, but I'll take a look and edit if required.

Thanks for this! It wouldn't have occurred to me to consider the decline of footbinding as a case study of moral progress.

I think you've probably noted this and perhaps didn't mention it because it's not directly relevant to the main questions you're investigating, but I think it's important to note for someone who only reads this post that having bound feet was a status symbol - it began among the social elite and spread over time to lower social classes, remained a status symbol because families who needed girls to conduct agricultural labor could not partake in the practice, and in practice an incentive to do it was to increase marriage prospects.

3
rosehadshar
2y
Thanks for this point. I'm actually a bit unsure how true it is that the status element of footbinding was important. Certainly that's an established narrative in the literature (e.g. Shepherd buys it). Brown, Bossen and Hill have an article I've only skimmed called 'Marriage Mobility and Footbinding in Pre-1949 Rural China: A Reconsideration of Gender, Economics, and Meaning in Social Causation' (link here: https://www.cambridge.org/core/journals/journal-of-asian-studies/article/marriage-mobility-and-footbinding-in-pre1949-rural-china-a-reconsideration-of-gender-economics-and-meaning-in-social-causation/CF5C5F1E441C5E2BF56BBA8B56F55835), which argues as follows:

  • "In our sample of 7,314 rural women living in Sichuan, Northern, Central, and Southwestern China in the first half of the twentieth century, two-thirds of women did not marry up. In fact, 22 percent of all women, across regions, married down. In most regions, more women married up than down, but in all regions, the majority did not marry hypergamously. Moreover, for most regions, we found no statistically significant difference between the chances of a footbound girl versus a not-bound girl in marrying into a wealthier household, despite a common cultural belief that footbinding would improve girls' marital prospects."

There's an article I haven't read called 'Footbinding, Hypergamy, and Handicraft Labor: Evaluating the Labor Market Explanation of Footbinding', which sounds like it pushes back on these arguments. Link here: https://link.springer.com/article/10.1007/s40806-020-00271-9

Also, I think it's not clear how true it is that "families who needed girls to conduct agricultural labor could not partake":

  • Many scholars note anecdotal evidence of footbound women working in fields
  • In Brown and Satterthwaite-Phillips' model, performing agricultural labour is not significant, although girls who did agricultural labour were less likely to be footbound. I can't immediately find a figure for % of

A suggestion that might preserve the value of giving higher karma users more voting power, while addressing some of the concerns: give users with karma the option to +1 a post instead of +2/+3, if they wish.

I think the issue is more that different users have very disparate norms about how often to vote, when to use a strong vote, and what to use it on. My sense (from a combination of noticing voting patterns and reading specific users' comments about how they vote) is that most are pretty low-key about voting, but a few high-karma users are much more intense about it and don't hesitate to throw their weight around. These users can then have a wildly disproportionate effect on discourse because if their vote is worth, say, 7 points, their opinion on one piece ... (read more)

Thanks for writing this up! 

I'm not sure about the implications, but I just want to register that deciding after each roll whether to roll again, for a total of n rolls, is not the same as committing to n rolls at the beginning. The latter is equivalent in expected value to rolling every trial at the same time; the former has a much higher expected value. It is still positive, though.

1
Emrik
2y
The cumulative EV of n decisions to roll repeatedly is

$$A: \sum_{i=0}^{n} \left[u \times 2^i \times p\right] = up\sum_{i=0}^{n} 2^i$$

(where $u$ is the initial utility of 10, and $p$ stays constant at $\frac{6^3-1}{6^3}$), whereas the EV of committing to roll up to n times is

$$B: up^n\sum_{i=0}^{n} 2^i$$

which is much, much lower than A, as you point out. But then again, for larger values of n, you're very unlikely to be allowed to roll n times. The EV of n decisions to roll (A), times the probability of getting to the nth roll ($p^{n-1}$), is

$$A \times p^{n-1} = \left(up\sum_{i=0}^{n} 2^i\right) \times p^{n-1} = up^n\sum_{i=0}^{n} 2^i = B$$

In other words, A collapses to B if you don't assume any special luck. Which is to say that committing to a strategy has the same EV ex ante as fumbling into the same path unknowingly. This isn't very surprising, I suppose, but the relevancy is that if you have a revealed tendency to accept one-off St. Petersburg Paradox bets, that tendency has the same expected utility as deliberately committing to accept the same number of SPPs. If the former seems higher, then that's because your expectancy is wrong.

More generally, this means that it's important to try to evaluate one-off decisions as clues to what revealed decision rules you have. When you consider making a one-off decision, and that decision seems better than deliberately committing to using the decision rules that spawned it, for all the times you expect to be in a similar situation, then you are fooling yourself and you should update. If you can predict that the cumulative sum of the EV you assign to each one-off decision individually as you go along will be higher, compared to the EV you'd assign ex ante to the same string of decisions, then something has gone wrong in one of your one-off predictions and you should update.

I've been puzzling over this comment from time to time, and this has been helpfwly clarifying for me, thank you. I've long been operating like this, but never entirely grokked why as clearly as now.
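For readers who want to check the algebra numerically, here is a minimal Python sketch of the identity above, assuming the game as described in the comment (initial utility u = 10, per-roll survival probability p = (6³−1)/6³, utility doubling each roll); the function names are my own, not from the original:

```python
# Sketch: numerically verifying A * p^(n-1) = B for the setup described above.
import math

def ev_repeated_decisions(u, p, n):
    """A: cumulative EV of the successive one-off decisions to roll (i = 0..n),
    each evaluated as if you were certain to reach that roll."""
    return u * p * sum(2**i for i in range(n + 1))

def ev_committed(u, p, n):
    """B: EV of committing up front to roll up to n times."""
    return u * p**n * sum(2**i for i in range(n + 1))

u, p, n = 10, (6**3 - 1) / 6**3, 20
A = ev_repeated_decisions(u, p, n)
B = ev_committed(u, p, n)

# A "collapses to" B once weighted by the chance of surviving to the nth roll.
print(A, B, A * p**(n - 1))
print(math.isclose(A * p**(n - 1), B))  # True
```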

I wanted to describe my personal experience in case it shifts anyone like me towards applying. I was accepted, received travel support, and went to EAG London last month. 

Initially, I considered the likelihood that I would be accepted and be able to go to be very low: I didn't think I was involved enough in EA and I didn't think it made sense for me to receive travel support to go as I live very far from London. I also didn't think that I 'deserved' to go: I reasoned that I shouldn't take a spot from someone more engaged in EA or who could provide more value to... (read more)

2
Emrik
2y
I think this is one of the reasons EAG (or other ways of informally conversing with regular EAs on EA-related things) can be extremely valuable for people. It lets you get epistemic and emotional feedback on how capable you are compared to a random EAG-sampled slice of the community. People who might have been underconfident (like you) update towards thinking they might be usefwl.

That said, I think you're unusually capable, and that a lot of other people will update towards feeling like they're too dumb for EA. But the value of increased confidence in people like you seems higher than the possible harm caused by people whose confidence drops. And there are reasons to expect online EA material to be a lot more intimidating due to being way more filtered for high-status (incl. smart), so exposure to low-filtered informal conversations at EAG probably causes higher confidence in people who haven't had a lot of low-filtered informal exposure yet (so if that describes you, reader, you should definitely consider going).

Personally, I have a history of feeling like everything I discover and learn is just a form of "catching up" to what everyone else already knows, so talking to people about my ideas has increased my confidence a lot.

Thanks for writing this up!

What are the use cases you envision for terms like these ones?

I appreciate the concern that people might feel deceived when finding out that the movement doesn't look quite like what they were expecting, but I think this might be better addressed by pointing out to new people that EA is a broad group with a variety of interests, values, and attitudes.

I'm concerned that splitting up EA according to aesthetics/subcultures might be harmful, and I think it should be handled with care. The human tendency to look for identity labels and sub... (read more)

7
Devin Kalish
2y
Thanks for the comment. I agree with most of this, and think that this is one of the major possible costs of labels like this, but I worry that some of these costs get more attention than the subtler costs that come from failing to label groups like this. Take the label of "Effective Altruism" itself, for example: the label does mean that people in the movement might have a tendency to rest easy, knowing that their conformity to certain dogmas is shared by "their people", but not using the label would mean sort of willfully ignoring something big that was actually true to begin with about one's social identity/biases/insularity, and hamper certain types of introspection and social criticism.

Even today there are pretty common write-ups by people looking to dissolve some aspect of "Effective Altruism" as a group identifier as opposed to a research project or something. This is well meaning, but in my opinion has led to a pretty counterproductive movement-wide motte and bailey often influencing discussions. When selling the movement to others, or defending it from criticism, Effective Altruism is presented as a set of uncontroversial axioms pretty much everyone should agree with, but in practice the way Effective Altruism is discussed and works internally does involve implicit or explicit recognition that the group is centered around a particular network of people and organizations, with their own internal norms, references, and Overton window.

I think a certain cost like this, if to a lesser extent, comes from failing to label the real cliques and distinct styles of reasoning and approaches to doing good that to some extent polarize the movement. This is particularly the case for some of the factors I discuss in the post, like the fact that different parts of the movement feel vastly more or less welcoming to some people than others, or that large swaths of the movement may feel like a version of "Effective Altruism" you can identify with, and others aren't, and thi

(disclaimer that I talked to Sasha before he put up this post) but as a 'random EA person' I did find reading this clarifying.

It's not that I believed that "orthogonality thesis is the reason why AGI safety is an important cause area", but that I had never thought about the distinction between "no known law relating intelligence and motivations" and "near-0 statistical correlation between intelligence and motivations".

If I'd otherwise been prompted to think about it, I'd probably have arrived at the former, but I think the latter was rattling around inside my system 1 because the term "orthogonality" brings to mind orthogonal vectors.

I've sometimes thought about whether 'immortality' is the right framing, at least for the current moment. As AllAmericanBreakfast points out, I think that anti-ageing research is unlikely to produce life extensions in the 100x to 1000x range all at once.

In any case, even if we manage to halt ageing entirely, ceteris paribus there will still be deaths from accidents and other causes. A while ago I tried a Fermi calculation on this; I think I used this data (United States, 2017). The death rate for people between 15-34 is ~0.1%/year; this rate of death would pu... (read more)
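Since the comment is truncated, here is my own rough reconstruction of the kind of Fermi calculation it describes, using only the ~0.1%/year figure quoted above; the exact numbers in the original may differ:

```python
# Sketch: life expectancy if ageing were halted but a constant ~0.1%/year
# death rate (the US 15-34 figure quoted above) applied indefinitely.

annual_death_rate = 0.001  # ~0.1%/year

# With a constant hazard rate, expected remaining lifespan is ~ 1 / rate.
print(f"Expected lifespan: ~{1 / annual_death_rate:.0f} years")

# Probability of surviving various horizons under the same assumption.
for years in (100, 500, 1000):
    survival = (1 - annual_death_rate) ** years
    print(f"P(surviving {years} years) ~ {survival:.0%}")
```

On these assumptions, halting ageing alone caps expected lifespan at roughly a thousand years rather than anything like immortality, which is consistent with the comment's doubt that 'immortality' is the right framing.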

1
Marek Veneny
2y
I'll be sure to check them out, thanks!

Thank you!

Thanks for your list and please do!

I don't think ignoring animal feed makes sense here. I can't find the source at the moment, but the vast majority of Peruvian anchoveta is reduced to fish meal and exported to countries like China to serve as feed for land animals and even species of larger fish; the incentive structure is such that factories that are supposed to produce anchoveta derivatives for direct human consumption illegally produce fish meal instead.

I think increased consumption of fish sauce over other animal products would be moving down the food chain and would result in a net decrease in animal suffering, not to mention being advantageous for fishing-reliant economies there.

1
Vgvt
2y
I was initially going to try to take into account animal feed, using the Faunalytics data on animals killed per food product, but if I understood the methodology right, it looks like they divided the total number of fish killed for feed by the total number of farmed predators (farmed fish + pigs + chickens). Considering I was looking specifically at chickens, I figured it would be better to just mention that rather than try to incorporate it based on flimsy data. If you could point me to good data on standard diets of farmed predators (including common but unconventional carnivores like crocodilians, minks, foxes, edible frogs, etc.), or estimates of how many animals they are eating, that would be interesting.