Tsunayoshi

477 · Joined Sep 2019

Comments (64)

  1. It's very bad that the movement is focusing outreach on elite universities. Proximity to them should not be a criterion. We should invest in less elitist communities that can make the movement more diverse.

"Very bad" is a strong statement. Do you mind elaborating on why you think diversity is important in itself, and what kind of diversity you refer to (e.g. diversity of viewpoints, diversity of ethnicity, etc.)? FWIW, Harvard students' ethnic makeup differs somewhat from the US population, but not by very much (once you factor out non-residents, the underrepresentation does not seem to exceed a factor of 2.0).

Nevertheless, it is true that focusing on elite universities is bound to attract students who are in some ways different from the population at large. However, focusing on them has the benefit of finding ambitious students with comparatively larger chances of impacting the world.

Additionally, elite universities simply have a higher proportion of students who are interested in EA in the first place, so network effects mean that these universities will probably have more fruitful and lively EA student groups. As a local group organizer in Germany, where we do not have elite universities, I find this difference palpable; local EA groups in Oxford and London seem much more vibrant.

But ultimately we're here to reduce existential risk or end global poverty or stop factory farming or other important work. Not primarily to make each other happy, especially during work hours.

You raise many good points, but I would like to respond to (not necessarily contradict) this sentiment. Of course you are right, those are the goals of the EA community. But by calling this whole thing a community, we cannot help but create certain implicit expectations. Namely, that I will not be treated simply as a means to an end, i.e. assessed and valued only by how promising I am, how large my counterfactual impact could be, or how much I could help an EA org. That is just being treated as an employee, which is fine for most people, as long as the employer does not call the whole enterprise a community.

Rather, it vaguely seems to me that people expect communities to reward and value their engaged members, and to consider the wellbeing of the members important in itself (and not merely so that members can be, e.g., more productive).

I am not saying this fostering of community should happen in every EA context, or even at EA Globals (maybe a more local context would be more fitting). I am simply saying that if every actor just bluntly considers impact, and community involvement is rewarded nowhere, then people are likely, and also somewhat justified, to feel bitter about the whole community thing.

Very good post! Some potential tips for how people who have had experiences similar to what you described can feel more included:

  1. Replacing visits to the EA Forum with visits to more casual online places: various EA Facebook groups (e.g. EA Hangout, or groups related to your cause area of interest), the EA Discord server, or probablygood.org (thanks to another commenter for mentioning the site).
  2. Attending events hosted by local EA groups (if close by). These events are in my experience less elite and more communal. 
  3. If attending larger EA conferences, understand that many people behave as if they were in a job interview (because the community is so small, a reputation as a smart person can be beneficial), and will consequently, e.g., avoid asking questions about concepts they do not know.

AFAIK there is one positive, randomized trial for a nasal spray containing Iota-Carrageenan (Carragelose): "The incidence of COVID-19 differs significantly between subjects receiving the nasal spray with I-C (2 of 196 [1.0%]) and those receiving placebo (10 of 198 [5.0%])." It is available at least in Europe, and in the UK I believe under the brand name Dual Defence. Why it has not received more attention is beyond me.
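For intuition, here is a quick back-of-the-envelope calculation from the quoted incidence numbers (a sketch only; the derived figures are mine, not from the paper):

```python
# Quoted trial numbers: 2/196 infections with the I-C spray vs 10/198 with placebo.
treated_cases, treated_n = 2, 196
placebo_cases, placebo_n = 10, 198

risk_treated = treated_cases / treated_n   # ~1.0%
risk_placebo = placebo_cases / placebo_n   # ~5.1%

relative_risk = risk_treated / risk_placebo
absolute_risk_reduction = risk_placebo - risk_treated
nnt = 1 / absolute_risk_reduction          # number needed to treat

print(f"relative risk: {relative_risk:.2f}")                      # ~0.20
print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # ~4.0%
print(f"number needed to treat: {nnt:.0f}")                       # ~25
```

So the point estimate is roughly an 80% relative reduction, though with only 12 total cases the confidence interval around that figure is wide.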

Interesting!

Does bullying increase with the onset of adolescence? Schools alone cannot be the factor causing the decrease in life satisfaction, since the decrease seems to occur after grade 5, yet students have already been in school before that.

(Caveat: Due to space and time constraints, this comment aims to state my position and make it somewhat plausible, but not to defend it in depth. Also, I am unsure whether the goal of bioethicists is to come up with their own ethical positions, or to synthesize the ethics of the public in a coherent way.)

For most of this post, I draw on decisions made by (bio)ethics committees that advise governments around the world. I believe those are a good basis for assessing the field, because they are generally staffed by researchers and are independent. My cursory searching found such committees in France and Austria; the members of the Austrian committee are mostly either high-ranking bioethics professors, or at least work in the field in some capacity. Their reports and votes are public. The information on the French members is less transparent. I have not looked into the various US ethics commissions because their appointments seem much more influenced by politics.

You make a great disambiguation of different levels of criticism against "bioethics". The strong version of the view is that bioethicists as academic researchers reach bad conclusions, even compared to the general population.

I believe there is good justification for holding this view. In particular, many of the decisions made by ethics commissions are highly counter-intuitive to me:

  1. Many of the provisions of informed consent differ from what the general public would consider reasonable. For example, in challenge trial protocols, even those created by proponents, payment of participants beyond compensation for their time was discouraged in order "not to take advantage of the poor". I believe most people would disagree with that (depending on the framing), as would most EA types.
  2. The bioethics committee of Austria explicitly speaks out against surrogate motherhood: "In view of the manifold and complex social, mental and legal problems connected with “surrogate motherhood”, the Bioethics Commission recommends that methods of reproductive medicine be denied to male homosexual couples." (I could not find a poll of the public for Austria, but the public in France is supportive.)
  3. The commission in France recommends against physician-assisted suicide and euthanasia, while the commission in Austria recommends only against the latter (p. 61).
  4. The WHO advisory committee on COVID-19 challenge trials was split on whether it would be ethical to conduct one if no treatment were available (p. 9). Most of the members are, however, not bioethicists.
  5. No strong evidence, but in reading these reports I have not seen them actually make a cost-benefit calculation or refer to one. I think doing so would be very unusual for them.

If one accepts these decisions as bad, then I do not believe that the defence of institutional dynamics is sufficient to explain them away. The members are not appointed by a politicized process, but seem to just be experts in their field, and certainly not career bureaucrats. 

But they themselves and their decisions are sometimes public, so maybe they fear backlash over some decisions? However, there is often a minority opinion advocating for more permissibility, so presumably holding such positions is both possible and does not lead to huge backlash.

"Moreover, I observe that machine-learning or model-based or data-analysis solutions on forecasting weather, pandemics, supply chain, sales, etc. are happily adopted, and the startups that produce them reach quite high valuations. When trying to explain why prediction markets are not adopted, this makes me favor explanations based on high overhead, low performance and low applicability over Robin Hanson-style explanations based on covert and self-serving status moves." 

I agree that the success of bespoke ML tools for forecasting negates some of the Hansonian explanations, but probably not most of them.

  1. When ML tools replace human forecasts, executives' credibility is not threatened: the executives do not have to provide their own forecasts that could later be falsified.
  2. (Speculative) The forecasts produced by such tools are presumably not visible to every employee, while many previous instances of prediction markets had publicly visible aggregate predictions. 
  3. These tools forecast issues that managers are not traditionally expected to be able to forecast. Weather and pandemics are certainly not in the domain of executives, and I am unsure whether managers usually engage in supply chain and sales predictions.   
  4. These tools do not actually provide answers that could be embarrassing to executives, and for which prediction markets aggregating human expertise could be useful. For example, machine learning cannot predict "conditional on CEO Smith's proposal being adopted, what will our sales be?". A good test of this explanation could be how many companies allow employees to give feedback on strategy proposals, visible to all employees.

Thanks for the writeup! This is surely a perspective that we are missing in EA. 

I did not have time to read all of the post, so I am not sure whether you address this: the cost-effectiveness estimates of XR are ex post, and of just one particular organization. To me it seems obvious that some movements/organizations achieve great impact through protest; it is more difficult to determine beforehand which ones will.

So, insofar as you propose funding existing projects: do you believe that the impact and behaviour of a movement are stable? Unlike NGOs, movements seem much more amenable to unforeseen (bottom-up) change, as there is inherently less control over them.

They did not have a placebo-receiving control group. 

All the other points you mention seem very relevant, but I somewhat disagree with the importance of a placebo control group when it comes to estimating counterfactual impact. If the control group is assigned to standard of care, they will know they are receiving no treatment and thus not experience any placebo effect (though, contrary to what you write, regression to the mean is still expected in that group), while the treatment group experiences placebo effects plus the "real" effect of treatment. This makes it difficult to do causal attribution (placebo vs. treatment), but on the other hand it is exactly what happens in real life when the intervention is rolled out!

If there is no group psychotherapy, the would-be patients receive standard of care, so they will not experience the placebo effect either. Thus a non-placebo design estimates precisely what we are considering doing in real life: giving an intervention to people who will know they are being treated and who would otherwise just have received standard of care (in the context of Uganda, this presumably means receiving nothing?).

Ofc, there are issues with blinding the evaluators; whether StrongMinds has done so is unclear to me. All of your other points seem fairly strong, though.

 

You’d also expect that class of people to be more risk-averse, since altruistic returns to money are near-linear on relevant scales at least according to some worldviews, while selfish returns are sharply diminishing (perhaps logarithmic?).

 

It's been a while since I have delved into the topic, so take this with a grain of salt: 

Because of the heavy influence of VCs who follow a hits-based model, startup founders are often forced to aim for $1B+ companies because they have lost control of the board, even if they themselves would prefer the higher chance of success of a <$1B company. That is to say, more people and startups pursue the (close to) linear utility curve than you would expect based on founders' motivations alone. How strong that effect is, I cannot say.
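The conflict between a near-linear (VC-like) evaluator and a founder with diminishing returns to wealth can be made concrete with a toy expected-utility comparison (all payoffs and probabilities below are hypothetical, purely for illustration):

```python
import math

# Two hypothetical strategies (numbers are illustrative assumptions):
safe = {"prob": 0.5, "payout": 50e6}    # modest exit, decent odds
swing = {"prob": 0.02, "payout": 2e9}   # unicorn-or-bust

def expected_value(bet):
    """Linear utility: what a hits-based VC portfolio roughly optimizes."""
    return bet["prob"] * bet["payout"]

def expected_log_utility(bet, baseline=1e5):
    """Log (diminishing) utility over final wealth, starting from a baseline."""
    win = math.log(baseline + bet["payout"])
    lose = math.log(baseline)
    return bet["prob"] * win + (1 - bet["prob"]) * lose

# Linear evaluator prefers the swing; log-utility founder prefers the safe exit.
print(expected_value(safe), expected_value(swing))
print(expected_log_utility(safe), expected_log_utility(swing))
```

Under these assumed numbers the swing has the higher expected dollar value, while the safe exit has the higher expected log utility, which is the board-vs-founder disagreement in miniature.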

This conflict appears to be well known; see here for a serious treatment and here for a more humorous one.
