Academics will not find a new journal run by non-academics credible, much less prestigious. No one would be able to put this journal on an academic CV. So there's really no benefit to "publishing" relative to posting publicly and letting people vote and comment.
Metformin isn't a supplement though. It's unlikely it would ever get approved as a supplement or OTC, especially given that it has serious side effects.
Really interesting. I appreciate you sharing this and your attitude toward this. Good luck with your career in philosophy - epistemic honesty will take you far.
You might consider cross-posting this on a site like Medium to reach a larger audience.
It's not either/or. It's likely not a single disease; it would probably be more accurate to call it a syndrome.
I'm not sure how the beliefs in Table 3 would lead to positive social change. Mostly just seems like an increase in some vague theism, along with acceptance/complacency/indifference/nihilism. The former is epistemically shaky, and the latter doesn't seem like an engine for social change.
You might as well randomly go through the list of multimillionaires/billionaires and cold-call them. Maybe not the worst idea, but there's nothing in particular to suggest this guy would be special.
Technology to do something like this is already being developed, but it's not nanotechnology: https://www.nature.com/articles/nmeth.3151
Nanotechnology is rarely the most practical way to probe very small things. People have been able to infer molecular structures since the 19th century. Modern molecular biology/biochemistry makes use of electron microscopy, fluorescence microscopy, and sequencing-based assays, among other techniques.
What do you mean by nanoscale neural probes? What are the questions that these probes would answer?
Modeling the risk of psychedelics as nonexistent seems like a very selective reading of Carbonaro 2016:
"Eleven percent put self or others at risk of physical harm; factors increasing the likelihood of risk included estimated dose, duration and difficulty of the experience, and absence of physical comfort and social support. Of the respondents, 2.6% behaved in a physically aggressive or violent manner and 2.7% received medical help. Of those whose experience occurred >1 year before, 7.6% sought treatment for enduring psychological symptoms. Three cases appeared associated with onset of enduring psychotic symptoms and three cases with attempted suicide."
You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.
Neither of those statements is upsetting to me.
It's often useful to be able to imagine what will be upsetting to other people and why, even if it's not upsetting to you. Maybe you'll decide that it's worth hurting people, but at least make your decisions with an accurate model of the world. (By the way, "because they're oversensitive" doesn't count as an explanation.)
So let's try to think about why someone might be upset if you told them that they're more likely to be a rapist because of their race. I can think of a few reasons: They feel af...
I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.
This unpleasant tangent is a great example of why applying aggregate statistics to actual people isn't a good strategy. It should be clear why people find the following statements upsetting:
Statistically, there are X rapists in the EA community.
Statistically, as a man/black person/Mexican/non-college grad/Muslim, there is X probability you're a rapist.
Let's please not go down this path.
I would far prefer being raped over a 1% chance of dying immediately. I think the tradeoff would be something like 100,000 to 1.
I don't think most of these will convince people to share your views, often because they come from different moral perspectives. They seem too negative or directly contradictory for people to change their minds - particularly the ones on social justice. However, it might help people understand your personal choices. What have been your results?
I'm a 4th year PhD student in bioinformatics. I've previously considered doing something similar, though I focused more on stem cell technology, which is most relevant to my current research. However, would definitely be interested in discussing further!
I agree with this for the most part, but let's not exclude people from EA who, like me, are low-IQ and high-libido.
It seems that you are vastly underestimating the intensity of psychological trauma that comes with rape.
Even if this is descriptively true (and I think it varies a lot - some people aren't bothered long-term), there's no reason that this is a desirable outcome. Everything is mediated through attitudes.
I'm convinced that most people have an instinctive reaction to sexual violence which involves psychological trauma being triggered automatically.
There's no reason that this should be the case.
Yet, if a child is raped, that's psychologically devastating. The damage can last their whole lives. Explain that.
There are a lot of factors that are difficult to untangle. The ways that adults or peers react can certainly have an influence. I heard one father saying that a sexual abuser "stole his daughter's innocence", or something in a similar vei...
103 - 607 male rapists in EA
False precision much? This seems like an inappropriately specific number - it makes it sound like you have concrete evidence, but in reality you're just multiplying the number of men in EA by 6%. I hope that this number won't start getting spread around.
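To make the false-precision point concrete, here is a minimal sketch (with made-up inputs, since the actual population figure isn't given) showing that this kind of headline number is just a product of assumptions, and swings widely when the assumed rate changes:

```python
# Hypothetical illustration: a "number of offenders" estimate produced
# by multiplying a population count by an assumed per-person rate.
def implied_count(n_men, offender_rate):
    """Point estimate implied by a head count and an assumed rate."""
    return n_men * offender_rate

# n_men=5000 and the rates below are invented for illustration only.
for rate in (0.01, 0.03, 0.06):
    print(f"assumed rate {rate:.0%}: {implied_count(5000, rate):.0f}")
```

The spread across plausible rates is the real uncertainty that a specific-sounding figure hides.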
A more tractable approach to reducing the trauma from sexual violence might be to change perceptions of sexuality. Many people believe that it's important for women to be sexually "pure", which is one reason that female victims experience trauma.
Feminists, to their ...
I also found this stat frustrating. The "A 1:6 ratio means 7 rapes per 6 women on average" stat frustrated me even more--it assumes that EA men are rapists at the base rate of the population at large (probably false), and that every time a rapist rapes someone, if the rapist is an EA, their victim must be an EA too.
I worry that hearing stats like this will cause women to avoid EA, which will then contribute to the imbalanced gender ratio that Kathy has identified as being part of the problem.
Treating Candida via diet isn't accepted science: https://www.mayoclinic.org/healthy-lifestyle/consumer-health/expert-answers/candida-cleanse/faq-20058174
So it's not surprising a doctor wouldn't diagnose you.
I consider GWAS applied, not basic, because it's not mechanistic. Most biologists I've spoken to have a fairly poor opinion of GWAS, as do I. Much of the biological research that gets funded is basic.
The p-value critique doesn't apply to many scientific fields. As far as I can tell, it mostly applies to social science and maybe epidemiological research. In basic biological research, a paper wouldn't be published in a good journal on the basis of a single p-value. In fact, many papers don't have any p-values. When p-values are presented, they're often so low (10^-15) that they're unnecessary confirmations of a clearly visible effect. (Silly, in my opinion.) Most papers rely on many experiments, which ideally provide multiple lines of evidence. It's also...
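The point about p-values like 10^-15 being unnecessary confirmations can be made concrete with a quick sketch: any clearly visible effect (say, a z-score around 8) already yields a p-value many orders of magnitude past any conventional threshold, so reporting it adds essentially nothing.

```python
import math

def two_sided_p(z):
    # Two-sided p-value for a z-score under a standard normal null.
    return math.erfc(abs(z) / math.sqrt(2))

# A clearly visible effect, e.g. z = 8, gives p on the order of 1e-15 --
# far past any threshold, so the p-value is a formality.
print(two_sided_p(8.0))
```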
"The p-value critique doesn't apply to many scientific fields." I agree with this, or at least that it is vastly weaker when overwhelming data are available to pin down results.
"As far as I can tell, it mostly applies to social science and maybe epidemiological research. "
I disagree with this.
For instance, p-value issues have been catastrophic in quantitative genetics. The vast bulk of candidate gene research in genetics was non-replicable p-hacking of radically underpowered studies. E.g. schizophrenia candidate genes replicate at chanc...
I'm a current PhD student in computational biology, so I can offer a perspective on academic research in biology. I agree that biologists aren't optimizing for benefiting humanity - instead, I think high-quality basic research gets the most respect and that academia can't be beat here in most cases.
EAs attempting to do biology outside academia have two options. They can try to circumvent basic research and simply "hack" biology by experimenting with various interventions. However, given the complexity of biological systems, this seems unlikely t...
Where do we draw the line? Are intrinsic abilities an acceptable topic of casual discussion? Do you think it would be humiliating for people who are being discussed as having less intrinsic ability?
I can see 1-3 being problems to some extent (and I don't think Kelly would disagree)... but "overrepresentation of vegetarians and vegans"?? You might as well complain about an overrepresentation of people who donate to charity.
So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything that people say that pattern matches to something that this group would say triggers a reflexive negative reaction. This manifests in various ways: you're inclined to attribute way more to the person's statements than what they're actually saying or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.
This used to be me... It wasn't so much my beliefs that changed (...
But you don't want discrimination hypotheses to be discussed either? I guess that could be an acceptable compromise, to not debate the causes of disparities but at the same time focus on improving diversity in recruitment.
I think there's a bit of an empathy gap in this community. When people are angry for what seems to be no reason, a good first step is to ask whether you've done something that made them feel unsafe/humiliated/demeaned/etc, even if that wasn't your intention. It doesn't take a lot of imagination to see how unsolicited exploration of "other hypotheses" (cough cough) for racial and gender disparities could be very distressing for the people who are being discussed as if they're not there.
Politics is rarely used as an example of a positive environment for women.
It's not just the actual numbers that are concerning (though I disagree with you that a 70% skew can be brushed off). It's the exclusionary behavior within EA.
Thanks Kelly. I agree that this is a problem in EA in ways that people don't realize. In retrospect, I feel stupid for not realizing how casual discussion of IQ and eugenics would be hurtful. Same thing with applying that classic EA skepticism to people's lived experiences.
Culture isn't the main reason I left EA, but it's #3. And I think it contributes to the top two reasons I felt alienated: the mockery of moral views that deviate from strict utilitarianism, and what I believed were naive, overconfident tactics.
"Same thing with applying that classic EA skepticism to people's lived experiences"
I suppose this comes down to why the person is sharing their lived experience. If someone is just telling you their story, you want to try and keep an open mind. On the other hand, if someone is sharing their lived experiences in order to make a political argument, a certain amount of criticism, whilst not being unnecessarily insensitive, is fair game.
Humans are generally not evil, just lazy
?
Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.
You're free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn't a clickbait title, I'm not concern trolling, I really have left the EA community. I don't know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y'all were wondering where I went.
I don't expect you to convince me to stay.
Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me. Not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total view utilitarianism.
To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible
I don't think I do anything in my life to the maximal extent possible
That's a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don't think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they'd be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.
EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions,...
"But I think supporting the continuation of humanity and the socialization of the next generation can be considered a pretty basic part of human life."
Maybe it's a good thing at the margins, but we have more than enough people breeding at this point. There's nothing particularly noble about it, any more than it's noble for an EA to become a sanitation worker. Sure, society would fall apart without sanitation workers, but still...
You're entitled to do what you want with your life, but there's no reason to be smug about it.
This might be alright. See these guidelines though: https://en.wikipedia.org/wiki/Wikipedia:No_original_research#Synthesis_of_published_material
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia?
Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)
I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.
It feels like telling two rival universities to cut their football programs and donate the savings to AMF. "Everyone wins!"
Anyway, two billion dollars isn't that much in the scheme of things. I remember reading somewhere that Americans spend more money on Halloween candy than on politics.
My point was that opiates are extremely pleasurable but I wouldn't want to experience them all the time, even with no consequences. Just sometimes.
"Reducing "existential risk" will of course increase wild animal suffering as well as factory farming, and future equivalents."
Yes, this isn't a novel claim. This is why people who care a lot about wild animal suffering are less likely to work on reducing x risk.
I've had vicodin and china white and sometimes indulge in an oxy. They're quite good, but it hasn't really changed my views on morality. Despite my opiate experience, I'm much less utilitarian than the typical EA.
I agree that points 1 and 2 are unrelated, but I think most people outside EA would agree that a universe of happy bricks is bad. (As I argued in a previous post, it's pretty indistinguishable from a universe of paperclips.) This is one problem that I (and possibly others) have with EA.
I'd be happy if the EA movement became interested in this, just as I'd be happy if the Democratic Party did. But my point was, the label EA means nothing to me. I follow my own views, and it doesn't matter to me what this community thinks of it. Just as you're free to follow your own views, regardless of EA.
Yeah it's confusing because the general description is very vague: do the most good in the world. EAs are often reluctant to be more specific than that. But in practice EAs tend to make arguments from a utilitarian perspective, and the cause areas have been well-defined for a long time: GiveWell recommended charities (typically global health), existential risk (particularly AI), factory farming, and self-improvement (e.g. CFAR). There's nothing terribly wrong with these causes, but I've become interested in violence and poor governance in the developing world. EA just doesn't have much to offer there.
It looks like there might be confounders in the time series because there is a negative "effect" on life satisfaction prior to becoming disabled or unemployed. (With divorce and widowhood it's plausible that some people would see it coming years in advance.)
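A simple way to check for this kind of anticipation or confounding is to compare the outcome in the years just before the event to an earlier baseline; a decline before the event suggests the "effect" isn't caused by the event itself. A minimal sketch with hypothetical data:

```python
from statistics import mean

# Hypothetical life-satisfaction scores by year relative to the event (t=0),
# e.g. becoming disabled or unemployed.
satisfaction = {-4: 7.1, -3: 7.0, -2: 6.6, -1: 6.3, 0: 5.8, 1: 6.0}

baseline = mean(satisfaction[t] for t in (-4, -3))   # well before the event
pre_event = mean(satisfaction[t] for t in (-2, -1))  # just before the event

# A negative difference means satisfaction was already falling
# before the event occurred, hinting at confounding or anticipation.
print(pre_event - baseline)
```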