This is a little old, but it's a similar concept with a far higher level of investment: https://www.lesswrong.com/posts/GfRKvER8PWcMj6bbM/sidekick-matchmaking
It has been about 3 years, and only very specific talent still matters for EA now. Earning to Give to institutions is gone; only giving to individuals still makes sense.
It is possible that there will be full-scale replaceability of non-researchers in EA-related fields by 2020.
But only if, until then, we keep doing things!
Amanda Askell has interesting thoughts suggesting that "care" can carry a counterfactual meaning. She suggests we think of care as what you would have cared about if you were in a context where this was a thing you could potentially change. In a way, the distinction is between people who think about "care" in terms of rank ("oh, that isn't the thing I most care about") and those who think about it in terms of absolutes ("oh, I think the moral value of this is positive"), further complicated by the fact that some people are thin...
They need not imply, but I would like a framework where they do under ideal circumstances. In that framework - which I paraphrase from Lewis - if I know a certain moral fact, e.g. that something is one of my fundamental values, then I will value it (this wouldn't obtain if you were a hypocrite; in that case it wouldn't be knowledge).
I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.
...I’m unsure I got your notat
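For what it's worth, here is a minimal sketch of how I'm reading that notation, assuming "convolution" is meant as function composition; the symbols M (the moral function), P (my prudential function), and A (my set of potential actions) are my own labels, not necessarily yours:

% Sketch only, under the assumptions above; "convolution" read as composition.
\[
\text{``I should } X \text{''} \iff M(A) = X
\]
\[
\text{``I think I should } X \text{''} \iff (P \circ M)(A) = X
\]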
I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism is right, and "just a reason" if it isn't. But if not, what is your reason for doing it?
My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but...
I really appreciate your point about intersubjective tractability. It raises the question of how much we should let empirical and practical considerations spill into our moral preferences (ought implies can, for example; does it also imply "in a way that is not extremely hard to coordinate"?).
At large I'd say that you are talking about how to be an agenty moral agent. I'm not sure morality requires being agenty, but it certainly benefits from it.
Bias dedication intensity: I meant something orthogonal to optimality. Dedicating only to moral preferences, bu...
Suggestion: Let people talk about any accomplishments, without special emphasis on the month level, or the name of the month.
Some of the moments when people most need to brag are when they need to recover a sense of identity with a self that is more than a month old, that did awesome stuff.
Example: Once upon a time 12 years ago I thought the most good I could do was fixing aging, so I found Aubrey, worked for them for a bit, and won a prize!
A thing I'm proud of is that a few days ago I gave an impromptu speech at Sproul Hall (where the Free Speech Movement started) at Berkeley, about technological improvement and EA, and several people came up afterwards to thank me for it.
Agreed with the first two paragraphs.
Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I'm taking the human-compatible AI course at Berkeley with Stuart Russell, and I hang out at MIRI a lot, so in theory I'm in a good position to do that research, and some of the time I work on it. But I don't work on it all the time; I would if I got funding for our proposal.
But actually I was referring to a counterfactual wor...
Telofy: Trying to figure out the direction of the inferential gap here. Let me try to explain, I don't promise to succeed.
Aggregative consequentialist utilitarianism holds that people in general should value most minds having the times of their lives, where "in general" here actually translates into a "should" operator, a moral operator. There's a distinction between me wanting X and morality suggesting, requiring, or demanding X. Even if X is the same, different things can stand in relation to it.
At the moment I both hold a personal p...
It seems from your comment that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don't want to do what you want to do, you want to do what you ought to do.
I don't experience that feeling, so let me reply to your questions:
Wouldn't virtue ethics winning be contradicted by your pulling the lever?
Not really: pulling the lever is what I would do, and it is what I would think I have reason to do, but it is not what I would think I have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn't ...
I'm not claiming this is optimal, but I might be claiming that what I'm about to say is closer to optimal than anything else that 98% of EAs are actually doing.
There are a couple thousand billionaires on the planet. There are also about as many EAs.
Let's say 500 billionaires are EA-friendly under some set of conditions. Then it may well be that the best use of the top 500 EAs is to meticulously study individual billionaires, one each. Understand their values, where they come from, what makes them tick. Draw their CT-chart, find out their attachment style, persona...
As Luke and Nate would tell you, the shift from researcher to CEO is a hard one to make, even when you want to do good. As Hanson puts it, "Yes, thinking is indeed more fun."
I have directed an institute in Brazil before, and that was already somewhat of a burden.
The main reason for the high variance, though, is that setting up an institute requires substantial funding. The people most likely to fundraise would be me, Stephen Frey (who is not on the website), and Daniel, and fundraising is taxing in many ways. It would be great if we had, for instance, the...
1) Convergence Analysis: The idea here is to create a Berkeley-affiliated research institute that operates mainly on two fronts: 1) strategy on the long-term future, and 2) finding crucial considerations that have not been considered or researched yet. We have an interesting group of academics, and I would take a mixed position of CEO and researcher.
2) Altruism: past, present, propagation: this is a book whose table of contents I have already written; it would need further research and the spelling out of each of the 250 sections I have in mind. It is very different in nature ...
Ok, so this doubles as an open thread?
I would like some light from the EA hivemind. For a while now I have been mostly undecided about what to do with my 2016-2017 period.
Roxanne and I even created a spreadsheet in mid-2015 so I could evaluate my potential projects and drop most of them. My goals are basically an oscillating mixture of:
1) Making the world better by the most effective means possible.
2) Continuing to live in Berkeley.
3) Receiving more funding.
4) Not stopping my PhD.
5) Using my knowledge and background to do (1).
This has proven an extremely hard decision t...
That sounds about right :)
I like your sincerity. The verbosity is something I actually like, and it was quite praised in the human sciences I was raised in; I don't aim for a condensed-information writing style. The narcissism I dislike and have tried to fix before, but it's hard: it's a mix of a rigid personality trait with a discomfort from having been in the EA movement since long before it was an actual thing, having spent many years giving time, resources, and attention, and seeing new EAs who don't have knowledge or competence being rewarded (especially financia...
My experience on LessWrong indicates that, though well intentioned, this would be a terrible policy. The best predictor on LessWrong of whether texts of mine would be upvoted or downvoted was whether someone, in particular the user Shminux, would give reasons for their downvote.
There is nothing I dislike or fear more, when I write on LessWrong, than Shminux giving reasons why he's downvoting this time.
Don't get me wrong, write a whole dissertation about what in the content is wrong, or bad, or unformatted, do anything else, but don't say, for instance "Downvoted...
I will write that post once I am financially secure with some institutional attachment. I think it is too important for me to write while I expect to receive funding as an individual, and I don't want people to think "he's saying that because he is not financed by an institution." Also see this.
I think we are falling prey to the illusion of transparency (https://en.wikipedia.org/wiki/Illusion_of_transparency) and the double illusion of transparency (http://lesswrong.com/lw/ki/double_illusion_of_transparency/), and that there are large inferential gaps in our conversation in both directions.
We could try to close the gaps by writing to one another here, but then both of us would sometimes end up taking a defensive stance, which could hinder the discussion's progress. My suggestion is that we do one of these:
1) We talk via Skype or Hangouts to understand each other's mind. 2...
I'll bite: 1) Transhumanism: the evidence is for the paucity of our knowledge. 2) Status: people are being valued not for the expected value they produce, but for the position they occupy. 3) Analogy: jargon from Musk, meaning copying and tweaking someone else's idea instead of thinking of a rocket, for instance, from the ground up; follow the chef and cook link. 4) Detonator: the key phrase was "cling to"; they stick with the one they had to begin with, demonstrating a lack of malleability. 5) Size: the size gives reason to doubt the value of action because to ...
Arguing for cryonics as EA seems like bottom-line reasoning to me.
I can imagine exceptions. For instance: 1) Mr. E.A. is an effective altruist who is super productive, gets the most enjoyment out of working, and rests by working even more. Expecting with high likelihood to become an emulation, Mr. E.A. opted for cryopreservation to give himself a chance of becoming an emulation coalition that would control large fractions of the em economy and use those resources for the EA cause on which society has settled after long and careful thought.
2) Rey Cortzvai...
Thumbs up for this. Creating a labor market for people who are willing to work for causes seems high value to me.
A few years ago, before I had spent most of my money, and while Brazil was doing well, I didn't care about money, and as usual I was working and paying for my own work out of pocket.
If it had been an option then, I would have wanted to work on far-future EA and hedge my bets by asking other people to donate to near-future causes on behalf of the work I was doing. I currently lean much more strongly towards the far future, though, so most of my eggs are in that basket. ...
Off the top of my head and without consulting people:
Justin Shovelain, Oliver Habryka, Malcolm Ocean, Roxanne Heston, Miranda Dixon-Luinenburg, Steve Rayhawk, Gustavo Rosa, Stephen Frey, Gustavo Bicalho, Steven Kaas, Bastien Stern, Anne Wissemann, and many others.
If they had not received FLI funding: Kaj Sotala, Katja Grace.
If they needed to transition between institutions/countries: Most of the core EA community.
I have mentioned it as an option for a while, but personally waited until there was less conflict of interest to actually post about it (at the moment...
If you are eager to see the other posts in the series and would like to help by commenting, feel free to comment on this Google Doc, which contains all the posts in the series.
The posts are already finished; still, I highly encourage other EAs to create more posts in that document or suggest changes. I'm not an economist; I was struck by this idea while writing my book on Altruism, and have already spent many hours learning more economics to develop it. The goal is to have actual economists carry this on to distances I cannot.
The question I would ask then is: if you want to influence larger organizations, why not governmental organizations, which have the largest quantities of resources that can be flipped by one individual? If you get a technical position in a public-policy-related organization, you may be responsible for substantial changes in the allocation of resources.
At the end of the day, the metric will always be the same. If you can make the entire Red Cross more effective, it may be that each unit of your effort was worth it. But if you anticipate more and more donations going to EA-recommended charities, then making them even more effective may be more powerful.
See also DavidNash's comment.
Except for the purposes of obtaining more epistemic information later on, the general agreement within the EA crowd is that one should invest the vast majority of one's eggs in one basket: the best basket.
I just want to point out that the exact same thing is the case here: if someone wants to make a charity more effective, choosing Oxfam or the Red Cross would be a terrible idea, but trying to make AMF, FHI, SCI, etc. more effective would be a great idea.
Effective altruism is a winner-takes-all kind of thing, where the goal is to make the best better, not to make anyone else as good as the best.
This piece is a simplified version of an academic article Joao Fabiano and I are writing on the future of evolutionary forces, similar in spirit to this one. It will also be the basis of one of the early chapters of my book Altruism: past, present, propagation. We welcome criticism and suggestions of other forces/constraints/conventions that may be operating to interfere with or accelerate the long-term evolution of coalitions, cooperation, and global altruistic coordination.
1) I see a trend in the way new EAs concerned about the far future think about where to donate money that seems dangerous. It goes:
I am an EA and care about impactfulness and neglectedness -> Existential risk dominates my considerations -> AI is the most important risk -> Donate to MIRI.
The last step frequently involves very little thought; it borders on a cached thought.
How would you be conceiving of donating your X-risk money at the moment if MIRI did not exist? Which other researchers or organizations should be scrutinized by donors who are concerned about X-risk and persuaded about AI?
1) What are the implicit assumptions within MIRI's research agenda, the things where "currently we have absolutely no idea how to do that, but we are taking this assumption for the time being, and hoping that in the future either a more practical version of this idea will be feasible, or that this version will be a guiding star for practical implementations"?
I mean things like:
UDT assumes it's ok for an agent to have a policy ranging over all possible environments and environment histories
The notion of agent used by MIRI assumes to some ex
Some additional related points:
1) Joao Fabiano recently looked into acceptance rates for papers in the top 5 philosophy journals. It seems that 3-5% is a reasonable range. It is very hard to publish philosophy papers; it seems to be slightly harder to publish in the top philosophy journals than in Nature, Behavioral and Brain Sciences, or Science magazine, and this is after the filter of 6 positions available for 300 candidates that selects PhD candidates in philosophy (harder than Harvard medicine or economics).
2) Bostrom arguably became very ...
More important than my field not being philosophy anymore (though I have two degrees in philosophy and identify as a philosopher), the question you could have asked is: why would you want a philosophical audience to begin with? It seems to me there is more low-hanging fruit in nearly any other area in terms of people who could become EAs. Philosophers have an easier time making that transition, but attracting the top people in econ, literature, visual arts, and other areas who may enjoy reading the occasional popular science book is much less replaceable.
I've left the field of philosophy (where I was mostly so I could research what seemed interesting rather than what the university wanted; as Chalmers puts it, "studying the philosophy of x", where x is whatever interests me at any time) and am now in biological anthropology. From my many years researching the topic, it seems that becoming a professor in non-philosophy fields is much easier than in philosophy. Also, switching fields between undergrad and grad school is easy, in case someone reading this does not know.
Biological anthropology, with an adviser whose latest book is in philosophy of mind, whose next book is on information theory, whose previous book was on (of all things) biological anthropology, and most of whose career was as a semiotician and neuroscientist. My previous adviser was a physicist working in the philosophy of physics who turned into a philosopher of mind. My main sources of inspiration are Bostrom and Russell, who defy field borders. So I'm basically studying whatever you convince me makes sense at the intersection of interestingly complex and useful for the world. Except for math, code, and decision theory, which are not my comparative advantage, especially not among EAs.
In this question I am not considering what Bostrom/Grace/Besinger do to be philosophy stricto sensu.
After replaceability considerations have been worked through in Ben Todd's and Will MacAskill's theses at Oxford, and Nick Beckstead has made the philosophical case for the far future, is there still large marginal return to be had from doing research on something that is philosophy stricto sensu?
I ask this because my impression is that after Parfit, Singer, Unger, Ord, MacAskill, and Todd, we have run out of efforts that have great consequential impacts on philosophical discour...