Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power.
I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that, of ten EAs, the one most prone to "grifting" would end up with more influence than the rest.
What makes this so difficult is that ... (read more)
if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people's minds than if I were to spend that time writing on the forum.
I also don't know whether this is true, but the general idea that talking to people in person individually would be more persuasive than over text isn't surprising. There's a lower barrier to ideas flowing, you can better see how the other person is responding, and you don't have to consider how people not in the conversation might misinterpret you.
the longtermist entrepreneurship incubator still seems like a promising project to me, though difficult to execute.
man you just blew my mind, will give it a try next time I feel an urge to play around with GPT!
If the comments include a prediction, my guess is that GPT would often make the same prediction and thus become much more accurate. Not because it learned to predict things, but because there's probably a strong correlation between the community prediction and the most upvoted comment's prediction.
If the goal is to give GPT more context than just the title of the question, then you could include the descriptions for each question as well, but when I tried this I got worse results (fewer legible predictions).
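In case it's useful to anyone trying to reproduce this, here's a rough sketch of the two prompt variants (title only vs. title plus description). The model name, prompt wording, and parsing are illustrative assumptions, not exactly what I ran:

```python
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def predict(title: str, description: str | None = None) -> str:
    """Ask the model for a probability on a forecasting question.

    Passing `description` adds the question's full text as context;
    in my runs this gave fewer legible predictions, not more.
    """
    prompt = f"Question: {title}\n"
    if description:
        prompt += f"Details: {description}\n"
    prompt += "State your probability that this resolves yes, as a percentage."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any instruction-tuned model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```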
Open Philanthropy is not the only grantmaker in the EA space! If you add the FTX Community, FTX Future Fund, EA Funds etc., my guess would be that the overall funding picture recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.
I also want to emphasize that many central EA organisations are increasingly focused on longtermist concerns, and not as transparent about it as I would like them to be. People and organisations should not pretend, for the sake of optics, to care about things they do not. One of EA's most central tenets i... (read more)
If you add the FTX Community, FTX Future Fund, EA Funds etc., my guess would be that the overall funding picture recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.
I think starting in 2022 this will be true in aggregate – as you say largely because of the FTX Future Fund.
However, for EA Funds specifically, it might be worth keeping in mind that the Global Health and Development Fund has been the largest of the four funds by payout amount, and by donations received is about as big as all the other funds combined.
They currently state explicitly on the page I linked that they are not:
Can I apply for funding to the Global Health and Development Fund?
The Global Health and Development Fund is not currently accepting applications for funding.
If that is not the case, I'm not too happy with their communication!
EDIT: whoops, didn't see Lorenzo's comment
The page linked in my comment states that they are not currently accepting unsolicited proposals, but I agree the FAQ makes it sound like they are open to being contacted. My guess is there probably isn't a clear-cut policy and that they just want to avoid setting an expectation that they will evaluate everything sent their way.
Will send them a message, thank you :)
When I first learned about the diagnostics startup, my immediate thought was that some EA fund would be interested in further evaluating it. Unfortunately, none of Open Philanthropy, EA Funds, or the FTX Community are currently accepting unsolicited proposals.
The primary reason I wrote this post was to get the attention of fund managers, and hopefully get someone to figure out whether this is impactful and fund it if it is.
I wondered about this as well. There's no doubt that it would reduce snakebites, but whether it's cost-effective is more difficult to tell.
An analyst I spoke to pointed out that, after all, it's still pretty rare to be bitten by a snake. The amount of footwear you'd need to distribute per snakebite prevented is pretty high, and likely pretty expensive.
Most purchases I would, on reflection, prefer not to make are ones where what I receive is worth much more than nothing but still less than the asking price, so I would never actually feel compelled to throw out the superfluous stuff I buy.
Many times the purchase would even be worth more than the asking price, but I would like my preferences to change so that this is no longer the case.
If a bhikkhu monk can be content owning next to nothing, surely I can be happy owning less than I currently do. The question is how to change my preferences to become more like those of the monk.
Does anyone have advice on getting rid of material desire?
Unlike many people I admire, I seem to have a much larger desire to buy stuff I don't need. For example, I currently feel an overpowering urge to spend $100 on a go board, despite having little need for one.
I'm not arguing that I have some duty to live frugally due to EA; I just would prefer to be a version of myself that doesn't feel the need to spend money on so much stupid stuff.
good point!
Thanks for this, especially for your point on hedging! If you want to convey your uncertainty, there is no shame in saying "I am not sure about this" before making your claim.
On the topic of good forum writing, a few additional things I try to keep in mind when I write:
It would be great if that same space had the ability to gauge interest and let people/organisations post bounties for projects they would like to see done.
E.g. someone from FHI posts a request for a deep-dive on topic X and provides a bounty for whoever does it sufficiently well first. Someone from CSER realizes they would also like to know the answer and adds to the bounty as well. Upvotes instead of bounties could be another way to figure out which projects would be valuable to get done.
tags.tag_types causing you trouble is likely a Python namespace issue (see the sketch after the notebook link below).
Anyways, I put all of the code into a notebook to make it easier to reproduce. I hope this is close to what you had in mind. Haven't used these things much myself.
https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb
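To expand on the namespace point: this is only my guess at the failure mode, and the names are made up:

```python
import pandas as pd

# Module-level object whose `tag_types` column the analysis reads.
tags = pd.DataFrame({"tag_types": ["longtermism", "global health"]})


def plot_counts(tags):
    # Inside the function, the parameter `tags` shadows the module-level
    # DataFrame, so passing in anything without a `tag_types` column makes
    # `tags.tag_types` raise an AttributeError even though the global is fine.
    return tags.tag_types.value_counts()


print(plot_counts(tags))
```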
My guess was that an external document would reduce readership too much to be worth it. Nevertheless, here is a notebook with this post's content and the code:
https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb
Great question, I took the categories from here:
https://forum.effectivealtruism.org/tags/all
I have just gone off the assumption that whoever categorised the tags on this page made a good judgement call. I agree completely that longtermist stuff in particular might look like a smaller fraction than it actually is, due to it being split across multiple categories. That said, there are posts which fit under multiple longtermist categories, which you'd have to make sure are not double-counted (see the sketch below).
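A minimal sketch of the deduplication I have in mind, counting each post at most once however many longtermist tags it carries (tag names and post IDs are made up):

```python
# Hypothetical data: each post mapped to the set of tags it carries.
LONGTERMIST_TAGS = {"existential-risk", "ai-safety", "biosecurity"}

post_tags = {
    "post-1": {"ai-safety", "existential-risk"},  # two longtermist tags
    "post-2": {"global-health-and-development"},
}

# Summing tag hits counts post-1 twice; counting posts whose tag set
# intersects the longtermist categories counts it once.
naive = sum(len(tags & LONGTERMIST_TAGS) for tags in post_tags.values())
deduped = sum(bool(tags & LONGTERMIST_TAGS) for tags in post_tags.values())
print(naive, deduped)  # 2 1
```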
Thanks for the feedback, will put the code into a notebook when I have time tomorrow; it shouldn't take long.
Thanks for the positive feedback! As far as I know there isn't a way to embed plots in the EA Forum, is there something I missed?
I think that is more or less what I'm trying to say!
Think of security at a company. Asking a colleague to show their badge before you let them into the building can be seen as rude, but enforcing this principle is also incredibly important for keeping your premises secure. So, many companies have attempted to develop a culture where this is not seen as a rude thing to do, but rather as a collective effort to keep the company secure.
Similarly, I think it would be positive if we developed some sort of way to say "hey, this smells fishy" without it being viewed as a direct attack, but rather as someone participating in the collective effort to catch fraud.
I wouldn't worry about it; it's nothing about your writing in particular, and not something that caused me any real distress! I think the topic of catching fraud is inherently prone to causing imposter syndrome if you often go around feeling like a fraud. You get that vague sense of 'oh no, they finally caught me' when you see a post on the topic, specifically on the EA Forum.
A central problem is that accusing something of being fraudulent carries an immense cost, as it's hard to perceive as anything but a direct attack. Whoever committed the fraud has every incentive to shut you down and very little to lose, which gets very nasty very quickly.
Ideally there would be a low-commitment way to accuse someone of fraud that avoids this. Normalising something akin to "This smells fishy to me", and encouraging a culture of not taking it too personally whenever the hunch turns out wrong, might be a first step towards a culture where fraud is caught more quickly.
as a side note, maaaan did this post trigger a strong feeling of imposter syndrome in me!
Great post! I don't have a fully formed view of the consequences of EA's increasing focus on longtermism, but I do think it is important that we notice and discuss these trends.
I actually spent some of my last Saturday categorising all EA Forum posts by their cause area[1], and am planning on spending next Saturday making a few graphs of any trends I can spot on the forum.
The reason I wanted to do this is exactly because I had an inclination that global poverty posts were getting comparatively less engagement than they used to, and was wondering whether ... (read more)
I completely agree with this, actually. I think concerns over the unilateralist's curse are a great argument in favour of keeping funding central, at least in many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.
But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.
I think the unilateralist's curse can be avoided if we make sure our experiments with other types of grantmaking steer clear of hazardous funding domains.
Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening pr... (read more)
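For readers unfamiliar with the mechanism, here is a minimal sketch of the standard quadratic funding match; the screening step described above would simply filter the project list before any matching happens. Numbers are purely illustrative:

```python
from math import sqrt


def quadratic_match(contributions: list[float]) -> float:
    """Matching amount under standard quadratic funding:
    (sum of square roots of contributions)^2 minus the contributions."""
    return sum(sqrt(c) for c in contributions) ** 2 - sum(contributions)


# Many small donors attract a far larger match than one big donor giving
# the same total -- which is the point of the mechanism.
print(quadratic_match([1.0] * 100))  # 100 donors x $1  -> $9,900 match
print(quadratic_match([100.0]))      # 1 donor  x $100 -> $0 match
```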
This is the high-impact opportunity I've been looking for my entire life! I've sold off all my stocks, my house and everything else I own, to maximize my donations to this project.
Looking forward to it! Will it be on Audible?
Yes, it will once launched! (Will is doing the audiobook)
I don't think the EA community is uniquely suited to answer your question. Whether this is a great startup idea or not is difficult for me to figure out and I think speaking to people in the startup and venture-capital community will get you better answers.
I think your counterfactual impact will likely be much higher if you do direct work, such as starting a new EA charity or an effective non-profit, than it would be by earning to give. Consider spending some time figuring out exactly what direct work would imply. 80K might be willing to provide advice on that as well.... (read more)
Thank you for asking this question on the forum!
It has been somewhat frustrating to follow you on Facebook and see all these great people you were about to interview, without being able to contribute anything.
Genius idea to red-team (through comments that can provide thoughtful input) the red-team contest itself!
It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof butterflies not only don’t fly, but can’t fly, look how busted their wings are.
Beautiful analogy, really made the tragedy feel all too real!
Will add 'butterfly idea' to my vocabulary, I hope others do the same.
What exactly would the postdoc be about? Are you and others reasonably confident your research agenda would contribute to the field?
Thanks for this post! It was great to read and learn about a topic about which I know nothing.
My primary reservation, which I'd be curious to get your thoughts on, is something like this:
It seems to me that, in the abstract, there is a finite amount of space available for humans on the planet. Whether that space is taken up by me or some other human being is not too important to me. Similarly to life-extension research, it seems to me that brain preservation is spending resources so that people who currently occupy the planet will do so for longer, at the expe... (read more)
Can you comment on why you chose not to release the quantitative model and calculations that you used to derive these conclusions? As detailed as this work is, I don't feel comfortable updating my views based on calculations I can't see for myself.
As you point out in the post, I imagine there is huge variability based on various guesstimates (as there should be!). To me at least, the most valuable part of this work lies in the model and the better understanding we get from attempting to make models, rather than the conclusions of the model.
Mathias, I'm happy to share the full spreadsheet with you or anyone else on request -- just PM me for the link. In addition, anyone can see the basic structure of the model as well as two examples of working through it in our previous piece introducing the framework we used.
Making the model public opens up the possibility of it being shared without the context offered in this article, and I'm hesitant to do that before we have an opportunity to document it much more fully. My hope is that we'll be able to do that for the next iteration.
EDIT: I've decided t... (read more)
It doesn't feel contradictory to me, but I think I see where you're coming from. I hold the following two beliefs, which may seem contradictory:
1. Many of the aforementioned blindspots seem like nonsense, and I would be surprised if extensive research into any of them would produce much of value.
2. At large, people should form and act on their own beliefs rather than deferring to what is accepted by some authority.
There's an endless number of things which could turn out to be important. All else equal, EAs should prioritise researching the things which seem the mos... (read more)
To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.
I get a sense of déjà vu reading this criticism, as I feel I've seen sixteen variants of it over the years: EA has psychological-problem-this, deep-Nietzschean-struggle-that, and fails to value <author's pet interest>.
If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it an... (read more)
To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.
...
If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned... I would even wager that if someone wrote a convincing case for why we should be 'taking dharma seriously', then many would start taking it seriously.
These two bits seem fairly c... (read more)
It would be extremely surprising if all of them were being given the correct amount of attention. (For a start, #10 is vanilla and highly plausible, and while I've heard it before, I've never given it proper attention. #5 should worry us a lot.) Even highly liquid markets don't manage to price everything right all the time, when it comes to weird things.
What would the source of EA's perfect efficiency be? The grantmakers (who openly say that they have a sorta tenuous grasp on impact even in concrete domains)? The perfectly independent reasoning of each EA,... (read more)
‘Many’ was carefully chosen not to imply more than half - perhaps very substantially less, depending on the metric. I started this analysis by assuming a base rate of less than half, but then updated my view based on granular analysis of the proposed Bill language and what we have been able to learn about the theory of change.
Then it seems to me that it would have been just as accurate to say: 'many efforts to improve policy in the last 50 years have succeeded' and conclude the opposite.
I think I disagree with the core premise that animal welfare is the odd one out. That animals have moral worth is a much smaller buy than the beliefs needed to accept longtermism.
For reference, I think the strongest case for longtermism comes when you accept claims that humanity has a non-zero chance of colonising the universe with digital human beings. It makes perfect sense to me that someone would accept that animals have high moral worth, but not the far-future stuff.
I don't think a justice explanation predicts why EAs care about animals better than the object-level arguments for caring about them do.
A bit tangential, but the video introduction to effective altruism for Christians is really fantastic!
I'm curious to hear how you came to settle on the prize structure of the contest.
For example, why few but large prizes, as opposed to a pay-per-post model?
What thought have you given to the opportunity cost for the people you encourage to write blogs?
Happy you finally settled on a name ;)
Setting aside whether it would work, I don't think this is a very ethical thing to do. It is fundamentally an attempt at deceit, which to me is the antithesis of what EA is all about.
edit: I think I misunderstood the idea. I read it as an attempt at hijacking a journal to use it as a platform to publish EA research. If it's just buying a journal and placing a higher emphasis on impactful research, I take back my original comment. That said, I think there's a very fine line between the former and the latter.
Interestingly, the EU, China, and the US are all taking large steps to become less reliant on each other for semiconductor production. My guess is that, however strong this argument is currently, it will be less so in the future.
I wonder to what extent governments will be successful in this effort. Are semiconductor companies easy to replicate, or is it a field where we should expect the current leaders, such as ASML (Dutch), to keep their lead even after governments pour in money?
Flipping the roles of animals and humans didn't feel particularly clever to me. Who is going to be convinced by this video who isn't already convinced?
It also focuses entirely on the suffering of wild animals at the hands of humans, which to my knowledge pales in comparison to what we do to farm animals.
I personally find Earthlings to be the best video on speciesism. I went from eating meat to never wanting to touch meat again in the span of an hour.
To learn illustrator, I created a few posters for EA Denmark:
Global poverty: https://i.imgur.com/SMEUmUE.jpg
Animal welfare: https://i.imgur.com/9aXYVNw.jpg
Existential risk: https://i.imgur.com/6sLdojT.jpg
Longtermism: https://i.imgur.com/PS4Ap8J.jpg
I didn't pay to acquire the rights for the assets I borrowed, so they just hang in my room :)
I completely agree that the classification of trajectories should be much more nuanced. I don't think a fast take-off implies these things either. The reason I bundle them together is to create two very distinct scenarios, making for a simpler analysis.
A more thorough analysis would separate these dynamics and analyse all the possible combinations. It would also be better to evaluate the EU's institutions separately and analyse more than a single lever of influence.
Makes sense, I agree with that sentiment.
Let me know if I misunderstood something or am reading your post uncharitably, but to me this really looks like an attempt at hiding away opinions perceived as harmful. I find this line of thinking extremely worrying.
EA should never attempt to hide criticism of itself. I am very much a longtermist and did not think highly of Torres' article, but if people read it and think poorly of longtermism, then that's fine.
Thinking that hiding criticism can be justifiable because of the enormous stakes is the exact logic Torres is criticising in the first place!
Framing my proposal as "hiding criticism" is perhaps unduly emotive here. I think that it makes sense to be careful and purposive about what types of content you broadcast to a wider audience which is unlikely to do further research or read particularly critically. I agree with Aaron's comment further down the page where he says that the effect of Torres's piece is to make people feel "icky" about longtermism. Therefore to achieve the ends which I take as implicit in evelynciara's comment (counteract some of the effects of Torres's article and produce a pi... (read more)
Based on your description of the documentary, I wonder to what extent Gates' explanations reflect his actual reasoning. He seems very cautious and filtered, and I doubt an explanation of a boring cost-benefit analysis would make for a good documentary.
Not that I think there necessarily was a good cost-benefit analysis, just that I wouldn't conclude much either way from the documentary.
My own opinion is that it is a double-edged sword
The Council's change on its own weakens the act, and will allow companies to avoid conformity assessments for exactly the AI systems that need them the most.
But the new article also makes it possible to impose requirements that solely affect general-purpose systems, without burdening the development of all other low-risk AI with unnecessary requirements.
Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.
Since writing this article, this is actually one of the things I've been looking into! I think it looks very promising, as many of the issues outlined by the WHO seem downstream from people simply being unable to afford high-quality antivenom. (E.g. why do people choose local healers? Because hospitals cost more and don't help either!)
It also looks like the marginal cost of high-quality antivenom would decrease by up to an order of magnitude if you scale up production. I have yet to take an in-depth look at synthetic antivenom production, but after briefly looking into it, it seems we are not going to get synthetic antivenom just yet.