Thanks for sharing.
I think writing up some of these experiences might be really valuable, both for your own closure and for others to learn from. I can understand, though, that this is a very tough ask in your current position.
That sounds very reasonable. Thanks for the swift reply.
Hi, are PhD students also allowed to submit? I would like to submit a distillation and would be fine with not receiving any money in case I win a prize. In case this complicates things too much, I could understand if you don't want that.
Thanks for the write-up. If you still have the time, could you increase the font sizes of the labels and replace the figures? If not, don't worry, but it's a bit hard to read. It should only take 5 minutes or so.
There is no official place yet. Some people might be working on a project board. See comments in my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board
Until then, I suggest you join the slack I linked in the post and ask if anyone is currently searching. Additionally, if you are at any of the EAGs and other conferences, I recommend asking around.
Until we have something more official, projects will likely only be accessible through these informal channels.
I think this is true for EA orgs but
a) Some people want to contribute within the academic system
b) Even EA orgs can face weird academic legal constraints. I think FHI is currently facing some problems along these lines (low confidence, better ask them).
Thanks for pointing that out. Now updated!
Fair, I'll just remove the first sentence. It's too confusing.
I think most EAs would agree with most of the claims made in the "what neoliberals believe in" post. Furthermore, the topics that are discussed on the neoliberal podcast often align with the broader political beliefs of EAs, e.g. global free trade is good, people should be allowed to make free choices as long as they don't harm others, one should look at science and history to make decisions, large problems should be prioritized, etc.
There is a chance that this is just my EA bubble. Let me know if you have further questions.
Fair point. Just to clarify, my post is mostly about the NEOLIBERAL PROJECT and not about the neoliberal thinkers.
Thanks for posting it here and for your work at OWID!
Do you have any thoughts on how to scale RCTs to larger, messier projects? By now, the EA community has more resources at hand, and the results of small RCTs might not scale to larger interventions.
Have you thought of ways in which RCTs could still be leveraged for large-scale interventions or are they just too hard to make work, e.g. on the policy level?
I have a similar intuition as Stefan. The network effects, governance advantages, etc. seem more important for doing effective good fast than how expensive rent is. I think cheap housing might win out for some orgs, e.g. if you can work mostly remotely, have a very limited budget, and don't require much real-world contact with non-EA institutions. But it feels like this applies to only a small minority of orgs in the status quo.
I think there are multiple reasons:
a) If there is no explicit board people just don't do it because there is no norm and it's work.
b) If you post about your research it might be scooped.
c) People haven't written up the projects in a sharable format.
d) You might not find the right people on such a board?!
I think there are many failure modes for such a board but it seems worth a try at least.
I guess most other fields don't have such a board because their sharing culture isn't very strong and you're incentivized to be secretive and not share in order to achieve personal goals.
Thanks for the comment. I'll add it to the post :)
One of our suggestions was to buy an existing journal as it might be easier than creating a new one. However, we think that there are a lot of reasons why either option might fail since most problems in academia are likely on a deeper level than journals. I guess other interventions are just much more effective. But I could be persuaded if someone presents good ideas addressing our concerns.
In case you drew inspiration from some of our suggestions in the megaprojects article, we would like to retroactively apply.
Then I'd recommend starting to write and asking people you trust for feedback. This is much less scary than publishing to the entire internet.
I also think that communities like the EA forum are more supportive and constructive than average. If it's clear that you mean well, they will usually give you honest and constructive feedback.
I think your English is completely fine. Don't worry too much about it. Most people, including me, aren't native speakers ;)
Now, after the discussion and comments, I tend to agree with your framing.
GMOs just seem to be a waaaay larger topic than I anticipated. It's basically a tool to improve a lot of things. And among the possible applications, it seems plausible that some of them are effective enough to be relevant for EAs.
I think there is room for case-by-case stuff like golden rice but also more general advocacy for deregulation, information, increased innovation, etc.
Why would you doubt them? Do you have any evidence for that? Have other people given you that feedback?
Like I said in the post, it might be easier to start writing with someone more experienced in the beginning.
Overall, I'd like to encourage you to write more for the reasons presented in the posts
I tend to agree, but it seems like a hard problem to fix. Like I described in the post, you have environmental activists, farmers, the general public and politicians against you in most countries. I'm really not sure what the best path to victory is, but I think we should copy successful strategies of the animal welfare movement.
I was especially impressed by Leah Garcés's work on turning adversaries into allies and assume that similar approaches could work for GMOs, e.g. when talking to farmers.
Fully agree, Kevin Esvelt makes a very strong case for this idea in his appearance on Rationally Speaking. I'll further update the text.
I haven't even thought of this angle but it makes a lot of sense (at least naively)! That probably also increases the importance of fighting anti-GMO sentiment in the West, as it is the main market for plant-based meat alternatives atm.
Thankfully, Kat Woods already tagged them on Twitter. Now we just need to hope they use their account ;) I might send them an email if I don't hear back at all.
Thanks. I agree with all of that. My section was supposed to be just one of many examples of the wonders that GMOs can produce. I'll clarify the text to state this more clearly :)
Thanks for all the numbers. I think putting them into plots would make the case even easier to understand, especially when talking to policymakers and other influential people who get a wall of numbers thrown at them every day.
If you currently have little time, just taking the most important stat and putting the respective plot on top of the article gets you quite far already.
I'm not sure Germany is that much of a role model to other countries. I guess the Netherlands and Scandinavian countries might be better suited for that. I think our main message is
a) The new government seems to be more reasonable than past governments from an EA perspective.
b) Given a), Germany could play a larger role in the overall EA sphere since it is pretty important globally and yet there are only very few EA organizations located in Germany or trying to work with the government.
As weird as this sounds, I would hope that is the reason because it would mean Germany acts for understandable reasons.
However, my discussions with other Germans and broader public sentiment suggest to me that Germans are insanely pacifistic. Even things like sending troops to stabilize a region when asked by the respective country are viewed critically by many. Rike Franke (https://twitter.com/RikeFranke), a German IR researcher/pundit, seems to share my belief. Maybe you should check out her Twitter.
a) I share that belief to some extent and was initially very skeptical of influencing any government, especially the German one. However, most of my encounters with EAs in politics updated me towards "influence seems easier than I thought". These are all second-hand experiences but include:
- People working in different German ministries detailing how their EA approaches were welcomed by their colleagues and shaped some parts of the legislation, e.g. on climate change.
- People working in think tanks saying that people in ministries took their ideas much more... (read more)
Wow. That was really insightful.
I can confirm that Philipp is a great supervisor! I also don't plan on chasing the next best thing but want to understand ways to combine Bayesian ML with AI safety/alignment relevant things.
I'll send you an email soon!
I wouldn't read too much into it due to randomness, timing, etc.
But my hunch is that posts are preferred because they provide slightly more value. Rather than having to think of answers yourself or sort through the current answers, you can just skim the headlines.
Thanks for the explanation. I didn't know it was this stratified.
I think movement building is great and support this article entirely. However, I'm not sure about this focus on TOP universities. Maybe this is a German thing, where the difference between universities isn't as large as in other countries, but even then I find it hard to believe that an EA chapter at a top uni is clearly more impactful than one at a mediocre university.
If you have limited resources, I find it fair to prioritize universities in some way, but I'm not sure we can predict this very well. Is there any data on this or has somebod... (read more)
This isn't a full response to this comment and its threads, but just so people are aware, we also
Additionally, if this program is successful, we will likely expand it to more universities over time.
This post was about one part of our group's work, not all of our group's work. You can see a more complete overview here.
I do worry that the focus on "top" universities is creating a stronger national bias among engaged EAs than we would like.
In particular, because the bar to going to university internationally is higher than attending a domestic university, it means there's a stringency bias in our filters for top talent – it's much more difficult for a German or French person to attend one of these top universities than for a Brit or an American, and so CEA has de facto higher requirements for spending money on community building for people with those nationalities.
I'm not... (read more)
For what it's worth, the US higher education system is pretty stratified in terms of intelligence. The best universities are maybe a standard deviation above the 50th best university in SAT scores, and would probably be even higher if the SAT max wasn't 1600; plus, a lot of the most ambitious and potentially successful students go to them. Moreover, top universities generally attract those students from every field; while, for example, UIUC is probably better than most Ivies at CS, the Ivies will still poach a lot of those students largely because of prest... (read more)
Good catch. We agree and updated it to global catastrophe.
What I meant to say with that point is that the tracks never stop, i.e. no matter how crazy an argument seems, there might always be something that seems even crazier. Or, from the perspective of the person exploring the frontiers, there will always be another interesting question to ask that goes further than the previous one.
Makes sense. I changed it. Thanks!
So what would your pitch for skeptics look like? Just ask which assumptions they don't buy, rebut, and iterate?
Thanks for the hint. Fixed it!
Re the daylight lamp: exactly right. They aren't even much more expensive than a normal lamp.
no worries :)
The conditions we discuss are cluster headaches (similar to OPIS), trigeminal neuralgia, and complex regional pain syndrome. We want to emphasize that we are not experts on any of the three, but their victims consistently describe extreme pain, unlike anything they have experienced before.
The reason we estimate that their treatment might be cost-effective comes partly from the intensity of suffering that could be alleviated and mostly from its neglectedness. To our knowledge, few if any people seriously work on them and w... (read more)
We somehow missed your report on pain initially. We have read it now and added a link to it in the post. I really liked it. Completely our mistake for overlooking it.
Unfortunately, we can't really help much with the problem you describe with (3). We agree that it's a big problem and we also found that it's not well understood :(
I agree. It's a very intuitive way to introduce people to EA with something they probably already agree with.
Thank you very much. Unfortunately the source I'm using (Our World in Data) doesn't report YLLs. Sources that report YLLs are so sparse that I couldn't have used them for an overview. I'm also not sure whether the results I'm drawing here are in any way conclusive or whether DALYs are such a bad metric of suffering that I'm just reading tea leaves.
I understood the numbers to only cover farmed fish and no wild fish.
Thanks for the fact about elephants, I didn't know that. A better metric might then be the number of neurons in the cortex. But it would still contain a lot of uncertainty about which regions of the brain are actually causally responsible for suffering and so on.
I think I'm sympathetic to the criticism but I still feel like EA has sufficiently high hurdles to stop the grifters.
a) It's not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group but at some point, you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for everything now is massively overblown. A lot of people who are EA aligned didn't get funding from the FTX foundation, OpenPhil or the LTFF. The in... (read more)
I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that, of 10 EAs, the one most prone to "grifting" would end up with more influence than the rest.
What makes this so difficult is that ... (read more)
This matches my personal experience as well.
Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?