All of mariushobbhahn's Comments + Replies

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

I'm sympathetic to the criticism, but I still feel like EA has sufficiently high hurdles to stop the grifters.
a) It's not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group, but at some point you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for everything now is massively overblown. A lot of people who are EA-aligned didn't get funding from the FTX Foundation, Open Phil, or the LTFF. The in... (read more)

Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power.


I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that, of 10 EAs, the one most prone to "grifting" would end up with more influence than the rest.

What makes this so difficult is that ... (read more)

This matches my personal experience as well.

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

How many EAs failed in high risk, high reward projects?

Thanks for sharing. 
I think writing up some of these experiences might be really really valuable, both for your own closure and for others to learn.  I can understand, though, that this is a very tough ask in your current position. 

Calling for Student Submissions: AI Safety Distillation Contest

That sounds very reasonable. Thanks for the swift reply.

Calling for Student Submissions: AI Safety Distillation Contest

Hi, are PhD students also allowed to submit? I would like to submit a distillation and would be fine with not receiving any money in case I win a prize. In case this complicates things too much, I could understand if you don't want that. 

Aris Richardson · 1mo
Hi! I’ve been thinking about this a bit more and I do think I want graduate students to be able to submit! However, since the main audience is meant to be undergraduate students, I may have to be harsher in evaluation or, more excitingly, maybe I could create a new tier for graduate students? For now I’d say feel free to submit and I’ll work out more specifics on my end and make an edit (+ reply to this) if I make official changes!
EA Forum's interest in cause-areas over time and other statistics

Thanks for the write-up. If you still have the time, could you increase the font sizes of the labels and replace the figures? If not, don't worry, but they're a bit hard to read at the moment. It should only take 5 minutes or so.

AI safety starter pack

There is no official place yet. Some people might be working on a project board. See comments in my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board

Until then, I suggest you join the Slack I linked in the post and ask if anyone is currently searching. Additionally, if you are at any of the EAGs or other conferences, I recommend asking around.

Until we have something more official, projects will likely only be accessible through these informal channels. 

Where would we set up the next EA hubs?

I think this is true for EA orgs but 
a) Some people want to contribute within the academic system
b) Even EA orgs can be constrained by weird academic legal constraints. I think FHI is currently facing some problems along these lines (low confidence, better ask them). 

EA should learn from the Neoliberal movement

Fair, I'll just remove the first sentence. It's too confusing. 

EA should learn from the Neoliberal movement

I think most EAs would agree with most of the claims made in the "what neoliberals believe in" post. Furthermore, the topics that are discussed on the neoliberal podcast often align with the broader political beliefs of EAs, e.g. global free trade is good, people should be allowed to make free choices as long as they don't harm others, one should look at science and history to make decisions, large problems should be prioritized, etc. 

There is a chance that this is just my EA bubble. Let me know if you have further questions. 

EA should learn from the Neoliberal movement

Fair point. Just to clarify, my post is mostly about the Neoliberal Project and not about the neoliberal thinkers.

Stefan_Schubert · 2mo
Well, but you start your post with the reference to Vaughan's post about the rise of the neoliberals, and they were not part of the modern neoliberal project (which is much more recent). That may lead people to interpret you as talking about something broader than the modern neoliberal project in particular.
Why randomized controlled trials matter

Thanks for posting it here and for your work at OWID!

Do you have any thoughts on how to scale RCTs to larger, messier projects? By now, the EA community has more resources at its disposal, and the results of small RCTs might not scale to larger interventions.
Have you thought of ways in which RCTs could still be leveraged for large-scale interventions, or are they just too hard to make work, e.g. at the policy level?

salonium · 2mo
Hey Marius, thank you! I wish I could answer this better, but I don't know enough to have a good answer on how to scale policy RCTs, especially since they're quite different from clinical RCTs (they often can't administer the treatment in a standardised way, there's usually no way to blind participants to what they're receiving, they usually don't track/measure participants as regularly, etc.). Those are also factors that make them messier in larger projects.

I've read this blog post by Michael Clemens, which I found was a useful summary of two books on the topic: https://cgdev.org/blog/scaling-programs-effectively-two-new-books-potential-pitfalls-and-tools-avoid-them

But I think there are often situations where they can be leveraged for large-scale interventions. A good recent example is this experiment on street lighting and its effect in reducing crime: https://twitter.com/AaronChalfin/status/1504487487770558467 Some features of the policy make it easier to study at scale: crime data exists at the right scale (you don't need to track individual participants to find out about crime rates), street lighting is easy to standardise, and you can measure the effects at the level of neighbourhood clusters rather than at the level of individuals.

So maybe that's a good way of thinking about how to scale up RCTs - to find treatments and outcomes that are easier to implement and measure at a large scale.
Where would we set up the next EA hubs?

I have a similar intuition to Stefan. The network effects, governance advantages, etc. seem more important for doing effective good fast than how expensive rent is. I think cheap housing might win out for some orgs, e.g. if you can work mostly remotely, have a very limited budget, and don't require much real-world contact with non-EA institutions. But it feels like this applies to only a small minority of orgs in the status quo.

There should be an AI safety project board

I think there are multiple reasons:
a) If there is no explicit board people just don't do it because there is no norm and it's work.
b) If you post about your research it might be scooped.
c)  People haven't written up the projects in a sharable format.
d) You might not find the right people on such a board?!

I think there are many failure modes for such a board but it seems worth a try at least. 

I guess most other fields don't have such a board because the sharing culture isn't very strong and you're incentivized to keep things secret to achieve personal goals.

Where would we set up the next EA hubs?

Thanks for the comment. I'll add it to the post :) 

EA megaprojects continued

One of our suggestions was to buy an existing journal as it might be easier than creating a new one. However, we think that there are a lot of reasons why either option might fail since most problems in academia are likely on a deeper level than journals. I guess other interventions are just much more effective. But I could be persuaded if someone presents good ideas addressing our concerns. 

The Future Fund’s Project Ideas Competition

In case you drew inspiration from some of our suggestions in the megaprojects article, we would like to retroactively apply. 

How to write better blog posts

Then I'd recommend you start writing and ask people you trust for feedback. This is much less scary than publishing to the entire internet.

I also think that communities like the EA Forum are more supportive and constructive than average. If it's clear that you mean well, they will usually give you honest and constructive feedback.

I think your English is completely fine. Don't worry too much about it. Most people, including me, aren't native speakers ;)

Jean M Park · 3mo
Thanks for your reply! Your attentive attitude motivates me to work on myself and my writing issue ))
Should GMOs (e.g. golden rice) be a cause area?

Now, after the discussion and comments, I tend to agree with your framing.

GMOs just seem to be a waaaay larger topic than I anticipated. They're basically a tool to improve a lot of things. And among the possible applications, it seems plausible that some are effective enough to be relevant for EAs.

I think there is room for case-by-case stuff like golden rice but also more general advocacy for deregulation, information, increased innovation, etc.

How to write better blog posts

Why would you doubt them? Do you have any evidence for that? Have other people given you that feedback? 

Like I said in the post, it might be easier to start writing with someone more experienced in the beginning.

Overall, I'd like to encourage you to write more, for the reasons presented in the post.

Jean M Park · 4mo
Thanks for your reply! I believe that the lack of confidence in my ability to write goes back to school. I had a good but strict teacher in literature and language. I once wrote an essay expressing my own thoughts on the topic with the utmost honesty. Well, she gave me a low mark. After that incident, all my writings became devoid of any creative component, I wrote what was needed to get a high grade, and not what I thought. PS I'm not a native speaker, so please excuse my broken English
Should GMOs (e.g. golden rice) be a cause area?

I tend to agree, but it seems like a hard problem to fix. Like I described in the post, you have environmental activists, farmers, the general public and politicians against you in most countries. I'm really not sure what the best path to victory is, but I think we should copy successful strategies of the animal welfare movement. 

I was especially impressed by Leah Garcés's work on turning adversaries into allies and assume that similar approaches could work for GMOs, e.g. when talking to farmers.

anishazaveri · 4mo
Interesting example of pro-GMO farmers here: https://allianceforscience.cornell.edu/blog/2019/06/indian-farmers-plant-gmo-seeds-civil-disobedience-satyagraha-protest/
Should GMOs (e.g. golden rice) be a cause area?

Fully agree, Kevin Esvelt makes a very strong case for this idea in his appearance on Rationally Speaking.  I'll further update the text. 

Should GMOs (e.g. golden rice) be a cause area?

I haven't even thought of this angle but it makes a lot of sense (at least naively)! That probably also increases the importance of fighting GMO resistance in the West, as they are the main market for plant-based meat alternatives atm. 

Should GMOs (e.g. golden rice) be a cause area?

Thankfully, Kat Woods already tagged them on Twitter. Now we just need to hope they use their account ;) I might send them an email if I don't hear back at all. 

Michael Huang · 4mo
Charity Entrepreneurship has a report called "Welfare Focused Gene Modification" (https://drive.google.com/file/d/136Gio2LgT2w6Wa9mUcItXq24ffiX5fOI/view) from March 2019 that mentions golden rice and other GMOs, mostly farm animal interventions. The report might be superseded though, because it no longer appears on the website.

This is an interesting idea from the report: "A 'Good Gene Institute', similar to the Good Food Institute (https://forum.effectivealtruism.org/tag/good-food-institute), that is focused on carefully and thoughtfully building public awareness and interest in individuals getting into the science of genetics-based animal issues."
Should GMOs (e.g. golden rice) be a cause area?

Thanks. I agree with all of that. My section was supposed to be just one of many examples of the wonders that GMOs can produce. I'll clarify the text to state this more clearly :)

Argument Against Impact: EU Is Not an AI Superpower

Thanks for all the numbers. I think putting them into plots would make the case even easier to understand, especially when talking to policymakers and other influential people who get a wall of numbers thrown at them every day. 

If you currently have little time, just taking the most important stat and putting the respective plot on top of the article gets you quite far already. 

EA Analysis of the German Coalition Agreement 2021–2025

I'm not sure Germany is that much of a role model for other countries. I guess the Netherlands and the Scandinavian countries might be better suited for that. I think our main message is:
a) The new government seems to be more reasonable than past governments from an EA perspective.
b) Given a), Germany could play a larger role in the overall EA sphere, since it is pretty important globally and yet only very few EA organizations are located in Germany or try to work with the government.

EA Analysis of the German Coalition Agreement 2021–2025

As weird as this sounds, I would hope that is the reason, because it would mean Germany acts for understandable reasons.
However, my discussions with other Germans and broader public sentiment suggest to me that Germans are insanely pacifistic. Even things like sending troops to stabilize a region when asked by the respective country are viewed critically by many. Rike Franke (https://twitter.com/RikeFranke), a German IR researcher/pundit, seems to share my belief. Maybe you should check out her Twitter.

Charles Dillon · 4mo
That's interesting, and if true a very disappointing and convenient delusion. Thanks!
EA Analysis of the German Coalition Agreement 2021–2025

a) I share that belief to some extent and was initially very skeptical of influencing any government, especially the German one. However, most of my encounters with EAs in politics updated me towards "influence seems easier than I thought". These are all second-hand experiences but include:
- People working in different German ministries detailing how their EA approaches were welcomed by their colleagues and shaped some parts of the legislation, e.g. on climate change.
- People working in think tanks saying that people in ministries took their ideas much more... (read more)

Charles Dillon · 4mo
I would think that, for energy supply reasons, Russia is a much more important partner for Germany than Ukraine, and that this entirely explains German reluctance to help Ukraine. Do you think this is incorrect?
What is the role of Bayesian ML for AI alignment/safety?

Wow. That was really insightful. 

I can confirm that Philipp is a great supervisor! I also don't plan on chasing the next shiny thing; I want to understand ways to combine Bayesian ML with things relevant to AI safety/alignment.

I'll send you an email soon!

EA megaprojects continued

I wouldn't read too much into it due to randomness, timing, etc.

But my hunch is that posts are preferred because they provide slightly more value. Rather than having to think of answers yourself or sort through the current answers, you can just skim the headlines.

A huge opportunity for impact: movement building at top universities

Thanks for the explanation. I didn't know it was this stratified.

A huge opportunity for impact: movement building at top universities

I think movement building is great and support this article entirely. However, I'm not sure about this focus on TOP universities. Maybe this is a German thing, where the difference between universities isn't as large as in other countries, but even then I find it hard to believe that an EA chapter at a top uni is clearly more impactful than one at a mediocre university.

If you have limited resources, I find it fair to prioritize universities in some way, but I'm not sure we can predict this very well. Is there any data on this or has somebod... (read more)

Alex HT · 5mo
Thanks for this comment and the discussion it's generated! I'm afraid I don't have time to give as detailed a response as I would like, but here are some key considerations:

  • In terms of selecting focus universities, we mentioned our methodology here (https://forum.effectivealtruism.org/posts/GfyuDTHvcwLTDp5fd/cea-update-q1-2021?commentId=kPeXkeTvCwkQ35EJX), which includes more than just university rankings, such as looking at alumni outcomes like number of politicians, high-net-worth individuals, and prize winners.
  • We are supporting other university groups - see my response to Elliot below (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/a-huge-opportunity-for-impact-movement-building-at-top-2?commentId=Gd4AmmLSZ7CK95czL#comments) for more detail on CEA's work outside Focus universities.
  • You can view our two programmes as a 'high touch' programme and a 'medium touch' programme. We're currently analysing which programme creates the most highly-engaged EAs per full-time equivalent staff member (FTE) (our org-wide metric).
  • In the medium term, this is the main model that will likely inform strategic decisions, such as whether to expand the focus university list. However, we don't think this is particularly decision-relevant for us in the short term. This is because:
    • At the moment, most of our Focus universities don't have Campus Specialists.
    • You don't need to have gone to a Focus university to be a Campus Specialist.
    • So we think qualified Campus Specialists won't be limited by the number of opportunities available.

This isn't a full response to this  comment and its threads, but just so people are aware, we also 

  • Provide enhanced support to a broader set of universities,
  • Make grants to many city and national groups,
  • Provide basic funding, advice, and resources to all EA groups.

Additionally, if this program is successful, we will likely expand it to more universities over time.

This post was on one part of our groups work, not all of our groups work. You can see a more complete overview here.

I do worry that the focus on "top" universities is creating a stronger national bias among engaged EAs than we would like.

In particular, because the bar to going to university internationally is higher than attending a domestic university, it means there's a stringency bias in our filters for top talent – it's much more difficult for a German or French person to attend one of these top universities than for a Brit or an American, and so CEA has de facto higher requirements for spending money on community building for people with those nationalities.

I'm not... (read more)


For what it's worth, the US higher education system is pretty stratified in terms of intelligence. The best universities are maybe a standard deviation above the 50th best university in SAT scores, and would probably be even higher if the SAT max wasn't 1600; plus, a lot of the most ambitious and potentially successful students go to them. Moreover, top universities generally attract those students from every field; while, for example, UIUC is probably better than most Ivies at CS, the Ivies will still poach a lot of those students largely because of prest... (read more)

EA megaprojects continued

Good catch. We agree and updated it to global catastrophe. 

When to get off the train to crazy town?

What I meant to say with that point is that the tracks never stop, i.e. no matter how crazy an argument seems, there might always be something that seems even crazier. Or, from the perspective of the person exploring the frontiers, there will always be another interesting question to ask that goes further than the previous one.

[Discussion] Best intuition pumps for AI safety

So what would your pitch for skeptics look like? Just ask which assumptions they don't buy, rebut, and iterate?

Rohin Shah · 7mo
Yup
Thoughts on Personal Finance for Effective Altruists

Thanks for the hint. Fixed it!

Re daylight lamp: exactly right. They aren't even much more expensive than a normal lamp. 

Should Chronic Pain be a cause area?

The conditions we discuss are cluster headaches (similar to OPIS), trigeminal neuralgia, and complex regional pain syndrome. We want to emphasize that we are not experts on any of the three, but their victims consistently describe extreme pain, unlike anything they have experienced before.

The reason why we estimate that their treatment might be cost-effective comes partly from the intensity of suffering that could be solved and mostly from its neglectedness. To our knowledge, there are none or very few people who seriously work on them and w... (read more)

Should Chronic Pain be a cause area?

We somehow missed your report on pain initially. We have read it now and added a link to it in the post. I really liked it. Completely our mistake for overlooking it. 

Unfortunately, we can't really help much with the problem you describe with (3). We agree that it's a big problem and we also found that it's not well understood :( 

Carrots, not sticks - What I learned from introducing people to EA

I agree. It's a very intuitive way to introduce people to EA with something they probably already agree with.

How much (physical) suffering is there? Part I: Humans

Thank you very much. Unfortunately, the source I'm using (Our World in Data) doesn't report YLLs. Sources that report YLLs are so sparse that I couldn't have used them for an overview. I'm also not sure whether the conclusions I'm drawing here are in any way conclusive or whether DALYs are such a bad metric of suffering that I'm just reading tea leaves.

MichaelStJules · 1y
Hmm, this one (https://vizhub.healthdata.org/gbd-compare/) has Deaths, YLDs, and DALYs (among others in the advanced settings), so you could just use YLDs.
How much (physical) suffering is there? Part II: Animals

I understood the numbers to contain only farmed fish and no wild fish.
Thanks for the fact about elephants; I didn't know that. A better metric might then be the number of neurons in the cortex. But it would still involve a lot of uncertainty about which regions of the brain are actually causally responsible for suffering, and so on.

MichaelStJules · 1y
This might be of interest: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Sensory-associative_structure Some whales/dolphins have more neurons in their cortices than humans.

That being said, I'd be reluctant to rely too much on raw counts to decide moral weight. There are many other considerations. Check out Jason Schukraft's work for Rethink Priorities (https://www.rethinkpriorities.org/publications#moralweight).