mariushobbhahn

1794 karma · Joined Dec 2020

Bio

I'm currently doing a Ph.D. in ML at the International Max Planck Research School in Tübingen. My focus is on Bayesian ML, and I'm exploring its role in AI alignment alongside non-Bayesian approaches. I want to become an AI safety researcher/engineer. If you think I should work for you, please reach out.

For more see https://www.mariushobbhahn.com/aboutme/

Comments (71)

I'm obviously heavily biased here because I think AI does pose a relevant risk. 

I think the arguments that people made were usually along the lines of "AI will stay controllable; it's just a tool", "We have fixed big problems in the past, we'll fix this one too", "AI just won't be capable enough; it's just hype at the moment and transformer-based systems still have many failure modes", "Improvements in AI are not that fast, so we have enough time to fix them". 

However, I think that most of the dismissive answers are based on vibes rather than sophisticated responses to the arguments made by AI safety folks. 

I don't think these conversations had as much impact as you suggest and I think most of the stuff funded by EA funders has decent EV, i.e. I have more trust in the funding process than you seem to have.  

I think one nice side-effect of this is that I'm now widely known as "the AI safety guy" in parts of the European AIS community and some people have just randomly dropped me a message or started a conversation about it because they were curious.

I have worked on different grants in the past, but this particular work was not funded.

I think it's a process and just takes a bit of time. What I mean is roughly: people at some point agreed that there is a problem and asked what could be done to solve it. Then they often followed up with "I work on problem X, is there something I could do?". And then some of them tried to frame their existing research to make it sound more like AI safety. However, if you point that out, they might consider other paths of contributing more seriously. I expect most people not to make substantial changes to their research, though; habits and incentives are really strong drivers.

I have talked to Karl about this and we both had similar observations. 

I'm not sure if this is a cultural thing or not, but most of the PhDs I talked to came from Europe. I think it also depends on the actor in the government, e.g. I could imagine defense people being more open to treating existential risk as a serious threat. I have no experience in governance, so this is highly speculative and I would defer to people with more experience.

Reflects my experience!

The resources I was unaware of were usually highly specific technical papers (e.g. on some aspect of interpretability), so nothing helpful for a general audience.

Probably not in the first conversation. I think there were multiple cases in which a person thought something like "Interesting argument, I should look at this more" after hearing the X-risk argument and then over time considered it more and more plausible. 

But as I state in the post, I think it's not reasonable to start from X-risks, so it wasn't the primary focus of most conversations.

I thought about the topic a bit at some point, and my thoughts were:

  • The strength of a strong upvote depends on the karma of the user (see other comment); a rough sketch of this mechanism follows this list.
  • Therefore, the existence of strong upvotes implies that users who have gained more karma in the past, e.g. because they write better or more content, have more influence on new posts.
  • Thus, the question of the strong upvote seems roughly equivalent to the question "do we want more active/experienced members of the community to have more say?"
  • Personally, I'd say that I currently prefer this system over its alternatives because I think more experienced/active EAs have more nuanced judgment about EA questions. Specifically, I think that there are some posts that fly under the radar because they don't look fancy to newcomers and I want more experienced EAs to be able to strongly upvote those to get more traction.
  • I think strong downvotes are sometimes helpful but I'm not sure how often they are even used. I don't have a strong opinion about their existence. 
  • I can also see that strong votes might lead to a discourse where experienced EAs just give other experienced EAs lots of karma due to personal connections, but most people I know use their strong upvotes based on how important they think the content is, not on how much they like the author.
  • In conclusion, I think it's good that we give more say to experienced/active members who have produced high-quality content in the past. One can discuss the size of the difference, e.g. maybe the current scale is too liberal or too conservative.
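
For intuition, here is a minimal sketch of how karma-weighted voting could work in principle. The thresholds and vote weights are made-up illustrative numbers, not the Forum's actual mapping:

```python
# Illustrative sketch of karma-weighted voting.
# The karma thresholds and vote weights below are made-up numbers,
# NOT the Forum's actual values.

ILLUSTRATIVE_WEIGHTS = [
    # (minimum karma, normal-vote weight, strong-vote weight)
    (0,      1, 1),
    (100,    1, 2),
    (1_000,  2, 4),
    (10_000, 3, 8),
]

def vote_weights(karma: int) -> tuple[int, int]:
    """Return (normal, strong) vote weights for a user with the given karma."""
    normal, strong = 1, 1
    for threshold, n, s in ILLUSTRATIVE_WEIGHTS:
        if karma >= threshold:
            normal, strong = n, s
    return normal, strong

def post_score(votes: list[tuple[int, bool, int]]) -> int:
    """Sum vote contributions. Each vote is (voter_karma, is_strong, direction)."""
    total = 0
    for karma, is_strong, direction in votes:
        normal, strong = vote_weights(karma)
        total += direction * (strong if is_strong else normal)
    return total

# A strong upvote from a high-karma user counts for more than
# several normal votes from brand-new accounts.
print(post_score([(10_000, True, +1), (0, False, +1), (0, False, -1)]))  # 8
```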

OK, thanks for the clarification. Didn't know that. 

I agree that wind and solar could lead to more land use if we base our calculations on the efficiency of current or previous solar capabilities. But under the current trend, land use per unit of energy should keep falling roughly exponentially as capabilities improve, so I don't expect it to be a real problem (a rough back-of-the-envelope sketch is below).
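
To make that concrete, a minimal back-of-the-envelope sketch, assuming (purely for illustration) that land use per unit of generated energy falls by a fixed fraction each year; the 5% annual improvement rate is an assumption, not a measured trend:

```python
# Back-of-the-envelope sketch: relative land needed per unit of solar energy,
# assuming a constant fractional improvement each year.
# The 5% annual improvement rate is an illustrative assumption, not data.

def relative_land_use(year: int, base_year: int = 2024,
                      annual_improvement: float = 0.05) -> float:
    """Land use per unit of energy, normalised to 1.0 in the base year."""
    return (1 - annual_improvement) ** (year - base_year)

for y in (2024, 2034, 2044):
    print(y, round(relative_land_use(y), 2))
# 2024 1.0, 2034 0.6, 2044 0.36 -- at 5%/year, land use per unit of energy
# roughly halves every ~14 years (0.95 ** 14 ≈ 0.49).
```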

I don't have a full economic model for my claim that the world economy is interconnected, but events like the supply-chain crisis or the Ever Given blocking the Suez Canal provide some evidence in this direction. I think this was not true at the time of the Industrial Revolution but is now.

I think it really depends on which kind of environmental constraint we are talking about and how strongly it is linked to GDP in rich nations. If there is a convincing case, I'd obviously change my mind, but for now, I feel like we can address all problems without having to decrease GDP.

Thanks for the write-up. I upvoted because I think it lays out the arguments clearly and explains them well but I disagree with most of the arguments. 

I will write most of this in more detail in a future post (some of them can already be seen here) but here are the main disagreements:
1. We can decouple way more than we currently do: more value will be created through less resource-intensive activities, e.g. software, services, etc. Absolute decoupling seems impossible but I don't think the current rate of decoupling is anywhere near the realistically achievable limits. 
2. Renewables are the main bottleneck: The cost per unit of energy for solar has decreased exponentially over the last 10 years and there is no reason it should not continue; the same is true for lithium-ion batteries (a rough learning-curve sketch follows this list). The technology is ready (or will be within the next decade) and it seems to be mostly a question of political will to change. Once renewable energy is abundant, most other problems seem much easier to solve, e.g. protecting biodiversity is easier if you don't need the space for coal mines.
3. The global economy is interconnected: It is very hard, if not impossible, to stop growth in developed countries while keeping growth in developing countries. Degrowth in the West most likely implies decreased growth in the developing world, which I oppose.
4. More growth is required for a stable future path: Most renewable technology has been developed by rich nations. Most efficiency gains in tech have been downstream effects from R&D in rich nations. If we want to get 1000x more efficient green tech, it will likely come from rich countries that pay their scientists from public taxes. In general, many solutions to problems pointed out by degrowthers require a lot of money. A bigger pie means a bigger public R&D budget and more money to spend, e.g. on better education or national parks. 
5. My vision of the future: I don't think we can scale to infinite value with finite resources; there clearly is a limit at some point, but I don't think we have reached it yet. I want to strive toward a world that could host 100B inhabitants, powered by solar, hydrogen, and nuclear. People live in dense cities with good public transport. People have mostly stopped eating meat, and widespread vegetarianism has drastically reduced land use and pollution. Many problems that exist in the West today are solved in that future, e.g. the infant death rate should be 0, not the 0.001 it is in the West today! I just can't see why the current level of GDP should be optimal, and I think we should aim to grow GDP AND solve other problems (the two are not mutually exclusive, and GDP growth may even be necessary for the rest).
6. GDP growth in the West is not a major goal for EA anyway: I agree that GDP growth in already-rich countries should not be a major goal for EAs. We should aim to solve global problems, many of which are in less developed countries, and we should prevent x- and s-risks. Most of these goals are largely independent of GDP in rich countries. However, on the margin, I think more GDP in rich countries probably makes it easier to achieve EA goals, e.g. more GDP means a bigger budget for pandemic prevention. Furthermore, I think it would be bad for EAs to support degrowth, both because it seems less relevant than other problems and because I just don't think the arguments hold (as described above).
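
To illustrate the learning-curve reasoning behind point 2, here is a rough sketch; the 20% learning rate and the starting values are illustrative assumptions rather than actual data:

```python
import math

# Learning-curve (Wright's law) sketch for solar cost:
# cost falls by a fixed "learning rate" with every doubling of cumulative installed capacity.
# The 20% learning rate and the starting values are illustrative assumptions, not measured data.

def cost_per_watt(cumulative_capacity_gw: float,
                  initial_capacity_gw: float = 100.0,
                  initial_cost_per_watt: float = 1.0,
                  learning_rate: float = 0.20) -> float:
    """Projected cost per watt after cumulative capacity grows from the initial value."""
    doublings = math.log2(cumulative_capacity_gw / initial_capacity_gw)
    return initial_cost_per_watt * (1 - learning_rate) ** doublings

# Ten doublings of cumulative capacity (about 1000x) at a 20% learning rate
# would cut the cost per watt to roughly 11% of today's value: 0.8 ** 10 ≈ 0.107.
print(round(cost_per_watt(100.0 * 2 ** 10), 3))  # ~0.107
```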

I will publish a slightly more detailed version of the above arguments and link it here so that you can engage with them more properly. Thank you, once again, for presenting the arguments for degrowth in this clear and non-judgemental way so that people can engage with them on the object level.
