All of MaxRa's Comments + Replies

This also seems to be true for Germany. From the respective German Doctors Without Borders page:

When selling securities, capital gains tax (Kapitalertragsteuer, 25% plus the solidarity surcharge) is generally due. As a recognized tax-privileged charitable organization, we are exempt from corporate income tax (Körperschaftsteuer) and the solidarity surcharge. We can put the proceeds from selling your securities to work in our projects without any deduction of corporate income tax or solidarity surcharge. So you not only save yourself the effort of a sale, there also flows

... (read more)
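
To make the quoted advantage concrete, here's a minimal arithmetic sketch (the 25% rate and the 5.5% solidarity surcharge on the tax are the standard German figures; the purchase and sale values are made up):

```python
# Hypothetical numbers: tax saved by donating appreciated securities
# in-kind instead of selling them privately and donating the cash.
KEST = 0.25   # Kapitalertragsteuer: 25% on the capital gain
SOLI = 0.055  # Solidaritätszuschlag: 5.5% surcharge on the tax itself

purchase_price = 6_000  # Anschaffungswert in EUR (assumed)
sale_value = 10_000     # Veräußerungswert in EUR (assumed)
gain = sale_value - purchase_price

tax_if_sold_privately = gain * KEST * (1 + SOLI)  # effective 26.375% on the gain
cash_donation_after_sale = sale_value - tax_if_sold_privately
in_kind_donation = sale_value  # the charity sells tax-exempt, keeping full proceeds

print(f"Tax on a private sale:    {tax_if_sold_privately:.2f} EUR")    # 1055.00
print(f"Cash donation after sale: {cash_donation_after_sale:.2f} EUR") # 8945.00
print(f"In-kind donation:         {in_kind_donation:.2f} EUR")         # 10000.00
```
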
4
Sebastian Schwiecker
Found the LinkedIn Post with some more details: https://www.linkedin.com/posts/felixoldenburg_aufruf-zur-aktienspende-und-zur-klärung-beim-activity-7404418355169943554-29AU/.
6
Sebastian Schwiecker
Yeah, we do. Unfortunately, the legal situation is quite confusing in Germany. Different financial authorities (Finanzämter) treat it differently (should the donation receipt show the Anschaffungswert [acquisition value] or the Veräußerungswert [sale value]?). In any case, if you or someone else wants to donate stock, please get in contact with us.

It's great that you already have a rationale prompt for each question. I would probably recommend having one prompt like this at the end, with "(Optional)" in front so experts can share all further thoughts they think might be useful.

Fwiw, I also think the name is a bit complicated, and less memorable than Open Philanthropy. Here's the reasoning from the Vox interview:

> Why “coefficient”? As CEO Alexander Berger puts it in my conversation with him, “coefficient is a multiplier”: the “co-” nods to collaboration with other givers; the “efficient” is a reminder of the north star of effectiveness.

5
andrewleeke
There are some more details from cG here (also linked in Aaron's post): 

Huge fan of your work, one of the few newsletters I read every week.

Random question, I wonder whether prediction markets are a potentially promising income stream for the team? E.g. Polymarket seems to have a bunch of overlap with the topics you're covering.

Also, thanks for making your news-parsing code open source; I was often curious what it looks like under the hood.

4
NunoSempere
I think doing it successfully would take too large a chunk of my time, but I was considering it in case Sentinel fails

Hi Connacher! Thanks for the responses, makes sense.

On your question, one example I often miss from expert surveys is something like this open-ended question: "Do you have any other considerations that would help with understanding this topic?"

I generally agree that quantitative questions are intimately connected with identifying cruxes. Being quantitative about concrete events is a neat way of forcing the experts to get more concrete and incentivizing them not to get lost in a vague story, etc. But I suspect that often the individual insights from the experts ... (read more)

3
[anonymous]
Do you view this as separate from the rationale data we also collect? One low-burden way to do this is to just include something like your text in the rationale prompt.

Thanks for the work, this is great! 

I especially appreciate the rationale summaries, and generally I'd encourage you to lean more into identifying underlying cruxes as opposed to quantitative estimates. (E.g. I'm skeptical of experts being sufficiently well calibrated to give particularly informative timeline forecasts.)

I'm looking forward to the risk-related surveys. Would be interesting to hear their thoughts on the likelihood of concrete risks. One idea that comes to mind would be conditional forecasts on specific interventions to reduce risks.

Also... (read more)

3
[anonymous]
Thanks for digging in! We've gotten similar feedback on "snack-sized" insights and have it on our list.

Could you say more on "generally I'd encourage you to lean more into identifying underlying cruxes as opposed to quantitative estimates"? I'm not sure I understand what this means in practice, because I think of the two as intimately related. This is likely a product of my view that cruxy questions have a high value of information (some FRI work on this here).

In case it's of interest, our risk-focused work tends to be in self-contained projects (example), so we can pull in respondents with intimate knowledge of the risk model. Nevertheless, we'll include some risk questions in future waves.

The 2 questions you mention were free text. We asked respondents to list, for example, cognitive limitations of AI. We then created a list of the most common responses to create a resolvable forecasting question for the subsequent wave.

Thanks for the interesting interview!

Fwiw, this section made me feel like it should be thought through more deeply:

Luisa Rodriguez: [...] How confident are you that leaders in the countries that are set up to race and are already racing a little bit are going to see this as close to existential?

Daniel Kokotajlo: I think it will be existential if one side is racing and the other side isn’t. And even if they don’t see that yet, by the time they have superintelligences, then they will see it — because the superintelligences, being superintelligent, will be ab

... (read more)

(Just quick random thoughts.)

The more that Trump is perceived as a liability for the party, the more likely they would be to go along with an impeachment after a scandal.

  1. Reach out to Republicans in your state about your unhappiness about the recent behavior of the Trump administration.
  2. Financially support investigative reporting on the Trump administration.
  3. Go to protests?
  4. Comment on Twitter? On Truth Social?
    1. It's possibly underrated to write concise and common sense pushback in the Republican Twitter sphere?

I relate hard with the career struggles, thanks for sharing! :') Also very sweet (and again relatable) to drop everything for true love. :3

Thanks for writing this up, I think it's a really useful benchmark for tracking AI capabilities.

One minor piece of feedback: instead of reporting statistical significance in the summary, I'd report effect sizes, or maybe even better, just put the discrimination plots in the summary, as they give a very concrete and striking sense of the difference in performance. Statistical significance depends on how many datapoints you have, which makes the absence of a significant difference especially hard to interpret in terms of how large the difference really is in real-world terms.
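
To illustrate the point with a minimal synthetic sketch (all data made up): with the same true gap between two groups, the p-value swings with sample size while the effect size stays put.

```python
# Synthetic illustration: the p-value depends heavily on sample size,
# while the effect size (Cohen's d) stays roughly constant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def cohens_d(a, b):
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

for n in (20, 200, 2000):
    group_a = rng.normal(0.0, 1.0, n)  # e.g. human scores
    group_b = rng.normal(0.3, 1.0, n)  # same true gap of 0.3 SD each time
    t_stat, p_value = ttest_ind(group_b, group_a)
    print(f"n={n:4d}  d={cohens_d(group_b, group_a):+.2f}  p={p_value:.4f}")
```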

Most of my donations were forgone payments for hiring rounds at organizations I consider among the most promising at reducing risks from AI (e.g. Horizon and MIRI).

MaxRa

Thanks for sharing, I really appreciate your commitment, and that you announce it.

Fwiw, my immediate reaction is that this type of protest might come a little too soon and might cause more ridicule and backlash, because the general public's and news media's impression is that there is currently no immediate danger. I would be interested in learning more about the timing considerations. Like, I'd imagine that doing this barricading in the aftermath of some concrete harm would make favorable reporting by the news media much more likely, and then you could steer the discourse towards future and greater harms.

4
Remmelt
Thanks for the kind words! I personally think it would be helpful to put more emphasis on how OpenAI's reckless scaling and releases of models are already concretely harming ordinary folks (even though no major single accident has shown up yet). E.g.:
  • training on personal/copyrighted data
  • job losses because of the shoddy replacement of creative workers (and how badly OpenAI has treated workers it paid)
  • school 'plagiarism', disinformation, and deepfakes
  • environmental harms of scaling compute

Cool, thanks for sharing!

We can sponsor US visas for technical roles

Does this apply to any of the roles you list here?

I love the new profiles, and also all the new formatting options for comments, plus the filtering for private notes. Thanks so much! :) 

MaxRa

Thanks so much for all your contributions Lizka! :) I really appreciated your presence on the forum, like a friendly, alive, and thoughtful soul that was attending to and helping grow this part of our ecosystem.

  • I can relate to the part about how thankless it can be to be a mediator... it's a pretty interesting dynamic where something really useful is being disincentivized; I'd be interested in hearing more of your, or others', thoughts about it.
  • And I feel sorry that your work on moderation had negative effects on your interpersonal relationships. :|
... (read more)

Thanks for doing this work, this seems like a particularly useful benchmark for tracking the world models of AI systems.

I found it pretty interesting to read the prompts you use, which are quite extensive and give a lot of useful structure to the reasoning. I was surprised to see in Table 16 that the zero-shot prompts had almost the same performance. I imagine the prompting introduces a fair amount of variance, and I wonder whether I should expect scaffolding (like what https://futuresearch.ai/ are presumably focusing on) to cause significant improvements.
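
For concreteness, here's a purely hypothetical sketch of the kind of contrast I have in mind (not the paper's actual prompts):

```python
# Hypothetical prompt templates contrasting zero-shot prompting with a
# lightly scaffolded prompt that structures the model's reasoning.
ZERO_SHOT = (
    "Question: {question}\n"
    "Answer with a probability between 0 and 1."
)

SCAFFOLDED = (
    "Question: {question}\n"
    "1. List the key considerations for and against.\n"
    "2. Note relevant base rates or reference classes.\n"
    "3. Weigh the considerations and state your remaining uncertainty.\n"
    "4. Only then answer with a probability between 0 and 1."
)

question = "Will <placeholder event> happen by the end of 2026?"
print(SCAFFOLDED.format(question=question))
```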

Thanks, that all makes sense and moderates my optimism a bit, and it feels like we roughly exhausted the depth of my thinking. Sigh... anyways, I'm really thankful and maybe also optimistic for the work that dedicated and strategically thinking people like you have been and will be doing for animals.

  1. That's interesting. Based on the thinking that animal protein ultimately comes from plant protein, and that animals use up a lot of space, food, and extra infrastructure that is not directly involved in turning plant protein into meat, I'd've guessed that plant protein would be much cheaper than animal protein.
    1. I quickly asked chatGPT for the cheapest animal vs. plant proteins in the US:
      Chicken: Approximately 6.6 cents per gram of protein
      Lentils: Approximately 3.7 cents per gram of protein
    2. Less difference than I'd've guessed (see the quick cost-per-gram sketch after this list).
  2. Interesting, hard for me to judge. Re
... (read more)
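
As referenced in the list above, here's the cost-per-gram arithmetic as a minimal sketch (the helper function and the example figures are my own rough assumptions, not the numbers ChatGPT produced):

```python
# Cost per gram of protein from price per kg and protein content per kg.
def cents_per_gram_protein(price_usd_per_kg: float, protein_g_per_kg: float) -> float:
    return price_usd_per_kg * 100 / protein_g_per_kg

# Assumed example figures: chicken at $5.50/kg with ~200 g usable protein
# per kg, dry lentils at $2.20/kg with ~250 g protein per kg.
print(f"chicken: {cents_per_gram_protein(5.50, 200):.1f} cents/g")  # 2.8
print(f"lentils: {cents_per_gram_protein(2.20, 250):.1f} cents/g")  # 0.9
```
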
7
abrahamrowe
Nice, these are great points. On some specifics:
  1. I think the other consideration is that for really cheap proteins (corn/soy/wheat), chickens and other animals eat much less processed versions that are cheaper than the ones humans eat. But also people seem to like products made from them less. The novel plant protein inputs are a lot more expensive as far as I can tell.
  2. Yeah, I think there is a bunch of uncertainty. My sense of the technical hurdles to cost reduction is that they are fairly large, and I'm not sure they are super solvable. But I hope I am wrong!
  3. Yeah, this seems possible too.
  4. Plus I expect health and climate change angles on meat consumption will also more likely than not steadily increase.
    1. I worry these push toward worse animal welfare (less eating of cows, more eating of chicken/fish), not better.

Really interesting, thanks for sharing. I was particularly surprised about your changes of mind here:

We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050: 75% → 10% (a change of -65%)
We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2100: 85% → 15% (a change of -70%)

E.g. some spontaneous potential cruxes that might be interesting to hear your disagreement with, in case they capture your reasons for pessimism:

  1. Plant protein sources will become price- and taste-competitive with more than half of al
... (read more)
8
abrahamrowe
Nice, these are good questions, but probably don't capture all the cruxes in my view.
  1. I think this seems moderately unlikely to me? I'm not sure what would drive down prices further than where they are now, as it seems like a large portion of the cost is the proteins themselves, and not production.
  2. This also seems like it relies on crossing technological hurdles that are really hard.
  3. I think this seems possible? But I'd put below 50% on it, and if it does happen, I'd expect something more like the climate movement, where lots of people think it is important but don't really take substantial steps to act on it.
  4. I think that reaching 20% vegetarian seems possible in some countries, but I'm a lot more skeptical it'll go much higher. It does seem plausible to me that there would be a meaningful reduction in the amount of meat consumed over this period in developed countries, but I also expect that might come with more chicken/fish consumption that would offset animal welfare gains anyway.
I think another crux more important to my pessimism is that I don't feel very convinced that price/taste-competitive meat alternatives will cause a significant increase in their adoption.

Thanks for this! :) I unfortunately only had time for skimming but I found the summary of pathways super useful and I had a positive impression of the level of rigor that went into this. Also appreciate the section on downside risks and how to address them.

Thanks so much for sharing your writing, it resonated deeply with me and made me cry more than once.

I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I'm going to stop now.

Noooo, sorry you feel that way. T_T I think you sharing your thinking here is really helpful for the broader EA and good-doer field, and I think it's an unfortunate pattern that online communication quickly feels (or even is) somewhat exhausting and combative.

Just an idea, maybe you would have a much better time doing an interview with e.g. Spencer Greenberg on his Clearer Thinking podcast, or Robert Wiblin on the 80,000 Hours podcast? I feel like they are pretty good interviewers who can ask good questions that make for accurate and informative interviews.

MaxRa

(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)

Would it be possible to set up a fund that pays people for the damages they incurred from a lawsuit where they end up being innocent? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to them.

Jason

Yes, although it is likely cheaper (in expected costs) and otherwise superior to make a ~unconditional offer to cover at least the legal fees for would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff's arsenal is to bully a defendant in a weak case into settling by threatening them with liability for massive legal bills. If you take that tactic away by maki... (read more)

MaxRa

Meal replacement companies were there for us, through thick and slightly less thick.

https://queal.com/ea

Just in case someone interested in this has not done so yet: I think Zvi's post about it is worth reading.

https://thezvi.substack.com/p/openai-the-board-expands

Thanks for your work on this, super interesting!

Based on just quickly skimming, this part seems most interesting to me, and I feel like discounting the bottom line of the sceptics because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is):

We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and t

... (read more)

either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is


The post states that the skeptics spent 80 hours researching the topics, and were actively engaged with concerned people. For the record, I have probably spent hundreds of hours thinking about the topic, and I think the points they raise are pretty good. These are high-quality arguments: you just disagree with them.

I think this post pretty much refutes the idea that if skeptics just "thought deeply" they would change their minds. It very much comes down to principled disagreement on the object level issues. 

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commentators say, it's currently mostly attracting a niche of people who strive for higher ... (read more)

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were underrating the pandemic potential significantly
  • I personally find it plausible when looking e.g. at the quality of think tank reports, which seems significantly curtailed by the amount of vague propositions that would be much more useful if more concrete and
... (read more)

I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.

My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.

3
CAISID
You are correct in that I was referring more to the natural risks associated with disagreeing with a major funder in a public space (even though OP have a reputation for taking criticism very well), and wasn't referring to friendships. I could well have been more clear, and that's on me.
-4
SuperDuperForecasting
Oh really? Because in typical male-dominated social networks, there are usually pretty high levels of internal disagreement, some of it fairly sharp. Go on any other forum that isn't moderated to within an inch of its life by a team that somehow costs 2 million a year, and where everyone isn't chasing one billionaire's money!
MaxRa

I'm really excited about more thinking and grant-making going into forecasting!

Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here are some of my quick thoughts:

  1. Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions - Examples: Geopolitics (see Superforecasting), public health (see COVID), IIRC also outcomes of research studies

  2. In all important domains where humans try to affect things, they are implicitly forecasting all the time a

... (read more)
6
Jason
Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness -- that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input -- that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).

I don't think the drivers of low "societal sanity" are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift's love life is part of a conspiracy to re-elect Biden isn't that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your "team" runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.

Some other relevant responses:

Scott Alexander writes

My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.

Zvi Mowshowitz writes

Even scaling back the misunderstandings, this is what ambition looks like.

It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to

... (read more)
4
SiebeRozendal
Thanks, these are good
MaxRa

Thanks a lot for sharing, and for your work supporting his family and for generally helping the people who knew him in processing this loss. I only recently got to know him during the last two EA conferences I attended but he left a strong impression of being a very kind and caring and thoughtful person.

Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning being interested in expanding their donors beyond DM&CT...

Cool!

the article is very fair, perhaps even positive!

Just read the whole thing, wondering whether it gets less positive after the excerpt here. And no, it's all very positive. Thank you guys for your work, so good to see forecasting gaining momentum.

1
ElliotJDavies
Thanks for sharing this, I had the same question

For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.

I'd guess it's also that advocacy and regulation just seemed less marginally useful in most worlds, given the AI timelines many suspected even 3 years ago?

2
David_Althaus
Definitely!

Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')

I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.

I'm not completely sure what you refer to with "legitimate jobs", but I generally have t... (read more)

It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.

Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly

I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?

MaxRa

Thanks for working on this, Holly, I really appreciate more people thinking through these issues, and I found this interesting and a good overview of the considerations I previously learned about.

I'm possibly much more concerned than you about politicization and a general vague feeling of downside risks. You write:

[Politicization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don’t think there’s much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I wou

... (read more)

On the discussion that AI will have deficits in expressing care and eliciting trust, I feel like he’s neglecting that AI systems can easily get a digital face and a warm voice for this purpose?

Interesting discussion, thanks! The discussion of AI potentially driving explosive innovation seemed much more relevant than the job replacement you spent most of the time discussing, and was unfortunately also much more rushed.

But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks [for explosive innovations leading to economic growth], and [Tom Davidson] can keep dismissing them, and we can keep going on forever.

Relatedly, I'd've been interested in how Michael relates to the Age of Em scenario, in which IIRC explosive i... (read more)

3
t46
Awesome, thanks Max! Hope you will be able to join us for the conference :)

Hey Kieren :) Thanks, yeah, it was intentional but badly worded on my part. :D I adopted your suggestion.

(Very off-hand and uncharitably phrased and likely misleading reaction to the "Holden vs. hardcore utilitarianism" bit, though it's just useful enough to quickly share anyways)

  • Holden's and Rob's takes felt a bit like "Hey, we have these confused ideas of infinities, and then apply them to Utilitarianism and make Utilitarianism confusing ➔ let's throw out Utilitarianism and deprioritize the welfare of future generations relative to what the caring and calculating approach tells us. And maybe even consider becoming nihilists haha, but for real, let's just lea
... (read more)

Fwiw, despite the tournament feeling like a drag at points, I think I kept at it due to a mix of:
a) I committed to it and wanted to fulfill the commitment (which I suppose is conscientiousness),
b) me generally strongly sharing the motivations for having more forecasting, and
c) having the money as a reward for good performance and for just keeping at it.

I was also a participant. I engaged less than I wanted mostly due to the amount of effort this demanded and losing more and more intrinsic motivation. 

Some vague recollections:

  • Everything took more time than expected and that decreased my motivation a bunch
    • E.g. I just saw one note that one pandemic-related initial forecast took me ~90 minutes
    • I think making legible notes requires effort and I invested more time into this than others. 
    • Also reading up on things takes a bunch of time if you're new to a field (I think GPT-4 would've especially helped w
... (read more)

OpenAI lobbied the European Union to argue that GPT-4 is not a ‘high-risk’ system. Regulators assented, meaning that under the current draft of the EU AI Act, key governance requirements would not apply to GPT-4. 

Somebody shared this comment from Politico, which claims that the above article is not an accurate representation:

European lawmakers beg to differ: Both Socialists and Democrats’ Brando Benifei and Renew’s Dragoș Tudorache, who led Parliament’s work on the AI Act, told my colleague Gian Volpicelli that OpenAI never sent them the paper, nor re

... (read more)

A simple analogy to humans applies here: Some of our goals would be easier to attain if we were immortal or omnipotent, but few choose to spend their lives in pursuit of these goals.

I feel like the "fairer" analogy would be optimizing for financial wealth, which is arguably as close to omnipotence as one can get as a human, and a lot of humans actually are pursuing this. Further, I might argue that money is currently much more of a bottleneck than longevity for ~everyone pursuing their ultimate goals. And the rare exceptions (maybe something like the wealthiest 10k people?) actually do invest a bunch in their personal longevity? I'd guess at least 5% of them do.

MaxRa

I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.

So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA's internal discussions, would be great for decentralization. And calling it "Zephyr forum" would just reduce its prominence and relevance.

I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.

IME decentralised groups are not usually more transparent, if anything the opposite as they often have fragmented communication, lots of which is person-to-person.

Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.

Moral stigmatization of AI research would render AI researchers undateable as mates, repulsive as friends, and shameful to family members. Parents would disown adult kids involved in AI. Siblings wouldn’t return their calls. Spouses would divorce them. Landlords wouldn’t rent to them. 

I think such a broad and intense backlash against AI research broadly is extremely unlikely to happen, even if we put all our resources on it.

  • AI is way too broad of a category, and the examples of potential downsides of some of its applications (like off-putting AI porn or
... (read more)