All of Vasco Grilo🔸's Comments + Replies

Thanks for the reply.

Firstly, it should be noted that the overall ratio used for the 2025 SADs was 1000x not 7x.

Right. There was a weight of 45 % on a ratio of 7.06, and of 55 % on one of 62.8 k (= 3.44*10^6/54.8), which is 8.90 k (= 62.8*10^3/7.06) times as large (see the sketch at the end of this comment). My explanation for the large difference is that very little can be inferred about the intensity of excruciating pain, as defined by the Welfare Footprint Institute (WFI), from the academic studies AIM analysed to derive the pain intensities linked to the lower ratio.

Just as an example, this study on 37 wom

... (read more)
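As an aside on how the 45 %/55 % weights and the two ratios above could combine into the roughly 1000x overall figure, here is a minimal sketch. It assumes the two ratios are aggregated as a weighted geometric mean, which is my assumption for illustration, not something stated in the thread.

```python
import math

# Two ratios and their weights, taken from the comment above.
ratios = [7.06, 62.8e3]
weights = [0.45, 0.55]

# Assumption (mine): the overall ratio is a weighted geometric mean of the two.
overall = math.exp(sum(w * math.log(r) for w, r in zip(weights, ratios)))
print(f"{overall:.0f}x")  # ~1050x, the same order as the 1000x figure
```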
2
weeatquince🔸
Thank you Vasco

AGREE ON THERE BEING SOME VALUE FOR MORE RESEARCH
I agree the AIM 2025 SADs were below ideal robustness, and as such I have spent much of the last few weeks doing additional research to improve the pain scaling estimates. If you have time and want to review this then let me know. I would be interested in Rethink Priorities or others doing additional work on this topic.

AGREE ON THE LIMITS OF CONDENSING TO A SINGLE NUMBER
I have adapted the 2026 SAD model to give outputs at the four different pain levels, as well as a single aggregated number. This should help users of the model make their own informed decisions and not just focus on the one number.

I DISAGREE ON NOT USING THE RESEARCH WE HAVE
Where I disagree is where you say we basically have no idea how to compare different levels of pain, and your suggestion that we should not be doing so.
1. Every time we make a decision, e.g. to focus on an issue of stocking densities rather than slaughter methods, we are ultimately deciding to focus on less extreme but longer lasting forms of pain. Being as explicit as we can about our numbers and our thinking helps us make those decisions better (as long as we don't overly rely on a single number).
2. We do have some data and we should use it to inform our decisions and our numbers. This includes academic studies of people in pain, including those with severe conditions, and self-reports from people who experienced extreme pain.

DISAGREE ON NOT BELIEVING PEOPLE
Less important, but: I also disagree with your suggestion not to trust the standard academic approach of asking about / people's responses on "worst pain imaginable". Maybe sometimes people overestimate how bad that is, and sometimes they underestimate it. You seem to be claiming these women (or the whole public) are en masse systematically underestimating. That is a strong claim and not one I would put much weight on without good evidence. Yes, there are (often known) systematic over- and underestimat

Thanks, Wladimir. That makes sense. I look forward to your future work on this. Let me know if funding ever becomes a bottleneck, in which case I may want to help with a few k$.

Here is the crosspost on the EA Forum. Rob preferred that I share it myself.

The critical question is whether shrimp or insects can support the kinds of negative states that make suffering severe, rather than merely possible.

I think suffering matters proportionally to its intensity. So I would not neglect mild suffering in principle, although it may not matter much in practice due to contributing little to total expected suffering.

In any case, I would agree the total expected welfare of farmed invertebrates may be tiny compared with that of humans due to invertebrates' experiences having a very low intensity. For expected individual w... (read more)

And even granting the usual EA filters—tractability, neglectedness, feasibility, and evidential robustness—the scale gradient from shrimp to insects (via agriculture-related deaths) is so steep that these filters don’t, by themselves, explain why the precautionary logic should settle on shrimp. All else equal, once you shift to a target that is thousands of times larger, an intervention could be far less effective [in terms of robustly increasing welfare in expectation] and still compete on expected impact.

I very much agree. Moreover, I do not even know wh... (read more)

Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such a level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination.

I agree it is good that no human has power over all humans. H... (read more)

Hi Guy. Elon Musk was not the only person responsible for the recent large cuts in foreign aid from the United States (US). In addition, I believe outcomes like human extinction are way less likely. I agree it makes sense to worry about concentration of power, but not about extreme outcomes like human extinction.

2
Guy Raveh
Extinction perhaps not, but I think eternal autocracy is definitely possible.

Thanks for the relevant post, Wladimir and Cynthia. I strongly upvoted it. Do you have any practical ideas about how to apply the Sentience Bargain framework to compare welfare across species? I would be curious to know your thoughts on Rethink Priorities' (RP's) research agenda on valuing impacts across species.

3
Wladimir J. Alonso
Thanks a lot, Vasco — and thanks for the upvote! You’re absolutely right to push us toward the practical question of how to compare affective capacity across species. That’s ultimately where this line of work needs to go. At the same time, we’ve been deliberately cautious here, because we think this is one of those cases where moving too quickly to numbers or rankings risks making the waters muddier rather than clearer. Our sense is that the comparison of affective capacity across species hinges on a set of upstream scientific questions that are still poorly articulated — especially around when sentience arises at all, and when it plausibly extends to very intense affective states. The aim of this piece was to stress-test a way of structuring those questions before turning them into quantitative tools. That said, we do see this as complementary to RP’s research agenda on valuing impacts across species. In fact, we think cost–benefit reasoning about sentience and affective intensity can help discipline some of the assumptions that go into moral-weight or welfare-capacity estimates, rather than replacing them. We’re currently working on a follow-up that moves closer to a practical comparative framework, and we’re very much treating the present work as groundwork for that. Happy to loop back and share it once it’s ready — and we’d be keen to hear your thoughts then as well.

Thanks for the great post, Lukas. I strongly upvoted it. I also agree with your concluding thoughts and implications.

Thank you all for the very interesting discussion.

I think addressing the greatest sources of suffering is a promising approach to robustly increase welfare. However, I believe the focus should be on the greatest sources of suffering in the ecosystem, not in any given population, such that effects on non-target organisms can be neglected. Electrically stunning farmed shrimps arguably addresses one of the greatest sources of suffering of farmed shrimps, and the ratio between its effects on target and non-target organisms is much larger than for the vast majo... (read more)

Thanks, Zoë. I see funders are the ones deciding what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob's book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP's research agenda on valuing impacts across species?

Q: Do Senterra Funders staff decide how funders make grant decisions?

A: No, each Senterra member maintains full autono

... (read more)

Thanks for the great post, Srdjan. I strongly upvoted it.

Fair point, Nick. I would just keep in mind there may be very different types of digital minds, and some types may not speak any human language. We can more easily understand chimps than shrimps. In addition, the types of digital minds driving the expected total welfare might not speak any human language. I think there is a case for keeping an eye out for something like digital soil animals or microorganisms, by which I mean simple AI agents or algorithms, at least for people caring about invertebrate welfare. On the other end of the spectrum, I am also op... (read more)

Thanks for the post, Noah. I strongly upvoted it.

  • 5. How much total welfare capacity might digital minds have relative to humans/other animals
    • a. Related questions include: the estimated scale of digital minds, moral weights-esque projects, which part of the model would have moral weight.

I think this is a very important uncertainty. Discussions of digital minds overwhelmingly focus on the number of individuals, and probability of consciousness or sentience. However, one has to multiply these factors by the expected individual welfare per year conditiona... (read more)

Thanks for sharing, Kevin and Max. Are you planning to do any cost-effectiveness analyses (CEAs) to assess potential grants? I may help with these for free if you are interested.

Global wealth would have to increase a lot for everyone to become a billionaire. Assuming a population of 10 billion people, everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9) for a perfectly equal distribution. Global wealth is 600 T$. So it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, it would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)). For a growth of 30 %/year, it would take 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
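A minimal sketch of the arithmetic above, with the population, per-person wealth, current global wealth, and growth rate as explicit parameters (the function and parameter names are mine, for illustration):

```python
import math

def years_to_universal_billionaires(population=10e9, wealth_per_person=1e9,
                                     current_wealth=600e12, growth_rate=0.10):
    """Years of compound growth until global wealth reaches population * wealth_per_person."""
    required_wealth = population * wealth_per_person  # 10^19 $ for the figures above
    factor = required_wealth / current_wealth         # ~16.7 k
    return math.log(factor) / math.log(1 + growth_rate)

print(round(years_to_universal_billionaires(growth_rate=0.10), 1))  # ~102.0 years
print(round(years_to_universal_billionaires(growth_rate=0.30), 1))  # ~37.1 years
```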

I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"

When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is or will be worth more than 3 billion $, depending on how it is defined.

With your bets about timelines - I did 8:1 bet with Daniel Kokotajlo against AI 2027 being as accur

... (read more)

Agreed, Ben. I encouraged Rob to crosspost it on the EA Forum. Thanks to your comment, I just set up a reminder to ping him again in 7 days in case he has not replied by then.

2
Vasco Grilo🔸
Here is the crosspost on the EA Forum. Rob preferred that I share it myself.

Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.

Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bets we could make that are good for both of us under our own views, considering we could invest our money, and that you could take loans?

4
Jan_Kulveit
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"; clearly MIRI is not making the offer because the labs don't have good alignment plans, and they are obviously high integrity enough to not be corrupted by relatively tiny incentives like $3b. I would guess there are ways to operationalise the hypotheticals, and try to have, for example, Dan Hendrycks guess what xAI would do, him being an advisor. With your bets about timelines - I did an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I'm not sure which side of the "confident about short timelines" bet you expect me to take. I'm happy to bet on some operationalization of your overall thinking and posting about the topic of AGI being bad, e.g. something like "3 smartest available AIs in 2035 compare all that we wrote in 2026 on EAF, LW and Twitter about AI and judge who was more confused, overconfident and miscalibrated".

The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for the most production of farmed shrimp, is far from strong.

5
Ben Stevenson
That's a really interesting blog, Vasco. Worth its own post.

Thanks for sharing, Ben. I like the concept. Do you have a target total (time and financial) cost? I wonder what the ideal ratio is between the total amount granted and the cost for grants of "$670 to $3300".

5
Ben Stevenson
Hey Vasco, thanks. I have also thought about this but don’t have a clear answer. Each credible grant opportunity will get a shallow dive, but it’s hard to reliably estimate a total time budget before we have a good sense of how many people will apply. Happy to share post-hoc reflections afterwards.

Nice post, Alex. 

Sometimes when there’s a lot of self doubt it’s not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful - just separating myself from my thoughts, so rather than saying ‘I don’t know enough’ I say ‘I’m having the thought that I don’t know enough.’ I don’t have to believe or argue with the thought, I can just acknowledge it and return to what I’m doing.

Clearer Thinking has launched a program to learn cognitive defusion.

Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_... (read more)
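A minimal sketch checking those formulas numerically, under the stated assumptions that impact is proportional to P - P_0 and cost is proportional to P; the illustrative values of P_av and P_0 are mine, not from the comment. As N grows, the cost-effectiveness ratio approaches P_av/(P_av - P_0).

```python
def impact_ratio(n, p_av, p_0):
    """Impact of an employee n times as productive as a random one, relative to a random one."""
    return (n * p_av - p_0) / (p_av - p_0)

def cost_effectiveness_ratio(n, p_av, p_0):
    """Impact ratio divided by the relative cost n (cost assumed proportional to productivity)."""
    return impact_ratio(n, p_av, p_0) / n  # equals (p_av - p_0/n) / (p_av - p_0)

# Illustrative values (my assumptions): random productivity 1, zero-impact productivity 0.5.
p_av, p_0 = 1.0, 0.5
for n in (1, 2, 10, 1000):
    print(n, impact_ratio(n, p_av, p_0), cost_effectiveness_ratio(n, p_av, p_0))
# The cost-effectiveness ratio approaches p_av/(p_av - p_0) = 2 as n grows.
```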

Thanks for the relevant comment, Nick.

2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of thought processes and research done by animal welfare folks.

I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe co... (read more)
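As a rough illustration of how much that exponent range matters, here is a minimal sketch using approximate neuron counts I am assuming for illustration (about 8.6*10^10 for a human and 2.2*10^8 for a chicken); these figures are not from the comment.

```python
# Sensitivity of the welfare ratio to the exponent on neuron counts.
human_neurons = 8.6e10    # assumed, approximate
chicken_neurons = 2.2e8   # assumed, approximate

for exponent in (0.5, 1.0, 1.5):
    ratio = (human_neurons / chicken_neurons) ** exponent
    print(f"exponent {exponent}: human-to-chicken ratio ~ {ratio:,.0f}")
# The ratio spans roughly 20 to 8,000, so the exponent alone drives a ~400-fold spread.
```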

Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power-plants fail). Seems to imply moving forward with a lot of caution.

Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.

Hi David!

Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.

Would the dinosaurs have argued their extinction would be bad, although it may well have contributed to the rise of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?

But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous imp

... (read more)

Thanks for the great post, Matthew. I broadly agree.

If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.

I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a 1st approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP)... (read more)

Thanks for crossposting this, Joey.

This is a linkpost for My Career Plan: Launching Elevate Philanthropy!

The above does not link to the original post. You are supposed to type out the URL in the field above.

Despite not even having publicly launched, I have back-to-back monthly promising projects lined up, each with significant estimated impact, each with higher impact than my upper bound estimates of my ability to earn via for-profit founding (my next highest career option).

How did you determine this? Did you explicitly quantify the impact of the promising... (read more)

Hi Jan. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bets we could make that are good for both of us under our own views, considering we could invest our money, and that you could take loans?

Thanks for the good point, Paul. I tend to agree.

Thanks for the post. I strongly upvoted it.

I have described my views on AI risk previously in this post, which I think is still relevant. I have also laid down a basic argument against AI risk interventions in this comment where I argue that AI risk is neither important, neglected nor tractable.

The 2nd link does not seem right?

Thanks for the comment, Mikhail. Gemini 3 estimates a total annualised compensation of the people working at Meta Superintelligence Labs (MSL) of 4.4 billion $. If an endorsement from Yudkowsky and Soares was as beneficial (including via bringing in new people) as making 10 % of people there 10 % more impactful over 10 years, it would be worth 440 M$ (= 0.10*0.10*10*4.4*10^9).
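A minimal sketch of that back-of-the-envelope calculation, with the inputs as parameters so they can be varied (the function and parameter names are mine, for illustration):

```python
def endorsement_value(total_annual_compensation=4.4e9, fraction_of_staff=0.10,
                      productivity_gain=0.10, years=10):
    """Value of an endorsement modelled as making a fraction of staff somewhat more impactful."""
    return fraction_of_staff * productivity_gain * years * total_annual_compensation

print(f"{endorsement_value():.2e} $")  # ~4.40e+08 $, i.e. about 440 M$
```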

5
Nick K.
You could imagine a Yudkowsky endorsement (say with the narrative that Zuck talked to him and admits he went about it all wrong and is finally taking the issue seriously, just to entertain the counterfactual...) to raise Meta AI from "nobody serious wants to work there and they can only get talent by paying exorbitant prices" to "they finally have access to serious talent and can get a critical mass of people to do serious work". This'd arguably be more valuable than whatever they're doing now. I think your answer to the question of how much an endorsement would be worth mostly depends on some specific intuitions that I imagine Kulveit has for good reasons but most people don't, so it's a bit hard to argue about it. It also doesn't help that in every other case than Anthropic and maybe DeepMind it'd also require some weird hypotheticals to even entertain the possibility.

Thanks for the good point, Nick. I still suspect Anthropic would not pay e.g. 3 billion $ for Yudkowsky and Soares to endorse their latest model as good if they were hypothetically being honest. I understand this is difficult to operationalise, but the question could still be put to people outside Anthropic.

@eleanor mcaree, to what extent is ACE's Movement Grants program open to funding research decreasing the uncertainty in interspecies welfare comparisons? @Jesse Marks, how about The Navigation Fund (TNF)? @Zoë Sigle 🔹, how about Senterra Funders? @JamesÖz 🔸, how about Mobius and the Strategic Animal Funding Circle (SAFC)? You can check my comment above for context about why I think such research would be valuable.

4
Zoë Sigle 🔹
Hi Vasco, Senterra Funders' FAQ should answer your questions.

It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of that of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.

My upper bound of 0.001 % is just a guess, but I do endorse it. You can have a best... (read more)

3
Tristan Katz
By power I mean: ability to change the world, according to one's preferences. Humans clearly dominate today in terms of this kind of power. Our power is limited, but it is not the case that other organisms have power over us, because while we might rely on them, they are not able to leverage that dependency. Rather, we use them as much as we can. No human is currently so powerful as to have power over all other humans, and I think that's definitely a good thing. But it doesn't seem like it would take much more advantage to let one intelligent being dominate all others.

If I had Eliezer's views about AI risk, I would simply be transparent upfront with the donor, and say I would donate the additional earnings. I think this would ensure fairness. If the donor insisted I had to spend the money on personal consumption, I would turn down the offer if I thought this would result in the donor supporting projects that would decrease AI risk more cost-effectively than my personal consumption. I believe this would be very likely to be the case.

2
NickLaing
100 percent agree. I was going to write something similar but this is better

Thanks for the comment, Tristan.

I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don't share (probably selfish ones)

I would worry if a single human had much more power than all other humans combined. Likewise, I would worry if an AI agent had more power than all other AI agents and humans combined. However, I think the probability of any of these scenarios becoming true in the next 10 years is lower than 0.001 %. Elon Musk has a net worth of 765 bill... (read more)

4
Guy Raveh
Elon Musk has already used this power to do actions which will potentially kill millions (by funding the Trump campaign enough to get USAID closed down). I think that should worry us, and the chance of people amassing even more power should worry us even more.
3
Tristan Katz
I think the evolution analogy becomes relevant again here: consider that the genus Homo was at first more intelligent than other species but not more powerful than their numbers combined... until suddenly one jump in intelligence let Homo sapiens wreak havoc across the globe. Similarly, there might be a tipping point in AI intelligence where fighting back becomes very suddenly infeasible. I think this is a much better analogy than Elon Musk, because like an evolving species a superintelligent AI can multiply and self-improve. I think a good point that Y&S make is that we shouldn't expect to know where the point of no return is, and should be prudent enough to stop well before it. I suppose you must have some source/reason for the 0.001% confidence claim, but it seems pretty wild to me to be so confident in a field like this that is evolving and - at least from my perspective - pretty hard to understand.

Hi Jan.

For the companies racing to AGI, Y&S endorsing some effort as good would likely have something between billions of $ and tens of billions of $ in value.

Are you open to bets about this? I would be happy to bet 10 k$ that Anthropic would not pay e.g. 3 billion $ for Yudkowsky and Soares to endorse their latest model as good. We could ask the marketing team at Anthropic or marketing experts elsewhere. I am not officially proposing a bet just yet. We would have to agree on a concrete operationalisation.

4
Jan_Kulveit
The operationalisation you propose does not make any sense; Yudkowsky and Soares do not claim ChatGPT 5.2 will kill everyone or anything like that. What about this: MIRI approaches [a lab] with this offer: we have made some breakthrough in the ability to verify if the way you are training AIs leads to misalignment in the way we are worried about. Unfortunately the way to verify requires a lot of computation (i.e. something like ARC), so it is expensive. We expect your whole training setup will pass this, but we will need $3B from you to run it; if our test works, we will declare that your lab solved the technical part of AI alignment we were most worried about, plus some arguments which we expect to convince many people who listen to our views. Or this: MIRI discusses stuff with xAI or Meta and convinces themselves their - secret - plan is by far the best chance humanity has, and everyone ML/AI smart and conscious should stop whatever they are doing and join them. (Obviously these are also unrealistic / assume something like some lab coming up with some plan which could even hypothetically work)
5
Nick K.
This doesn't seem to be a reasonable way to operationalize. It would create much less value for the company if it was clear that they were being paid for endorsing them. And I highly doubt Amodei would be in a position to admit that they'd want such an endorsement even if it indeed benefitted them. 
3
MikhailSamin
It's not endorsing a specific model for marketing reasons; it's about endorsing the effort, overall. Given that Meta is willing to pay billions of dollars for people to join them, and that many people don't work on AI capabilities (or work, e.g., at Anthropic, as a lesser evil) because they share their concerns with E&S, an endorsement from E&S would have value in billions-tens of billions simply because of the talent that you can get as a result of this.

Thanks for sharing, Michael. If I was as concerned about AI risk as @EliezerYudkowsky, I would use practically all the additional earnings (e.g. above Nate's 235 k$/year; in reality I would keep much less) to support efforts to decrease it. I would believe spending more money on personal consumption or investments would just increase AI risk relative to supporting the most cost-effective efforts to decrease it.

A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like

If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.

I kinda don't think this was the best use of a million dollars, but I can see the argument for how it might be.

Hi Saulius.

When local residents successfully protest against a planned chicken farm, production will likely increase elsewhere to meet demand—but how quickly? I found no clear methodology to estimate this and received no definitive answers when I asked on an economics forum. As far as I know, it could take anywhere from a week to 20 years, and the choice massively impacts cost-effectiveness.

I agree a farm being blocked can decrease production by anything from 1 farm-week to 20 farm-years. Gemini says random broiler farms in Poland have an expected lifespan o... (read more)

Wow, I'm mind blown that Yudkowsky pays himself that much. If only because it leaves him open to criticisms like these. I still don't think the financial incentives are as strong as for people starting an accelerationist company, but it's a fair point.

I think the strength of the incentives to behave in a given way is more proportional to the resulting expected increase in welfare than to the expected increase in net earnings. Individual human welfare is often assumed to be proportional to the logarithm of personal consumption. So a given increase in earnings... (read more)

Hi Nick.

Although their arguments are reasonable, my big problem with this is that these guys are so motivated that I find it hard to read what they write in good faith.

People who are very invested in arguing for slowing down AI development, or decreasing catastrophic risk from AI, like many in the effective altruism community, will also be happier if they succeed in getting more resources to pursue their goals. However, I believe it is better to assess arguments on their own merits. I agree with the title of the article that it is difficult to do this. I a... (read more)

Thanks for this work. I find it valuable.

If AIs are conscious, then they likely deserve moral consideration

AIs could have negligible welfare (in expectation) even if they are conscious. They may not be sentient even if they are conscious, or have negligible welfare even if they are sentient. I would say the (expected) total welfare of a group (individual welfare times population) matters much more for its moral consideration than the probability of consciousness of its individuals. Do you have any plans to compare the individual (expected hedonistic) welfa... (read more)

4
Derek Shiller
This is an important caveat. While our motivation for looking at consciousness is largely from its relation to moral status, we don't think that establishing that AIs were conscious would entail that they have significant states that counted strongly one way or the other for our treatment of them, and establishing that they weren't conscious wouldn't entail that we should feel free to treat them however we like. We think that estimates of consciousness still play an important practical role. Work on AI consciousness may help us to achieve consensus on reasonable precautionary measures and motivate future research directions with a more direct upshot.

I don't think the results of this model can be directly plugged into any kind of BOTEC, and should be treated with care. We favored a 1/6 prior for consciousness relative to every stance and we chose that fairly early in the process. To some extent, you can check the prior against what you update to on the basis of your evidence. Given an assignment of evidence strength and an opinion about what it should say about something that satisfies all of the indicators, you can backwards infer the prior needed to update to the right posterior. That prior is basically implicit in your choices about evidential strength. We didn't explicitly set our prior this way, but we would probably have reconsidered our choice of 1/6 if it was giving really implausible results for humans, chickens, and ELIZA across the board.

There is a tension here between producing probabilities we think are right and producing probabilities which could reasonably act as a consensus conclusion. I have my own favorite stance, and I think I have good reason for it, but I didn't try to convince anyone to give it more weight in our aggregation. Insofar as we're aiming in the direction of something that could achieve broad agreement, we don't want to give too much weight to our own views (even if we think we're right). Unfortunately, among people with signi

Thanks for the update. Do you plan to publish any cost-effectiveness analyses of grants you have made?

Thanks for the post, Carl.

  • As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range, political spending should either be a very small or a very large portion of EA activity

Great point.

Hi Linch. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bets we could make that are good for both of us under our own views, considering we could invest our money, and that you could take loans?

The bets I've seen you post seem rather disadvantageous to the other side, and I believed so at the time. Which is fine/good business from your perspective given that you managed to find takers. But it means I'm more pessimistic on finding good deals by both of our lights.
