All Comments

For Inkhaven, I wrote 30 posts in 30 days. Most of them are not particularly related to EA, though a few are. I recently wrote some reflections that @Vasco Grilo🔸 thought might be worth sharing on the EA Forum; I don't want to be too self-promotional, so I'm splitting the difference and posting just a shortform link here:

https://linch.substack.com/p/30-posts-in-30-days 

The most EA-relevant posts are probably

https://inchpin.substack.com/p/skip-phase-3

https://inchpin.substack.com/p/aging-has-no-root-cause

https://inchpin.substack.com/p/legi... (read more)

We may be running multiple smaller cohorts rather than one big one, if that's what maximizes the ability of strong candidates to participate. 

The single most important factor in deciding the timing is the window in which strong candidates are available, and the target size for the cohort is small enough (5-20 depending on strength of applicants) that the availability of a single applicant is enough to sway the decision. It's specifically cases like yours that we're intending to accommodate. Please apply!

A small update on each of these project ideas, for the end of 2025:

  • ORCID-TAXID mapping tool: Alejandro Acelas and Hanna Pálya have created Cliver, which ended up being supported by an Astral Codex Ten grant. I believe they are validating with synthesis providers right now; it's unclear where the project goes long-term, but they've made a lot of progress.
  • Customer Screening Training Dataset: It looks pretty likely that IBBIS and EBRC will work on this project in 2026, building off IBBIS's work on customer screening and EBRC's work on end-to-end stress testing.
  • Biosec
... (read more)

I just revisited this post from my forum wrapped and am glad about the bonus comic. Thank you for the nice content. :)

That makes sense, I don't want to be overly fussy if it was getting most things right. I guess the thing is, it's not very helpful if it mostly recognizes true facts as true but mislabels some true facts as false, while failing to flag a significant number of the actually incorrect ones; clicking through a bunch of flags, I saw almost none that I thought necessitated an edit.

I'm in academia and my plan A is to pivot my research focus to something impactful.

Time will tell though, I'm open to considering other options if they arise.

I saw so many people who wanted a “job in EA”. They wanted to do the good directly. Have they really thought through the bitter truth? Why do you believe you are uniquely good at an EA job? Why ignore the simple premise of earning to give?

 

I think there are a large number of EAs who earn to give and focus their time on their career rather than reading another 5,000-word forum article on shrimp or going to EA meetups. This is probably the right move if the goal is to earn as much as possible.

People who want "EA jobs" are more likely to be involved in the forum and in community events.

GiveWell (all grants), GiveDirectly, Malaria Consortium. Pretty small amount in total because I'm a student, but feels good to be getting started a bit!

Then it should be quite easy to show this benefit in clinical trials, and it's suspicious that this hasn't happened.

I'm looking now at the Fact Check. It did verify most of the claims it investigated on your post as correct, but not all (almost no posts get everything verified, especially as the error rate is significant).

It seems like with chickens/shrimp it got a bit confused by numbers killed vs. numbers alive at any one time or something.

In the case of ICAWs, it looked like it did a short search via Perplexity, and didn't find anything interesting. The official sources claim they don't use aggressive tactics, but a smart agent would have realized it needed to search more. I think to get this one right would have involved a few more searches - meaning increased costs. There's definitely some tinkering/improvements to do here.
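To illustrate the kind of loop I mean, here is a minimal hypothetical sketch of "search more before concluding" with a cost ceiling. None of these names come from the actual tool; search() is a stub standing in for whatever retrieval backend (e.g. Perplexity) the evaluator calls, and the sufficiency check would in practice be an LLM call.

```python
def search(query: str) -> str:
    """Stub retrieval call; a real version would hit a search API."""
    return f"[stub result for: {query!r}]"

def evidence_is_sufficient(evidence: list[str]) -> bool:
    """Stub judgment of whether the claim is settled; in practice
    this would itself be an LLM call."""
    return len(evidence) >= 3

def check_claim(claim: str, max_searches: int = 4) -> list[str]:
    """Search repeatedly until the evidence looks sufficient or the
    search budget (i.e. the cost ceiling) is exhausted."""
    evidence: list[str] = []
    for i in range(max_searches):
        # Later rounds reformulate the query instead of repeating it.
        query = claim if i == 0 else f"{claim} (independent sources, attempt {i + 1})"
        evidence.append(search(query))
        if evidence_is_sufficient(evidence):
            break  # stop early: every extra search costs money
    return evidence

if __name__ == "__main__":
    print(check_claim("ICAW uses aggressive tactics"))
```

The tradeoff mentioned above lives in max_searches: raising it catches more cases like the ICAW one, at the price of more paid searches per claim.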

It looks like maybe 60% fallacy check and 40% fact check. For instance, fact check:

  • Claims there are more farmed chickens than shrimps (!)
  • Claims ICAW does not use aggressive tactics, apparently basing that on vague copy on their website

Thanks! I wouldn't take its takes too seriously, as it has limited context and seems to make a bunch of mistakes. It's more a thing to use to help flag potential issues (at this stage), knowing there's a false positive rate. 

Thanks for the feedback! 

I took a quick look at this. I largely agree there were some incorrect checks.

It seems like these specific issues were mostly from the Fallacy Check? That one is definitely too aggressive (in addition to having limited context); I'll work on tuning it down. Note that you can choose which evaluators to run on each post, so for now you might want to just skip that one.

Interesting idea.

As we switch to wind/solar, you can get the same energy services with less primary energy, something like a factor of 2.

We’re a factor ~500 too small to be type I.

  • Today: 0.3 VPP
  • Type I: 40 VPP

 

But 40 is only ~130X 0.3.
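Spelling out the arithmetic behind that objection (my check, using only the two figures quoted above):

\[ \frac{40}{0.3} \approx 133 \quad (\text{not } {\sim}500) \]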

There is some related discussion here about distribution.

Incidentally, ‘flipping non-EA jobs into EA jobs’ and ‘creating EA jobs’ both seem much more impactful than ‘taking EA jobs’. That could be e.g. taking an academic position that otherwise wouldn’t have been doing much and using it to do awesome research / outreach that others can build on, or starting an EA-aligned org with funding from non-EA sources, like VCs.

(excerpt from https://lydianottingham.substack.com/p/a-rapid-response-to-celeste-re-e2g)

Some good news since this post was written a few years ago: the usage of cleaner fish in Norway has declined from a peak of 60 million in 2019 to 24 million in 2024.[1] From what I've read this seems to be due to both pressure from the media and Norwegian authorities[2] and also growing use of other methods like laser delousing, which showed positive results in a recent study.[3] (I wasn't able to tell how important each factor was in causing this decline.)

More good news is that the country with the second largest salmon industry, Chile, has... (read more)

Sounds like a great class! What a gift to be exposed to all these schools of thought as a young adult.

I love this idea! I just took it for a spin and the quality of the feedback isn't at a point I would find it very useful yet. My sense is that it's limited by the quality of the agents rather than anything about the design of the app, though maybe changes in the scaffold could help.

Most of the critiques were myopic, such as:

  • It labeled one sentence in my intro as a "hasty generalization/unsupported claim" when I spend most of the post supporting that statement.
  • In one sentence, it raised a flag for "missing context" about a study I reference, with a differen
... (read more)

[sorry I’m late to this thread]

@William_MacAskill, I’m curious which (if any) of the following is your position?

1. “I agree with Wei that an approach of ‘point AI towards these problems’ and ‘listen to the AI-results that are being produced’ has a real (>10%? >50%?) chance of ending in moral catastrophe (because ‘aligned’ AIs will end up (unintentionally) corrupting human values or otherwise leading us into incorrect conclusions).

And if we were living in a sane world, then we’d pause AI development for decades, alongside probably engagi... (read more)

Wanted to bring this comment thread out to ask if there's a good list of AI safety papers/blog posts/urls anywhere for this?

(I think local digital storage in many locations probably makes more sense than paper but also why not both)

Lightcone and Alex Bores (so far)

Perhaps the main downside is that people may overuse the feature, and it encourages spending time on making small comments, whereas the current system nudges people towards leaving fewer, more substantive comments and less nit-picky ones? Not sure if this has been an issue on LW; I don't read it as much.

Executive summary: The author argues that donating to the Berkeley Genomics Project is justified because accelerating safe, beneficial reprogenetics could substantially reduce disease, amplify human intelligence, and lower AI existential risk, and the project targets neglected medium-term technical and social gaps with early field-building traction despite high uncertainty.

Key points:

  1. The author claims effective reprogenetics could greatly improve lives by reducing disease risk and enabling parents to make genomic choices for future children.
  2. The author argu
... (read more)

Executive summary: Drawing on personal experience as a London-based Research Manager at MATS in 2025, the author reflects on research management as a generalist, service-oriented role combining scholar support, mentoring enablement, people management, and internal projects, concluding that it is highly rewarding and impactful despite trade-offs that ultimately motivated a transition to AISI.

Key points:

  1. The author frames research management as “servant leadership” plus “radical candor,” focused on helping scholars set goals, unblock progress, and receive tim
... (read more)

Executive summary: This report maps the current landscape of AI innovation in aquaculture, finding that commercially available AI tools are already widespread, concentrated in stock and growth management for high-value species like salmon and shrimp, and likely to become increasingly embedded in farm operations despite unclear implications for animal welfare.

Key points:

  1. The authors identified 91 companies with AI-enabled aquaculture products that are already on the market and could affect farmed animal welfare.
  2. Stock and growth management is the most common
... (read more)

I'm in favor. Mostly because it seems mildly useful, not because there are very big upsides outweighing big downsides. I don't really see what the downsides would be.

Executive summary: The post argues, confidently and polemically, that earning to give is an underrated and often superior way for most people to do good, because large, sustained donations typically outweigh the impact of personal lifestyle changes or pursuing “sexy” direct-impact jobs.

Key points:

  1. The author claims common moral intuitions about “being a good person” focus on visible kindness and lifestyle choices, but perform poorly when judged by actual impact.
  2. They argue that high earners who donate large sums, such as ~$200K+ per year to effective chariti
... (read more)

Centre for Wild Animal Welfare (and the EA Animal Welfare Fund earlier this year)

For what it is worth (anecdotal, I know), I have personally (face to face) spoken to no fewer than three people who have used DMT to knock out a cluster headache and could describe the process in great detail. The causality is pretty noticeable and clear.

I'd say it's a medium-sized deal. Academics can often propose ideas and show that they work on smaller (eg 7b) models. However, it then requires someone with a larger compute budget to like the idea & results and implement it at a larger scale.

There are some areas where access to compute is less important, like mech interp, red-teaming, creating benchmarks, or more theoretical areas of AI research. Areas are more amenable to academic research if they don't require training frontier models. Eg inference or small fine-tuning runs on frontier models are actua... (read more)

Question: Has anyone here applied to a role they found on the EA Opportunities board? (Attaching an image of what it used to look like + what it looks like now).

I’m curious if you ended up getting the role or not, and would really appreciate hearing either way. I’m trying to get a sense of how many applications and placements the board is leading to. Happy to DM if that’s easier!

Love this!

I'm a big proponent of using love of humanity as a motivator. It's true that guilt and/or rationalism can be motivating, but I've found that helping people because people are what make your life worth living seems like a much healthier way, and is even more motivating (more effective).

You nail the sentiment in your post on Life in a Day. Of course it would be nice if our biological evolution led us to being highly motivated by numbers on a spreadsheet, but operating on the hardware we have, the feeling Life in a Day gives is massively more motivatin... (read more)

I think it's also worth saying: one-day conferences usually require two nights in a hotel, which the attendee pays for, unless they're within day-travel range. You can thereby quite reasonably ask for a higher entry fee for a retreat, as it replaces what would otherwise be spent on a hotel.

This is really nice to hear, honestly. Those results sound genuinely meaningful, especially at that scale.

Would love to stay in touch and compare notes as you figure out how to do more of this 🙌

For people interested in this type of content, Yale has/had a similar course called "Life Worth Living", mostly from a religious rather than philosophy perspective. A variety of interviews with past guests: https://lifeworthliving.yale.edu/practitioners

This is a good take. 80k are good at it, bluedot too, and GWWC have started doing good things as well.

I think national orgs like EA Netherlands are well-positioned to do more, but we're only just waking up to this and are learning how best to allocate a portion of the EUR 30-40k in unrestricted funding we get from CEA. At EAN we've started working with Amplify and a marketing agency and have had great results (3x'd our intro programme completions and increased our EAGx attendance by 35%). Would like to do more of this in the future if we can find the money/re-allocate more of our funds.

Wrapped sort of feels like a roundabout way to give myself a compliment lol.

I didn't know who my most-read authors would be though - thanks for all the great posts @Vasco Grilo🔸, @Bentham's Bulldog, @Lizka !

I'm also a top 1% @Lizka reader, in part because I read the Forum norms doc so often. Lizka's great work on the Forum is still paying dividends - nice one!

+1 on we should be able to implement this. I welcome takes for and against (I think the team has been split on this, personally I'm quite pro). 

MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]

For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I pro

... (read more)

EA Global NYC will be taking place 16-18 Oct 2026 at the Sheraton Times Square. Applications for NYC, and all 2026 EAGs, are open now!

After the success of last year's event, our first EAG in NYC and our largest US EAG in years, we're excited to return and build on last year. For more information visit our website and contact hello@eaglobal.org with any questions.

Just to clarify, is the 8-week period the same for all participants? And if so, will you still accept some applications after the date has been decided?


I might apply, but I could only participate if the program were organized in July-August. Given that it could occur any time between February and August, I probably won't apply, since there's only about a 1/7 chance it will start in July.

Thanks for sharing this. It honestly makes me a bit sad to read, but in a thoughtful way. I still want to hope there is room to influence this over time, even if it’s slow and uneven.

I really appreciate that you found a way to keep having impact “through the side door,” and to stay engaged with the community rather than fully disengaging. That feels important.

I’d genuinely love to connect, compare notes, and trade ideas or intros 🙏

Agree entirely (and I have MORE doubts, even as I have been a vegan for almost 30 years).
It is indeed a very narrow and demanding identity (and even more so when other progressive issues are presented as part and parcel of the vegan lifestyle).

It's noteworthy that the founders of the Vegan Society in the 1940s welcomed everyone who was looking in the same direction, even if they weren't "practitioners".

 

Strong agree. I think some of that resistance comes from past comms “dramas” — for example around earning to give. It was pushed quite hard at one point, and that ended up shaping the public perception as if that’s the EA message, which understandably made people more cautious afterward.

At the same time, I find it interesting that initiatives like School for Moral Ambition are now communicating very similar underlying ideas, but in a way that feels much more accessible to “normal” people — and they haven’t faced anything like the same backlash.

To me that suggests it’s not that these ideas can’t be communicated broadly, but that how we frame and translate them really matters.

Thanks for putting this so beautifully together.

I remember the first time I was at a retreat with a lot of activists, and I had the same realization: they were all just regular people...

Simón, thank you for opening up this much-needed conversation.

I participate in EA Madrid and am studying with BlueDot Impact, and one of the first barriers I encountered was exactly this: the near-total absence of resources in Spanish for those who want to go deeper than the basics.

I completely agree that it's not just about translating, but about generating original content that engages with the realities of our contexts. When I try to explain AI safety or cost-effectiveness to colleagues in Madrid or to my network in Colombia, I constantly find myself transla... (read more)

This resonates deeply, especially the line: "Organizations without clear stories hit friction, even when doing excellent work."

I've seen this in my own career transition into EA: I had the skills and the commitment, but until I could articulate why my background in international partnerships and data operations connected to AI safety and global health work, I struggled to make others see the fit.

Your framework around Mission → ToC → OKRs → KPIs → Team is brilliant because it shows that organizational storytelling isn't just "marketing" – it's strategic cla... (read more)

Agreed.

One data point: in the recent EA community retreat I organized for 65 people in France in 2025 (not a "premium" retreat), the cost per participant was 156€. This includes my time as well as financial support from participants.

I tend to see these types of events as complementary. I think we should not treat their various outcomes as fungible. You get results of different, non-tradeable kinds. In particular:

  • Different types of participants
  • Different types of impact.

Helen Keller Europe, Mieux Donner, Against Malaria

AMF (1:1 matched through EANZ on top of usual tax advantages), ALLFED and a couple curries to a CB

Yep 100% agree with the weakness in EA comms. I'm happy there's been a fair amount of chat recently about this on the forum.

Cool! 

Could be a worthwhile home investment for particularly immunocompromised people too. 

I sent this to a friend who had really bad covid several times. 

I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, Institute for Law & AI (name change from Legal Priorities Project), etc have had to pivot to approximately all AI work. SFF is now almost all AI.

I hope that moral progress on animal rights/animal welfare will take much less than 1,000 years to achieve a transformative change, but I empathize with your disheartened feeling about how slow progress has been. Something taking centuries to happen is slow by human (or animal) standards but relatively fast within the timescales that longtermism often thinks about.

The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed other than those relating to existential risk are novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.

How much reduction in funding for non-AI global catastrophic risks has there been…?

I agree, though I think the large reduction in EA funding for non-AI GCR work is not optimal (but I'm biased with my ALLFED association).

Wow, sounds like a really great format to have different philosophers all come and pitch their philosophy as the best approach to life! I'd love to take a class like that.

Ah... now I see it above, and I realized I could mouse over - it is "year of crazy". So you think the world will get crazy two years after AGI.

Super cool - a bit hectic, and I substantively disagree with one of the "fallacies" the fallacy evaluator flagged on this post, but I'll definitely be using this going forward.

Thanks for the highlight! Yeah I would love better infrastructure for trying to really figure out what the best uses of money are. I don't think it has to be as formal/quantitative as GiveWell. To quote myself from a recent comment (bolding added)

At some level, implicitly ranking charities [eg by donating to one and not another] is kind of an insane thing for an individual to do - not in an anti-EA way (you can do way better than vibes/guessing randomly) but in a "there must be better mechanisms/institutions for outsourcing donation advice than GiveWell an

... (read more)

I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term "longtermism" was coined.

I think, independently from anything to do with the term "longtermism", there is plenty you could criticize in Bostrom's work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.

But that's a point about Bostrom's work that long predates the term "longtermism", not a point about whether coining and promoting that term was a good idea or not.

I think the fact that the term didn't add anything new is very bad because it came with a great cost. When you create a new set of jargon for an old idea you look naive and self-important. The EA community could have simply used framing that people already agreed with, instead they created a new term and field that we had to sell people on.

Discussions of "the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization" were elaborate and off-putting, when their only conclusions were the same old obvi... (read more)

I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has already been done to date is a much much bigger and harder ask than 'share your best guess of how you would allocate a billion dollars'.

My biggest takeaway from the comments so far is that many/most of the commenters don't care whether longtermism is a novel idea, or at least care about that much less than I do. I never really thought about that before — I never really thought that would be the response.

I guess it's fine to not care about that. The novelty (or lack thereof) of longtermism matters to me because it sure seems like a lot of people in EA have been talking and acting like it's a novel idea. I care about "truth in advertising" even as I also care about whether something is a goo... (read more)

Harry Lloyd in the philosophy department can do this.

While I find much of this post to be plausible, I’m not sure Ollie’s post supports your conclusions.

Ollie’s post is evaluating a set of retreats which averaged a cost of $1,500 per person. As commenters on the post noted, this seems very high. (I recall reading that low-end EAG costs are around the same spot.) For the one retreat I’m aware of, costs were 6-7x less. (This doesn’t include CEA staff costs, but those shouldn’t be able to make up the gap.)
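For concreteness, my arithmetic on that figure (using only the numbers in this comment):

\[ \$1{,}500 / 6 \approx \$250, \qquad \$1{,}500 / 7 \approx \$214 \ \text{per person.} \]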

Additionally, you write about how retreats might have lower outcomes due to a lack of scale. While I’m s... (read more)

One of the more excellent comments I've ever read on the EA Forum. Perceptive and nimbly expressed. Thank you.

people 100 years ago that did boring things focused on the current world did more for us than people dreaming of post-work utopias.

Very well said!

To that extent, the focus on x-risk seems quite reasonable: still existing is something we actually can reasonably believe will be valued by humans in a million years time

I totally agree. To be clear, I support mitigation of existential risks, global catastrophic risks, and all sorts of low-probab... (read more)

In percentages of pretax salary:
* 15% GiveWell
* 3% AI Safety orgs
* 1% Lightcone

Wow, this makes me feel old, haha! (Feeling old feels much better than I thought it would. It's good to be alive.)

There was a lot of scholarship on existential risks and global catastrophic risks going back to the 2000s. There was Nick Bostrom and the Future of Humanity Institute at Oxford, the Global Catastrophic Risks Conference (e.g. I love this talk from the 2008 conference), the Global Catastrophic Risks anthology published in 2008, and so on. So, existential risk/global catastrophic risk was an idea about which there had already been a lot of study e... (read more)

I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.

If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children who will be born between now and then. Longtermi... (read more)

I'm not especially familiar with the history - I came to EA after the term "longtermism" was coined so that's just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored and the low hanging fruit having been picked. You seem to think that old -> not neglected. And that does not follow. I don't know how old the idea of longtermism is. I don't particularly care. It is certainly older than the word. But it does seem to be pretty much completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.

Whether society ends up spending more money on asteroid defense or, possibly, on monitoring large volcanoes is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway.

To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And ... (read more)

I agree that the scholarship of Bostrom and others starting in the 2000s on existential risk and global catastrophic risk, particularly taking into account the moral value of the far future, does seem novel, and does also seem actionable and important, in that it might, for example, make us re-do a back-of-the-envelope calculation on the expected value of money spent on asteroid defense and motivate us to spend 2x more (or something like that).

As someone who was paying attention to this scholarship long before anyone was talking about "longtermism", I was ... (read more)

If you're saying that longtermism is not a novel idea, then I think we might agree.

Everything is relative to expectations. I tried to make that clear in the post, but let me try again. I think if something is pitched as a new idea, then it should be a new idea. If it's not a new idea, that should be made more clear. The kind of talk and activity I've observed around "longtermism" is incongruent with the notion that it's an idea that's at least decades and quite possibly many centuries old, about which much, if not most, if not all, the low-hanging fruit ha... (read more)

I'm not sure how "tool to use" and "resource to share" are so different - what do you think the important distinction is there? 

Completely agree on the responsibility front.

Yeah you literally wrote:

"Under my Christian worldview, nothing I have is really 'mine' anyway, and part of being a good human is to pass on what I've been handed, and even better multiply it if possible."

 

I think how I see it feels a bit different because I see money more as tool to use than as a resource to share. I think it should be used to help improve the lives of others, but it does importantly feel that it's my responsibility that mine gets used that way. Not sure if that makes sense.

Even to those otherwise sympathetic to SFE, its orientation toward subtraction can be demotivating.

 

It would not be wrong to assert that the entire process of civilization consists of controlling innate human aggression and, therefore, that all moral efforts to ultimately improve society have a subtractive structure: do not aggress, do not harm, do not tolerate suffering.

Compassionate religious philosophies have thus attempted to develop "positive" abstract concepts capable of emotionally engaging the believer in an ideology of altruism and benevolenc... (read more)

Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future" and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster has existed long before this exact name was coined to refer to it.

To the 3 main GWWC funds, GiveDirectly, Lightcone, EA, Wikipedia.

Strong agree from FarmKind’s perspective. An equal bugbear for me is that to the extent EA orgs focus on comms, they’re insufficiently focused on how to communicate to non-EAs. There seems to be a resistance to confront the fact that to grow we need to appeal to normal people and that means speaking to them the way that works for them, rather than what would work for us

When I read this, it made me realise that where we donate really matters. In our country, politics, weak organisations, corruption, and money laundering can mean that money makes things worse. That’s why choosing transparent, proven charities is so important if we truly want to help people.

I believe we can apply more of these frameworks because our country faces numerous competing needs, yet has limited public funds and capacity. Using scale, neglectedness, and solvability helps government and organisations prioritise programmes that deliver the greatest economic and social return, instead of spreading resources too thinly or relying only on intuition or political parties.

This reading relates to our economy because resources are limited in our country. If we choose the most effective programs, such as health and social support, we can help more people and reduce poverty more quickly. Some actions help many more people than others, so thinking carefully about where support goes can make a bigger difference.

I’d heard a lot about Zoe but nothing as in depth as this, thank you for sharing! Truly inspiring :)

Sorry again about that! Glad it's working now!

Here's the Unjournal evaluation package

A version of this work has been published in the International Journal of Forecasting under the title "Subjective-probability forecasts of existential risk: Initial results from a hybrid persuasion-forecasting tournament"
 

We're working to track our impact on evaluated research (see coda.io/d/Unjournal-...). So we asked Claude 4.5 to consider the differences across paper versions, how they related to the Unjournal evaluator suggestions, and whether this was likely to have been causal.

See Claude's report here  ... (read more)

EA animal welfare fund

Long term future fund

Community building

EA animal welfare fund

www.bureauburgerberaad.nl

It does look like most studies suggested small or no effects at more than 10 meters away, but I wonder how much they focused on eggs, larvae and zooplankton, which are plausibly more sensitive. For example, from this study (discussion):

Experimental air gun signal exposure decreased zooplankton abundance when compared with controls, as measured by sonar (~3–4 dB drop within 15–30 min) and net tows (median 64% decrease within 1 h), and caused a two- to threefold increase in dead adult and larval zooplankton. Impacts were observed out to the maximum 1.2 km

... (read more)

Unfathomably based. I'm stealing this one and its relevant ideas:

Choosing to donate based on the cost-effectiveness of helping is making a radical political statement about equality.

How big a deal is access to compute? In order to perform research on frontier models, are we to the point where only the companies with the largest compute and training budgets can play?

I've found it useful both for posts and for considering research and evaluations of research for Unjournal, with some limitations of course.

- The interface can be a little bit overwhelming, as it reports so many different outputs at the same time, some of them overlapping

+ But it's already pretty usable, and I expect this to improve.

+ It's an agent-based approach, so as LLMs improve you can swap in the new ones.

I'd love to see some experiments with directly integrating this into the EA forum or LessWrong in some ways, e.g. automatically doin... (read more)
