All of Jonas Vollmer's Comments + Replies

Twitter-length responses to 24 AI alignment arguments

I'm not confident, sorry for implying otherwise. 

After this discussion (and especially based on Greg's comment), I would revise my point as follows:

  • The AI might kill us because 1) it sees us as a threat (most likely), 2) it uses up our resources/environment for its own purposes (somewhat likely), or 3) it instantly converts all matter into whatever it deems useful (seems less likely to me, but still not unlikely).
  • I think common framings typically omit point 2, and overemphasize and overdramatize point 3 relative to point 1. We should fix that.
  • Is this is
... (read more)
EA Hub Berlin / German EA Content: Two meta funding opportunities

I'm excited about the success of Effektiv Spenden, and excited about the idea of producing German EA content (if executed well). I'm unconvinced by the coworking space and sent you some feedback/input via email.

Thanks for your feedback. Just created an anonymous feedback form for people who have spent time at TEAMWORK to get more critical input (will put it on our website, in our handbook etc. as well).

I think some of your concerns could be addressed with more funding. Others probably only if a bigger player like CEA goes all in and opens up its own EA coworking/event space in Berlin. As far as I know, no such plans exist, but if I'm wrong, please let me know; we'd be happy to focus completely on our core business.

Compared to the situation in Berlin before we opened up TE... (read more)

EA is more than longtermism

The EAIF funds many of the things you listed, and Peter Wildeford has been especially interested in making them happen! Also, the Open Phil GHW team is expanding a lot and has made several excellent grants in these areas.

That said, I agree with the overall sentiment you expressed and definitely think there's something there.

One effect is also: there's not so much proactive encouragement to apply for funding with neartermist projects, which results in fewer things getting funded, which results in people assuming that there's no funding, even tho... (read more)

How many EAs failed in high risk, high reward projects?

I think some of the worst failures are mediocre projects that go sort-of okay and therefore continue to eat up talent for a much longer time than needed; cases where ambitious projects fail to "fail fast". It takes a lot of judgment ability and self-honesty to tell that it's a failure relative to what one could have worked on otherwise.

One example is Raising for Effective Giving, a poker fundraising project that I helped found and run. It showed a lot of promise in terms of $ raised per $ spent over the years it was operating, and actually raised $25m for ... (read more)

EA and the current funding situation

.

[This comment is no longer endorsed by its author]
EA and the current funding situation

Regarding "Harming quality of thought", my main worry is a more subtle one:

It is not that people might end up with different priorities than they would otherwise have, but that they might end up with the same priorities but worse reasoning.

I.e., before there was a lot of funding, they thought "Oh, I should really think about what to work on. After thinking about it really carefully, X seems most important".

Now they think "Oh X seems important and also what I will get funded for, so I'll look into that first. After looking into it, I agree with fun... (read more)

Doing good easier: how to have passive impact

I think the "passive impact" framing encourages us too much to start lots of things and delegate/automate them. I prefer "maximize (active or passive) impact (e.g. by building a massively scalable organization)". This includes the strategy "build a really excellent org and obsessively keep working on it until it's amazing", which doesn't pattern-match "passive impact" and seems superior to me because a lot of the impact is often unlocked in the tail-end scenarios.

You might argue that excellent orgs often rely on a great deal of delegation and automation, a... (read more)

2 · Kat Woods · 4mo
Yeah, it's an interesting question whether, all else being equal, it's better to set up many passive impact streams or build one very amazing and large organization. I think it all depends on the particulars. Some factors are:

  • What's your personal fit? I think a really important factor is personal fit. Some people love the idea of staying at one organization for ten years and deeply optimizing all of it and scaling it massively. Others have an existential crisis just thinking of the scenario. Passive impact is a better strategy if you like things when they're small, with a super startup vibe, and if you find it hard to stay interested in the same thing for years on end.
  • What sort of passive impact are you setting up? I think obsessively optimizing an amazing organization and working hard on replacing yourself with a stellar person, such that it continues to run as an amazing org without you, probably beats starting and staying on the same org. On the other hand, digital automation tends to decay a lot more without at least somebody staying on to maintain the project, and that would on average be beaten by optimizing a single org.
The Wicked Problem Experience

Here's a provocative take on your experience that I don't really endorse, but I'd be interested in hearing your reaction to:

Finding unusually cost-effective global health charities isn't actually a wicked problem. You just look into the existing literature on global health prioritization, apply a bunch of quick heuristics to find the top interventions, find charities implementing them, and then see which ones will get more done with more funding. In fact, Giving What We Can independently started recommending the Against Malaria Foundation through a process

... (read more)
FTX/CEA - show us your numbers!

Personally I think going for something like 50k doesn't make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others.

Also, my guess of how CEA/FTX are thinking about this is actually that they assume an even smaller number (perhaps 2k or so?) because they're aiming for highly engaged people, and don't pay as much attention to how many less engaged people they're bringing in.

3 · Jeff Kaufman · 4mo
Peter was using a bar of "actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)". GWWC is 8k on its own, though there's probably been substantial attrition. But yes, because we expect impact to be power-lawish, if you order all plausible EAs by impact there will probably not be any especially compelling places to draw a line.
FTX/CEA - show us your numbers!

Yeah I fully agree with this; that's partly why I wrote "gestures". Probably should have flagged it more explicitly from the beginning.

EA Houses: Live or Stay with EAs Around The World

Are you curating the spreadsheet in any way? In particular, do you have a mechanism for removing entries submitted by people who have in the past made unwanted sexual advances or who otherwise have a track record of not respecting community members' boundaries?

3 · Emerson Spartz · 4mo
Light touch curation, yes - we'd certainly appreciate a heads up on anything like this!
FTX/CEA - show us your numbers!

I'd personally be pretty excited to see well-run analyses of this type, and would be excited for you or anyone who upvoted this to go for it. I think the reason why it hasn't happened is simply that it's always vastly easier to say that other people should do something than to actually do it yourself.

I completely agree that it is far easier to suggest an analysis than to execute one! I personally won't have the capacity to do this in the next 12-18 months, but would be happy to give feedback on a proposal and/or the research as it develops if someone else is willing and able to take up the mantle. 

I do think that this analysis is more likely to be done (and in a high quality way) if it was either done by, commissioned by, or executed with significant buy-in from CEA and other key stakeholders involved in community building and running local g... (read more)

2 · IanDavidMoss · 4mo
Agreed! Note, however, that in the case of the FTX grants it will be pretty hard to do this analysis oneself without access to at the very least the list of funded projects, if not the full applications.
FTX/CEA - show us your numbers!

I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other 1999 [EDIT: 4999] EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That's a factor of 30x, so a framing like "likely vastly more valuable" would seem more appropriate to me.
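
A minimal sketch of this back-of-the-envelope calculation in Python (the 5000-EA count, the $20 billion donation, and the $126k baseline are the comment's hypothetical assumptions, not real data):

```python
# Mean-impact illustration under the comment's stated (hypothetical) assumptions:
# 5000 EAs in total; one donor gives $20B; the remaining 4999 contribute nothing.
num_eas = 5000
total_donated = 20e9                       # $20 billion from a single donor
mean_impact = total_donated / num_eas      # averaged over all 5000 EAs
baseline = 126_000                         # the $126k per-EA figure quoted above

print(f"Mean impact per EA: ${mean_impact:,.0f}")            # $4,000,000
print(f"Ratio to baseline:  {mean_impact / baseline:.0f}x")  # ~32x ("a factor of 30x")
```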

2 · Jeff Kaufman · 4mo
I know this isn't your main point, but that's ~1/10 of what I would have guessed. 5k is only 3x the number of people who attended EAG London this year.

One reason the value per recruited EA might be lower is that you might think that the people who need to be recruited are systematically less valuable on average than the people who don't need to be. Possibly not a huge adjustment in any case, but worth considering.

3 · Linch · 4mo
Should be 4999
Free-spending EA might be a big problem for optics and epistemics

Most EAs I've met over the years don't seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.

That said, I like options and norms that discourage fancy options that don't come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn't make sense to pay extra just for a nicer room.

I think I agree with this. If I look historically at my mistakes in spending money, there was very likely substantially more utility lost from spending too little money than from spending too much.

To be more precise, most of my historical mistakes do not come from consciously thinking about time-money tradeoffs and choosing money instead of time ("oh I can Uber or take the bus to this event but Uber is expensive so I should take the bus instead") but from some money-expensive options not being in my explicit option set to prioritize i... (read more)

Effectiveness is a Conjunction of Multipliers

FWIW I think superlinear returns are plausible even for research problems with long timelines; I'd just guess that the returns are less superlinear, and that it's harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.

Reviews of "Is power-seeking AI an existential risk?"

Very cool!

It would be convenient to have the specific questions that people give probabilities for (e.g. I think "timelines" refers to the year 2070?)

Effectiveness is a Conjunction of Multipliers

Similarly, speed matters in quant trading not primarily because of real-world influence on the markets, but because you're competing for speed with other traders.

1 · Mathieu Putz · 4mo
Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable. I would just note that speed-sensitive considerations, in the broad sense you use the term, will be relevant to many (most?) people's careers, including researchers' to some extent (reputation helps research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.
Effectiveness is a Conjunction of Multipliers

The examples you give fit my notion of speed - you're trying to make things happen faster than the people with whom you're competing for seniority/reputation.

Effectiveness is a Conjunction of Multipliers

A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX's speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that's mainly useful for long AI timelines), it may apply less strongly.

2 · Mathieu Putz · 4mo
I agree that superlinearity is way more pronounced in some cases than in others. However, I still think there can be some superlinear terms for things that aren't inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.
The Vultures Are Circling

I sent a DM to the author asking if they could share examples. If you know of any, please DM me!

The Vultures Are Circling

Atlas Fellowship cofounder here. Just saw this article. I'm currently running a workshop, so I may get back with a response in a few days.

For now, I wanted to point out that the $50,000 scholarship is for educational purposes only. (If it says otherwise anywhere, let me know.)

the $50,000 scholarship is for educational purposes only

That's not how I understood the scholarship when I read the information on the website.

The FAQ says

Scholarship money should be treated as “professional development funding” for award winners. This means the funds could be spent on things like professional travel, textbooks, technology, college tuition, supplementing unpaid internships, and more.

and

Once the student turns 18, they have two options:

  1. Submit an award disbursement request every year, indicating the amount of scholarship the student wou
... (read more)

Don't people have the option to take it as a lump sum? If that is the case, then presumably people who are willing to game the system to get the money will not be particularly persuaded by a clear instruction to "only spend it on education".

Twitter-length responses to 24 AI alignment arguments

Hmm yeah, good point; I assign [EDIT: fairly but not very] low credence to takeoffs that fast.

2 · RobBensinger · 2mo
Why? Seems to me like a thing that's hard to be confident about. Misaligned AGI will want to kill humans because we're potential threats (e.g., we could build a rival AGI), and because we're using matter and burning calories that could be put to other uses. It would also want to use the resources that we depend on to survive (e.g., food, air, water, sunlight). I don't understand the logic of fixating on exactly which of these reasons is most mentally salient to the AGI at the time it kills us.
Twitter-length responses to 24 AI alignment arguments

My point is that the immediate cause of death for humans will most likely not be that the AI wants to use human atoms in service of its goals, but that the AI wants to use the atoms that make up survival-relevant infrastructure to build something, and humans die as a result of that (and their atoms may later be used for something else). Perhaps a practically irrelevant nitpick, but I think this mistake can make AI risk worries less credible among some people (including myself).

It depends on takeoff speed. I've always imagined the "atoms..." thing in the context of a fast takeoff, where, say, the Earth is converted to computronium by nanobot swarms / grey goo in a matter of hours.

Twitter-length responses to 24 AI alignment arguments

I always cringe at the "humans are made of atoms that can be used as raw materials" point. An AI might kill all humans by disrupting their food production or other survival-relevant systems, or deliberately kill them because they're potential threats (as mentioned above). But in terms of raw materials, most atoms are vastly easier to access than humans, who can defend themselves, run away, or similar.

Edit: I want to partly but not fully retract this comment: I think the default framing is missing something important, but also the "raw materials" point i... (read more)

Any atom that isn't being used in service of the AI's goal could instead be used in service of the AI's goal. Which particular atoms are easiest to access isn't relevant; it will just use all of them.

Announcing the Future Fund

I think Open Phil is actually doing the things you say they aren't doing. I think the main value-add of the Future Fund is additional grantmaking capacity and experimenting with different mechanisms (such as prizes and regranting pools).

6 · Jack Malde · 5mo
OK, you may well be right, although I'm not sure Open Phil is as public about the specific project ideas they want funded as the Future Fund is? The other thing I'd say is that I don't actually think the cause areas of Open Phil and the Future Fund overlap that much. Open Phil isn't solely longtermist, and the main overlap between the two orgs seems to be AI and biosecurity.
Some thoughts on vegetarianism and veganism

Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.

I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.

I'll tap out of the conversation now – don't feel like I have time to discuss further, sorry.

Some thoughts on vegetarianism and veganism

I don't think your third paragraph describes what I think / feel. It's more the other way around: I used to eat a lot of meat, and once I stopped doing that, I started seeing animals in a different light (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wron... (read more)

3 · MichaelStJules · 6mo
I think it's plausible that being more deliberate in your diet to avoid the lowest welfare options could have a lot of the same impact on your own perceptions of animals. That being said, eating meat again would feel wrong to me, too. I specifically work on animal welfare. How can I eat those I'm trying to help? Similarly, I'm a bit suspicious of non-vegetarian veterinarians helping farmed animals. If working with farmed animals doesn't turn them away from meat, do they actually have their best interests at heart? What kind of doctor eats their patients? And maybe this logic extends to those weighing the interests of nonhuman animals or similarly minded artificial sentience in the future.
1 · Jack R · 6mo
That makes sense, yeah. And I could see this being costly enough that it's best to continue avoiding meat.
EAF’s ballot initiative doubled Zurich’s development aid

Update: The basic rights for primates initiative was rejected, with 74.7% of voters against, 25.3% in favor.

Political initiative: Fundamental rights for primates

Update: This initiative was rejected, with 74.7% of voters against, 25.3% in favor.

We need more nuance regarding funding gaps

I disagree with this, not because we're particularly good at predicting which projects will be successful, but because funders have been very generous with money lately (e.g., EAIF and LTFF have had pretty high acceptance rates), which makes it pretty unlikely that we'll miss a good one.

4 · Linch · 6mo
Oh yeah that's a really good point.
Some thoughts on vegetarianism and veganism

… except that not kicking people also saves time, whereas entirely avoiding animal products often involves significant hassle and time cost?

I suspect there are examples of things EAs do out of consideration for other humans that are just as costly, and they justify them on the grounds that this comes out of their "fuzzies" budget. e.g. Investing in serious romantic or familial relationships. I'm personally rather skeptical that I would spend any time and money saved by being non-vegan on altruistically important things, even if I wanted to. (Plus there is Nikola's point that if you already do care a lot about animals, the emotional cost of acting in a way that financially supports factory farming could be nontrivial.)

We need more nuance regarding funding gaps

I meant pretty much any of the possible interpretations of goodness, though not the literal interpretation of 'proposal'.

8 · Linch · 6mo
Thanks! I don't have strong evidence for this, but I definitely have a strong prior that we'll miss good grants, evaluated from the POV of benevolent impartial agents with perfect clairvoyance. The world just doesn't seem that fundamentally predictable to me.
Some thoughts on vegetarianism and veganism

Quickly written:

  • Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
  • In the context of AI, suffering subroutines might be an example of that.
  • Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is,
... (read more)
9 · Stefan_Schubert · 6mo
Thanks. Regarding the first point, yeah we should do something about it, but that seems unrelated to the point about eating meat leading to motivated reasoning about s-risks and AI.

Regarding the second point, it is not obvious to me that eating meat leads to worse reasoning about suffering subroutines. In principle the opposite might be true. Seems very hard to tell. I think there is a risk that arguments about this often beg the question (e.g. by assuming that suffering subroutines are a major risk, which is the issue under discussion).

Regarding the third point - not quite sure I follow, but in any event I think that futures without strong AGI might be dominated in expected value terms by futures with strong AGI. And certainly future downside risks should be considered, but the link between that and current meat-eating is non-obvious.
We need more nuance regarding funding gaps

LTFF, OP, SFF, FTXF etc. are all keen to fund bio stuff. If they don't do so in practice, it's because nobody pitches them with good proposals, not because they're not interested. Also, some of the bio grants aren't public.

If they don't do so in practice, it's because nobody pitches them with good proposals, not because they're not interested

I think "good proposals" should be disambiguated a bit here. There's a range of possible options* for what you might mean. 

*e.g.

  • "good"ness from the POV of the specific funders vs an ideal rational observer vs an omniscient entity with perfect clairvoyance,
  • goodness as defined by naive EV vs (e.g.) including vetting costs,
  • "proposal" defined literally vs meaning the whole package including e.g. founder quality, etc., etc.
... (read more)
Some thoughts on vegetarianism and veganism

Yeah, the "avoid intercontinental flights" option was intended as something clearly ineffective that people still do – i.e. as an example of something that seems way too costly and should be discouraged. So I fully agree with you that we should encourage ~0% of that range for EAs.

My point is that avoiding animal products is substantially more cost-effective than those interventions; I'm still not sure whether it meets the threshold for an EA activity, but it might. It's been a while since I looked into the exact numbers, but I think you can avert substantial time sp... (read more)

We need more nuance regarding funding gaps

What do the numbers in brackets mean? Including this information more prominently would make this post a lot more skimmable.

The categories for biorisk look very wrong to me (I don't think there's a funding gap).

7 · Joey · 6mo
Keen to hear about any data on this topic. James is right: it is the number of ~EA funders with unique perspectives.
1 · James Ozden · 6mo
Maybe Joey can clear it up, but I believe it's the number of funders in that bucket, as an indication of funder diversity.
Some thoughts on vegetarianism and veganism

Copying a comment I once wrote:

  • eating veg sits somewhere between "avoid intercontinental flights" and "donate to effective charities" in terms of expected impact, and I'm not sure where to draw the line between "altruistic actions that seem way too costly and should be discouraged" and "altruistic actions that seem a reasonable early step in one's EA journey and should be encouraged"

  • Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings or think correctly about the badness of factory fa

... (read more)
6 · Jack R · 6mo
I'm not very convinced of your second point (though I could be; curious to hear why it feels true for you). I don't currently see why you think the bolded words instead of: "it seems harder to see the importance of future beings or think correctly about the badness of existential risk while wasting time eating non-meat". It feels like a universally compelling argument, or at least, I don't see where you think the argument should stop applying on a spectrum between something like "it seems hard to think correctly about x-risk without having a career in it" and "it seems hard to think correctly about the importance of all sentient beings while squashing dust mites every time you sleep [https://www.youtube.com/watch?v=SVJXvzVQbGU]".

ETA: I imagine you wrote the bolded words because they feel true to you, i.e. that eating meat might cause you to value drift or have worse epistemics in certain ways such that it's worth staying vegan. I am curious about what explicable arguments that feeling (if you have it) might be tracking (e.g. in case they cause me to stay vegan).
6 · Stefan_Schubert · 6mo
Could you expand on what effects eating meat would have on thinking about s-risks and other AI stuff? What kinds of scenarios are you thinking of? My initial reaction is somewhat sceptical. I think these effects are hard to assess and could go either way. But it depends a bit on what mechanisms you have in mind.

eating veg sits somewhere between "avoid intercontinental flights" and "donate to effective charities" in terms of expected impact, and I'm not sure where to draw the line between "altruistic actions that seem way too costly and should be discouraged" and "altruistic actions that seem a reasonable early step in one's EA journey and should be encouraged"

I am very confused by this statement. I feel like we've generally agreed that we don't really encourage people as a community to take altruistic actions if we don't really think it competes with ... (read more)

RyanCarey's Shortform

No applications yet. In general, we rarely get the applications we ask/hope for; a reasonable default assumption is that nobody has been doing anything.

Effective Crypto | Future State

Thanks for the suggestions! I agree these would be improvements and we've been thinking along similar lines. We don't currently have the capacity to implement this and may only prioritize the project if there's another major bull run, but appreciate the concrete ideas!

4 · Ian Sagstetter · 7mo
Thanks Jonas! I'm currently taking a break from full-time work to focus on Web3 projects. Happy to invest some time in this area to prepare for another bull run. Let me know what you think.
EA Funds has appointed new fund managers

Yes, we still do have that intention. We're currently thinly staffed, so I think it'll still take a while for us to publish a polished policy. For now, here's the current beta version of our internal Conflict of Interest policy:

Conflict of interest policy

We are still working on the conflict of interest policy. For now, please stick to the following:

  • Please follow these two high-level principles:
    1. Avoid perceived or actual conflicts of interest, as this can impair decision-making and permanently damage donor trust.
    2. Ensure all relevant information still ge
... (read more)
RyanCarey's Shortform

I'm interested in funding someone with a reasonable track record to work on this (if WikiHow permits funding). You can submit a very quick-and-dirty funding application here.

8 · Jackson Wagner · 7mo
Have you had any bites on this project yet? I just had the misfortune of encountering the WikiHow entries for "How to Choose a Charity to Support [https://www.wikihow.com/Choose-a-Charity-to-Support]", "How to Donate to Charities Wisely [https://www.wikihow.com/Donate-to-Charities-Wisely]", and "How to Improve the Lives of the Poor [https://www.wikihow.com/Help-Improve-the-Lives-of-the-Poor]", which naturally have no mention of anything remotely EA-adjacent (like considering the impact/effectiveness of your donations or donating to health interventions in poor countries), instead featuring gems like:

  • "Do an inventory of what's important to you.... Maybe you remember having the music program canceled at your school as a child."
  • A three-way breakdown of ways to donate to charity including "donate money", "donate time", and then the incongruously specific/macabre "Donate blood or organs."
  • I did appreciate the off-hand mention that "putting a student through grade school in the United States can cost upwards of $100,000. In some developing countries, you can save about 30 lives for the same amount," which hilariously is not followed up on whatsoever by the article.
  • A truly obsessive focus on looking up charities' detailed tax records and contacting them, etc., to make sure they are not literal scams.

I'm not sure that creating new WikiHow entries about donations (or career choice) will be super high-impact. We'll be competing with the existing articles, which aren't going to go away just because we write our own. And doesn't everybody know [https://thezvi.wordpress.com/2019/07/02/everybody-knows/] that WikiHow is not a reliable source of good advice, at least everybody from the smart, young, ambitious demographic that EA is most eager to target? Still, it would be easy to produce some WikiHow articles just by copy-pasting and lightly reformatting existing intro-to-EA content. I think I'm a little too busy to do this project myself r
Democratising Risk - or how EA deals with critics

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

Long-Term Future Fund: May 2021 grant recommendations

Based on the publicly available information on the SFF website, I guess the answer is 'no', but not sure.

AGI Safety Fundamentals curriculum and application

I vaguely remember seeing a website for that program, but can't find the link – is this post the most up-to-date resource, or is the website more up to date, and if the latter, do you have a link? Thank you!

4 · richard_ngo · 8mo
This post (plus the linked curriculum) is the most up-to-date resource. There's also this website [https://www.eacambridge.org/agi-safety-fundamentals], but it's basically just a (less-up-to-date) version of the curriculum.
You can now apply to EA Funds anytime! (LTFF & EAIF only)

But FYI the fund pages still refer to the Feb/Jul/Nov grant schedule, so probably worth updating that when you have a chance.

Thanks, fixed!

Re: the balances on the fund web pages, it looks like the “fund payout” numbers only reflect grants that have been reported but not the interim grants since the last report, is that correct?

Correct.

Do the fund balances being displayed also exclude these unreported grants (which would lead to higher cash balances being displayed than the funds currently have available)?

No, they don't.

I can see that this is confusing. We ... (read more)

1 · AnonymousEAForumAccount · 8mo
Thanks Jonas!
You can now apply to EA Funds anytime! (LTFF & EAIF only)

No, the EAIF and LTFF now have rolling applications: https://forum.effectivealtruism.org/posts/oz4ZWh6xpgFheJror/you-can-now-apply-to-ea-funds-anytime-ltff-and-eaif-only

There have been dozens of grants made since the last published reports, many more than over the same period last year, in both number and dollar amount.

Both LTFF and EAIF have received large amounts of funding recently, some of which has already been processed, and some of which hasn't.

1 · AnonymousEAForumAccount · 8mo
Thanks for clarifying Jonas. Glad to hear the funds have been making regular grants (which to me is much more important than whether they follow a specific schedule). But FYI the fund pages still refer to the Feb/Jul/Nov grant schedule, so probably worth updating that when you have a chance.

Re: the balances on the fund web pages, it looks like the "fund payout" numbers only reflect grants that have been reported but not the interim grants since the last report, is that correct? Do the fund balances being displayed also exclude these unreported grants (which would lead to higher cash balances being displayed than the funds currently have available)? Just trying to make sure I understand what the numbers on the funds' pages are meant to represent.
EA megaprojects continued

I haven't gotten a response so far, and I talked to some other grantmakers who didn't seem to know you either, so I'm confused about what's going on here.

1 · Jan-WillemvanPutten · 7mo
Hahah same here Jonas! Let me know if you know more