People hoping to do the most good with their giving face a tradeoff between (a) giving now and (b) investing to give more later. If the giver doesn't expect to learn anything over time about the best places to give, the question of when to give boils down roughly to the question of whether the interest rate at which they could invest exceeds the "charitable discount rate" at which doing a unit of good is growing more costly.
For me, and I think for pretty much any (utilitarian-leaning) aspiring effective altruist right now, the "learning" consideration should swamp all else. We currently lack, as far as I can tell, any good way to forecast the long-run impacts of our actions, and given that ignorance, I think all of our attempts at charity are about as likely to do harm as to do good. But more people are starting to think about the problem seriously, and there is at least a sliver of hope that progress will be made over the coming years or decades. In the meantime, there is nothing to do but invest and wait—or, perhaps, fund better prioritization research.
But if your goal as a philanthropist is to spend money at time t to increase welfare around time t, and more complex funding opportunities are not under consideration, it seems to me that there's a good a priori reason to think it's usually better to give later (even if we ignore the cluelessness/learning issue) which I haven't seen expressed elsewhere. The reason is that market interest rates are set in part by people's "rates of pure time preference" (RPTP). If investors were perfectly patient—i.e. if they sought to maximize the sum of their own non-discounted welfare over the course of their lives—an equilibrium interest rate of 7% would mean that a unit of welfare next year was projected to cost about 7% more, in dollar terms, than a unit of welfare this year. But investors are not perfectly patient; they discount their future welfare at some positive rate. If that rate is, say, 2%, then the indifference point of 7% returns implies that the rate at which the cost of welfare is rising (R) is only 5%. Philanthropists with zero RPTP can therefore do 2% more good for others by investing at 7% and giving next year.
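To make the arithmetic concrete, here is a minimal sketch using the post's hypothetical figures (a 7% interest rate and a 2% RPTP, so the cost of welfare grows at R = 5%):

```python
# Illustrative arithmetic for the give-now vs. give-later comparison.
# The 7% and 2% figures are the post's hypothetical values, not estimates.

interest_rate = 0.07  # market return available to the philanthropist
rptp = 0.02           # investors' rate of pure time preference
cost_growth = interest_rate - rptp  # rate at which welfare gets costlier (R = 5%)

# Welfare purchasable per dollar, normalizing this year's unit cost to 1:
welfare_now = 1.0
welfare_next_year = (1 + interest_rate) / (1 + cost_growth)

advantage = welfare_next_year / welfare_now - 1
print(f"Giving next year buys {advantage:.1%} more welfare")
```

The exact advantage is 1.07/1.05 − 1 ≈ 1.9%, which is what "2% more good" approximates.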
This is not a small concern. Some recent literature on discounting, for instance, has observed that the "near-zero social discount rate" reasoning usually used to justify extensive action against long-term risks like climate change also implies a need to promote investment in general, with optimal capital gains subsidies of as much as 50% (financed by correspondingly high taxes on present consumption).
Perhaps the cost of welfare is growing more quickly for some populations than others, and perhaps some of those populations are currently top contenders for our charity. For instance, perhaps the cost of helping the world's poorest is rising more quickly than 7% per year, as is sometimes claimed, due to the particularly fast progress being made in global development. (Scott Alexander reports Elie Hassenfeld basically making this point a few years ago.) If this is true, then indeed, we would do less good giving next year than giving this year.
But this one-year relationship must be temporary. Over the course of a long future, the rate of increase in the cost of producing a unit of welfare as efficiently as possible cannot, on average, exceed R. Otherwise, the most efficient way to do good would eventually be more costly than one particular way to do good: just giving money to ordinary investors for their own consumption. And since the long-run average rate of increase in the cost of welfare is bounded above by R ("5%"), investing at R + RPTP ("7%") must eventually result in an endowment able to buy more welfare than the endowment we started with.
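The long-run claim can be sketched the same way: if the endowment compounds at R + RPTP while the cost of welfare grows at no more than R, the welfare the endowment can buy grows without bound. Again using the post's example rates:

```python
# Sketch of the long-run claim: an endowment compounding at R + RPTP (7%)
# outpaces a unit cost of welfare growing at most at R (5%), so its
# welfare-purchasing power grows without bound. Rates are the post's examples.

investment_rate = 0.07  # R + RPTP
cost_growth = 0.05      # upper bound on long-run growth in the cost of welfare (R)

for years in (1, 10, 50, 100):
    endowment = (1 + investment_rate) ** years
    unit_cost = (1 + cost_growth) ** years
    print(f"after {years:3d} years: {endowment / unit_cost:.2f}x "
          "the welfare purchasable today")
```

After a century the ratio is already more than sixfold, and it keeps compounding; this is what drives the apparent "invest forever" paradox discussed next.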
At first glance, this leads to the paradoxical conclusion that we should invest forever and never give. The resolution of this paradox is that one way or another, the opportunity to invest at a rate R + RPTP will eventually not be available. There are various reasons an endowment could come to lose this opportunity. It could face idiosyncratic constraints (the money is going to be seized in a few years). Or, with increasing wealth, investment opportunities could dry up in general (depending on how quickly the marginal utility of consumption falls, people might grow so rich that the only projects worth investing in would be those that earned extremely high returns, and eventually the total size of such projects might fall short of the size of the endowment). Or investment opportunities could vanish for other reasons, such as the impending end of the world. In the last case, if the other constraints don't hold, the best thing to do is to invest forever, until one massive act of charity on the last day.
But under ordinary circumstances, to a first approximation, if a philanthropist's plan is to spend his money at some time t to increase consumption-based welfare as efficiently as possible at time t—and if these considerations are not swamped by others beyond the scope of this post, such as the risk of value drift—then it seems the philanthropist should wait.
[Edited 4 Nov. 2018 to include the link to Elie Hassenfeld making the point about vanishing giving opportunities in global poverty, and to weaken the last sentence so that it emphasizes the limited scope of this post. Edited 7 Nov. 2018 to point out this limited scope earlier on, so that it's clear that this argument doesn't apply to research funding.]