All of SethBaum's Comments + Replies

I had a great overall experience at the conference. As a speaker, everything went smoothly for me. The organizers were great and I would definitely recommend them for future events. I would also recommend people attend future EAGxVirtual events.

It's important to emphasize the overall value of remote events. Advantages include reduced greenhouse gas emissions (especially from air travel), lower cost, less time commitment, less time away from family, COVID safety, and no travel visa requirements (which facilitates geographic diversity), among others. I talk about this in my rece... (read more)

Thanks for your thoughts on this. To briefly reply, I would disagree with the idea that renewables, electric cars, etc. are a waste. As much as I might personally like to see more car-free urban design, it is still the case that renewables plus electric cars can substantially reduce emissions. The point about ending fossil fuel subsidies is an important one, albeit with a caveat about political feasibility. To the extent that there is political will to end these subsidies, doing so is clearly a good thing, but finding that political will can be elusive.

1
Noah Scales
2y
Well, on cars in the US and China:
* there are no real plans for car-free urban design
* all cars take a lot of energy, GHGs, and resources to make
* we drive more than we need to
* we drive more than we can sustain

As far as oil/gas subsidies, I'd like to see governments tax the oil/gas companies but not disallow oil/gas exploration. We will need the fuel. I am thinking of what could work, not what's ideal. So, car-free urban design sounds great; it would work. Conservation will work. Lowering emissions through renewables and electric cars won't, because:
* building new infrastructure and manufacturing new vehicles will produce lots of GHGs
* we won't know where to put the infrastructure, because climate and other pressures will eat away at the existing infrastructure and move people around; we'll have to patch up the infrastructure and keep on
* supply chains will take such a hit that redundant transportation, as with multiple cars per family, won't happen; there'll be a sharp decline in car sales and use within a couple of decades
* energy consumption will fall because of unexpected shortages, and conservation will be required. It would be nice if it were proactive, though, so it would be less damaging to lifestyles and done better
* energy security will require portable, energy-dense fuel and use of the existing infrastructure. We will need oil and gas, and will just have to sip it where before we used to gulp it
* I imagine the rich will have their own cars and drive them on roads that are mostly empty except for buses, government and ride-sharing vehicles, and a few transport vehicles. And that's if the roads are maintained.
* there'll be less goods transport on roads, and less goods transport overall. Trains might make even more of a contribution there, though. Shipping might be stalled because of damaged ports and canals or impassable storms at sea (I only suspect that last one).

Every market-driven change, because of the time frames inv

Thank you for your thoughtful comments.

To all: Let me just briefly add that I believe this to be a compelling perspective worth taking seriously. Anyone wishing to contact Morton can find his email address on his page at UNEP here.

A point that I hope comes across in that section and throughout the post is that a lot of decisions on what to do on climate change do not depend on how large the catastrophic risk is. There is a role for analysis of the risk, and I have linked to studies doing that analysis. However, for purposes of this post, my interest is in discussing the details of the constructive actions that can be taken to address the risk instead of getting bogged down in analysis of the risk itself.

A more detailed catastrophic risk analysis could be useful for things like evalu... (read more)

That's a good question, thanks. My understanding is that opportunities to address cement issues are more specialized; see e.g. this. It could be a worthy point of focus for people pursuing a career in climate change or other more extensive involvement, especially people with relevant skill sets/etc. The neglectedness of cement is a point in favor of work on it. (Ditto refrigerants.) However, those who aren't pursuing something like this are unlikely to encounter cement opportunities. I could be wrong about this - I'm not a cement expert myself - though I c... (read more)

Thanks for your comments. Some replies:

On renewables, coal, etc. - to me, the bottom line is the value of an "all of the above" approach to reducing emissions. Where there are opportunities to advance renewables or even nuclear, great. Where there are opportunities to reduce energy consumption, also great. The potential for renewables is amazing but we can't count on it solving the entire emissions problem in a sufficiently timely fashion.

On water shortages, this is not my expertise. There is a lot of work on climate change & water, but it would not su... (read more)

1
Noah Scales
2y
I think we should preserve existing infrastructure and direct all new infrastructure spending toward conservation, rather than trying to replace the gas/oil infrastructure at all in the short term. Society could still do things like add geothermal to homes, increase train use, reduce car fleets, deploy passive solar heating/lighting, or even develop coal for energy. Countries could end fossil fuel subsidies and take more control over oil/gas corporations. In the 1970s we understood that domestic energy supplies were a national security issue, and that conservation was a way out of oil wars. We didn't take that path, but we still can. Spending on renewables actually seems too speculative to me, as it ignores risks of changes in weather, climate, available resources, energy demand, and infrastructure location requirements.

Electrifying cars seems like a waste to me. There'll be no one-to-one substitution of an electric fleet for a gas fleet. The total fleet counts will have to radically decline, and then it will be obvious we should have kept the fuel infrastructure we had. I'm not talking about the business-as-usual scenario for oil/gas. I also don't agree with the IPCC estimates of the relative value of conservation. Most of my thinking is not about creating a vision of an ideal; it's about how to negotiate through a very difficult future.

As far as water availability, it might be a frustrating obstacle to most required societal changes. There are no reassuring models of freshwater availability changes in the next 30 years. They are all extreme as far as I know. But calculating what the impacts are is what's missing from common discourse. People will move. How many? Conflicts might start. What kind? How dangerous? Etc.

Thanks for the question.

Asteroid risk probably has the most cooperation and the most transparent communication. Asteroid risk is notable for its high degree of agreement: all parties around the world agree that it would be bad for Earth to get hit by a large rock, and that there should be astronomy to detect nearby asteroids, and that if a large Earthbound asteroid is detected, there should be some sort of mission to deflect it away from Earth. There are some points of disagreement, such as on the use of nuclear explosives for asteroid deflection, but this... (read more)

The best way to answer this question is probably in terms of GCRI's three major areas of activity: research, outreach, and community support, plus the fourth item of organization development.

GCRI's ultimate goal is to reduce global catastrophic risk. Everything we do is oriented toward that end. Our research develops ideas and reduces uncertainty about how best to reduce global catastrophic risk. Our outreach gets those ideas to important decision-makers and helps us understand what research questions decision-makers would benefit from answers to. Our comm... (read more)

I regret that I don't have a good answer to this question. Global catastrophic risk doesn't have much in the way of statistics, due to the lack of prior global catastrophes. (Which is a good thing!)

There are some statistics on the amount of work being done on global catastrophic risk. For that, I would recommend the paper Accumulating evidence using crowdsourcing and machine learning: A living bibliography about existential risk and global catastrophic risk by Gorm Shackelford and colleagues at CSER. It finds that there is a significant body of work on the... (read more)

Thanks for the question. I see that the question is specifically on neglected areas of research, not other types of activity, so I will focus my answer on that. I'll also note that my answers to this question map pretty closely to my own research agenda, which may be a bit of a bias, though it's also the case that I try to focus my research on the most important open questions.

For AI, there are a variety of topics in need of more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts... (read more)

Thanks for the question. To summarize, I don't have a clear ranking of the risks, and I don't think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one's background and other factors.

First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.

Se... (read more)

Interesting question, thanks. To summarize my answer: I believe nuclear weapons have the largest opportunities for a few select individuals to make an impact; climate change has the smallest opportunities; and AI, asteroids, and biosecurity are somewhere in between.

First, please note that I am answering this question without regard for the magnitude of the risks. One risk might have larger opportunities for an individual to make an impact on because it's a much larger risk. However, accounting for that turns this into a question about which risks are large... (read more)

That's an interesting question, thanks. To summarize my remarks below: AI and climate change are more market-oriented, asteroids and nuclear weapons are more government-oriented, biosecurity is a mix of both, and philanthropy has a role everywhere.

First, market solutions will be limited for all global catastrophic risks because the risks inevitably involve major externalities. The benefits of reducing global catastrophic risks go to people all over the world and future generations. Markets aren't set up to handle that sort of value.

That said, there can sti... (read more)

3
Madhav Malhotra
2y
This is a very comprehensive answer! I especially appreciate your summary up top and you linking to sources. Thank you :-)

Hi everyone. Thanks for all the questions so far. I'll be online for most of the day today and I'll try to get to as many of your questions as I can.

Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it's important to develop these sorts of ideas in advance. If we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but this is only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics can also matter for near-term / weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.

Glad to hear that you're interested in these topics. It's a good area to pursue work in.

Regarding how to get involved, to a large extent my advice is just general advice for getting involved in any area: study, network, and pursue opportunities as you get them. The networking can often be the limiting factor for people new to something. I would keep an eye on fellowship programs, such as the ones listed here. One of those is the GCRI Advising and Collaboration Program, which to a large extent exists to provide an entry point for people interested in these ... (read more)

Thanks for your questions. In reply:

I would not ever expect governments to respond to catastrophic risks to a degree that I (for one) think is proportionate to the importance of the risks. This is because I would rate the risks as being more important than most other people would. There are a variety of reasons for this, including their intergenerational and global nature, as well as some psychological and institutional factors. Jonathan Wiener's paper The Tragedy of the Uncommons is a good read on this.

That said, I do see potential for government... (read more)

1
Ben Stewart
2y
Thanks!

Thank you for these thoughtful comments.

Regarding exploration vs. exploitation:

First, my understanding of what you mean by this is that exploration involves taking time to learn more about an area, whereas exploitation involves focusing on trying to make an impact within that area. On one hand, it can be important to learn more in order to better orient oneself in the right direction. On the other hand, spending too much time on exploration can mean not making much of an impact. My apologies if this is not what you intended.

There often is a need for balanc... (read more)

Thanks for sharing this - looks like good work.

My commendations on another detailed and thoughtful review. A few reactions (my views, not GCRI's):

The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves.

Actually, a lot of scientists & engineers in nuclear power are not happy about the strict regulations on nuclear power. Note, I've been exposed to this because my father worked as an engineer in the nuclear power industry, and I've had other interact... (read more)

Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.

All good to know, thanks.

I'll briefly note that I am currently working on a more extended discussion of policy outreach suitable for posting online, possibly on this site, oriented toward improving the understanding of people in the EA-LTF-GCR community. It's not certain I'll have the chance to complete it given my other responsibilities, but hopefully I will.

Also if it would help I can provide suggestions of people at other organizations who can give perspectives on various aspects of GCRI's work. We could follow up privately about that.

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read and I predict that other people who have thought about global catastrophic risks for a while would feel the same.

That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.

The argument seemed to mostly consists of a few con
... (read more)
8
Raemon
5y
Just wanted to make a quick note that I also felt the "overview" style posts aren't very useful to me (since they mostly encapsulate things I already had thought about). At some point I was researching some aspects of nuclear war, and reading up on a GCRI paper that was relevant, and what I found myself really wishing was that the paper had just drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.
I do view this publishing of the LTF-responses as part of an iterative process.

That makes sense. I might suggest making this clear to other applicants. It was not obvious to me.

Oliver Habryka's comments raise some important issues, concerns, and ideas for future directions. I elaborate on these below. First, I would like to express my appreciation for his writing these comments and making them available for public discussion. Doing this on top of the reviews themselves strikes me as quite a lot of work, but also very valuable for advancing grant-making and activity on the long-term future.

My understanding of Oliver's comments is that while he found GCRI's research to be of a high intellectual quality, he did not ha... (read more)

8
Habryka
5y
I want to make sure that there isn't any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund. I don't currently know whether the question of the target audience would have been super valuable for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would have actually been more important to address than my own.
8
Habryka
5y
(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to)

Of those, I had read "Long-term trajectories of human civilization" and "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very-positive recommendation).

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same. I had a sense that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum).

I also generally found the arguments in them not particularly compelling (in particular I found the arguments in "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" relatively weak, and thought that it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).

I highlighted "A model for the probability of nuclear war" not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don't think that applies to any of the three papers you linked above. I don't currently have a great operationalization of what I mean by
8
Habryka
5y
Thanks for posting the response! Some short clarifications:

My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It only so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn't update too much on my specific feedback. Though my perspective might still be useful for understanding the experience of people closer to my level of expertise, of which there are many, and I do obviously think there is important truth to it (and obviously it is a way to help me build better models of the policy space, which I do think is valuable).

I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is some shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.

Thanks for this conversation. Here are a few comments.

Regarding the Ukraine crisis and the current NATO-Russia situation, I think Max Fisher at Vox is right to raise the issue as he has, with an excellent mix of insider perspectives. There should be more effort like this, in particular to understand Russia's viewpoint. For more on this topic I recommend recent work by Rajan Menon [http://nationalinterest.org/feature/newsflash-america-ukraine-cannot-afford-war-russia-13137], [http://nationalinterest.org/feature/avoiding-new-cuban-missile-crisis-ukraine-1294... (read more)

I see the logic here, but I would hesitate to treat it as universally applicable. Under some circumstances, more centralized structures can outperform. For example, if China or Wal-Mart decides to reduce greenhouse gas emissions, then you can get a lot more than if the US or the corner store decides to, because the latter are more decentralized. That's for avoiding catastrophes. For surviving them, sometimes you can get similar effects. However, local self-sufficiency can be really important. We argued this in http://sethbaum.com/ac/2013_AdaptationRecovery.ht... (read more)

OK, I'm wrapping up for the evening. Thank you all for these great questions and discussion. And thanks again to Ryan Carey for organizing.

I'll check back in tomorrow morning and try to answer any new questions that show up.

2
RyanCarey
9y
Thanks very much for giving some of your time to discuss this important topic with all of us! It's great to build a stronger connection between effective altruists and GCRI and to get a better idea of how you're thinking about analysing and predicting risks. Good luck with GCRI and I look forward to hearing how GCRI comes along with its new, research-focussed direction.
1
Randomized, Controlled
9y
Thanks again for your time, comments and being a nucleation point for conversation!

For what it's worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad.

I mainly eat veg foods too. It reduces environmental problems, which helps on gcr/xrisk. And i... (read more)

I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.

Very interesting!

It looks like a good part of the conversation is starting to revolve around influencing policy. I think there's some big macro social/cultural forces that have been pushing people to be apolitical for a while now. The most interesting reform effort I've heard about lately is Lawrence Lessig's anti-PAC in the US. How can we effectively level our political games up?

I agree there are macro factors pushing people away from policy. However, that can actually increase the effectiveness of policy engagement: less competition.

A great way to level up in politics... (read more)

Total mixed bag of questions, feel free to answer any/all. Apologies if you've already written on the subject elsewhere; feel free to just link if so.

No worries.

What is your current marginal project(s)? How much will they cost, and what's the expected output (if they get funded)

We're currently fundraising in particular for integrated assessment, http://gcrinstitute.org/integrated-assessment. Most institutional funders have programs on only one risk at a time. We're patching integrated assessment work from other projects, but hope to get more dedicat... (read more)

One of the major obstacles to combating Global Warming at the governmental level in America is the large financial investment that the fossil fuel industry makes to politicians in return for tens of billions of dollars in government assistance every year (widely varied numbers depending on how one calculates the incentives and tax breaks and money for research and so on). There seems to me to be only one way to change the current corrupt money for control of politicians process, and that is to demand that all political donations be made anonymously, given

... (read more)

Oops, I think I answered this question up above. I think this is the link: http://effective-altruism.com/ea/fv/i_am_seth_baum_ama/2v9

What funding will GCRI require over the coming year to maintain these activities?

GCRI has a small base of ongoing funding that keeps the doors open, so to speak, except that we don't have any actual doors. I will say, not having an office space really lowers costs!

The important thing is that GCRI is in an excellent place to convert additional funding into additional productivity, mainly by freeing up additional person-hours of work.

Then I guess you don't think it's plausible that we can't expect to make many permanent gains. Why?

I'll have to look at that link later, but briefly, I do think it can be possible to make some permanent gains, but there seem to be significantly more opportunities to avoid permanent losses. That said, I do not wish to dismiss the possibility of permanent gains, and am very much willing to consider them as of potential comparable significance.

Here's one question: which risks are you most concerned about?

I shy away from ranking risks, for several reasons:

  • The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they're not really distinct risks.

  • Ultimately what's important to rank is not the risks thems

... (read more)

What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.

We have an active synbio project modeling the risk and characterizing risk reduction opportunities, sponsored by the US Dept of Homeland Security: http://gcrinstitute.org/dhs-emerging-technologies-project.

I agree that synbio is an under-invested-in area across the gcr community. Ditto for other bio risks. GCRI is working to correct that, as is CSER.

Also, with regard to the research project on altruism, my shoo

... (read more)
0
Randomized, Controlled
9y
For what it's worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad. My interest in x-risk comes from wanting to work on big/serious problems. I can't think of a bigger one than x-risk.
2
SethBaum
9y
I shy away from ranking risks, for several reasons:
* The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they're not really distinct risks.
* Ultimately what's important to rank is not the risks themselves, but the actions we can take to reduce them. We may sometimes have better opportunities to reduce smaller risks. For example, maybe some astronomers should work on asteroid risks even though this is a relatively low-probability risk.

Also, the answer to this question varies by time period. For, say, the next 12 months, nuclear war and pandemics are probably the biggest risks. For the next 50-100 years, we need to worry about these plus a mix of environmental and technological risks.

There's the classic Margaret Mead quote, "Never underestimate the power of a small group of committed people to change the world. In fact, it is the only thing that ever has." There's a lot of truth to this, and I think the EA community is well on its way to being another case in point. That is, as long as you don't slack off! :)

That said, I keep an eye on a mix of politicians, other government officials, researchers, activists, celebrities, journalists, philanthropists, entrepreneurs, and probably a few others. They all play significant roles and it's good to be able to work with all of them.

thank you for your time and work!

You're welcome!

If I wanted to work at GCRI or a similar think-tank/institution, what skills would make me most valuable?

Well, I regret that GCRI doesn't have the funds to be hiring right now. Also, I can't speak for other think tanks. GCRI runs a fairly unique operation. But I can say a bit on what we look for in people we work with.

Some important things to have for GCRI include: (1) a general understanding of gcr/xrisk issues, for example by reading research from GCRI, FHI, and our colleagues; (2) deep familiarity w... (read more)

1
Randomized, Controlled
9y
I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.

Thanks Ryan! And thanks again for organizing.

My last question for now: what do you think is the path from risk-analysis to policy? Some aspiring effective altruists have taken up a range of relevant jobs, for instance working for politicians, in think tanks, in defence and in international governance. Can they play a role in promoting risk-reducing policies? And more generally, how can researchers get their insights implemented?

This is a really, really important question. In a sense, it all comes down to this. Otherwise there's not much point in doing ... (read more)

Hi Ales,

Are you coordinating with FLI and FHI to have some division of labor?

We are in regular contact with both FLI & FHI. FHI is more philosophical than GCRI. The most basic division of labor there is for FHI to develop fundamental theory and GCRI to make the ideas more applied. But this is a bit of a simplification, and the coordination there is informal. With FLI, I can't yet point to any conceptual division of labor, but we're certainly in touch. Actually I was just spending time with Max Tegmark over the weekend in NYC, and we had some nice con... (read more)

what kind of researchers do you think are needed most at GCRI?

Right now, I would say researchers who can do detailed risk analysis similar to what we did in our inadvertent nuclear war paper: http://sethbaum.com/ac/2013_NuclearWar.html. The ability to work across multiple risks is extremely helpful. Our big missing piece has been on biosecurity risks. However, we have a new affiliate Gary Ackerman who is helping out with that. Also I'm participating in a biosecurity fellowship program that will also help. But we could still use more on biosecurity. That... (read more)

Good questions!

Of all the arguments you've heard for de-prioritizing GCR reduction, which do you find most convincing?

The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that can offer permanent expected gains that are comparably large as the permanent expected losses from GCRs. Nick Beckstead puts this well in his dissertation discussion of far future trajectories, or the concept of "existential hope" from Owen Cotton-Barratt & Toby Ord. But in practical terms the bulk of... (read more)

2
Alexander
9y
Then I guess you don't think it's plausible that we can't expect to make many permanent gains. Why?
1
RyanCarey
9y
What funding will GCRI require over the coming year to maintain these activities?