All of Halffull's Comments + Replies

This just seems like you're taking on one specific worldview and holding every other worldview up to it to see how it compares.

Of course, this is an inherent problem with worldview diversification: how to define what counts as a worldview and how to choose between them.

But still, intuitively, if your meta-worldview screens out the vast majority of real-life views, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced with other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.

Why do you think that the worldviews need strong philosophical justification? It seems like this may leave out the vast majority of worldviews.

4
Richard Y Chappell
1mo
It's always better for a view to be justified than to be unjustified? (Makes it more likely to be true, more likely to be what you would accept on further / idealized reflection, etc.) The vast majority of worldviews do not warrant our assent. Worldview diversification is a way of dealing with the sense that there is more than one that is plausibly well-justified, and warrants our taking it "into account" in our prioritization decisions. But there should not be any temptation to extend this to every possible worldview. (At the limit: some are outright bad or evil. More moderately: others simply have very little going for them, and would not be worth the opportunity costs.)

I think "thought leader" sometimes means "has thoughts at the leading edge" and sometimes means "leads the thoughts of the herd on a subject", and that there is sometimes a deliberate ambiguity between the two.

one values humans 10-100x as much

 

This seems quite low, at least from a perspective of revealed preferences. If one indeed rejects unitarianism, I suspect that the actual willingness to pay is something like 1,000x - 10,000x to prevent the death of a human vs. an animal.

Revealed preference is a good way to get a handle on what people value, but its normative foundation is strongest when the tradeoff is internal to people. E.g. when we value lives vs. income, we would want to use people's revealed preference for how they trade those off, because those people are the most affected by our decisions and we want to incorporate their preferences. That normative foundation doesn't really apply to animal welfare, where the trade-offs are between people and animals. You may as well use animals' revealed preferences for saving humans (i.e. not at all) and conclude that humans have no worth; it would be nonsensical.

Also, if we defer to people's revealed preferences, we should dramatically discount the lives and welfare of foreigners. I'd guess that Open Philanthropy, being American-funded, would need to reallocate much or most of its global health and development grantmaking to American-focused work, or to global catastrophic risks.

EDIT: For those interested, there's some literature on valuing foreign lives, e.g. https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q="valuing+foreign+lives"+OR+"foreign+life+valuation"

But isn't the relevant harm here animal suffering rather than animal death?  It would seem pretty awful to prefer that an animal suffer torturous agony rather than a human suffer a mild (1000x less bad) papercut.

I think that's basically right, but also rejecting unitarianism and discounting other animals through this seems to me like saying the interests of some humans matter less in themselves (ignoring instrumental reasons) just because of their race, gender or intelligence, which is very objectionable.

People discount other animals because they're speciesist in this way, although also for instrumental reasons.

The executive summary is entirely hallucinated.

"To what extent is money important to you?" and found that was much more important than money itself: money has a much bigger effect on happiness if you *think* money is important (a


Or perhaps you think money is important if it has a bigger effect on your happiness (based on e.g. environmental factors and genetic predisposition)? In other words, maybe these people are making correct predictions about how they work, rather than creating self-fulfilling prophecies? It is at least worth considering that the causality goes this way.

AND it found people wh

... (read more)

I think it's also easy to make a case that longtermist efforts have increased the x-risk from artificial intelligence, with the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) coming from longtermist places.

It's possible that EA has shaved a couple of counterfactual years off of the time to catastrophic AGI, compared to a world where the community wasn't working on it.

Can you say more about which longtermist efforts you're referring to?

I think a case can be made, but I don't think it's an easy (or clear) case.

My current impression is that Yudkowsky & Bostrom's writings about AGI inspired the creation of OpenAI/DeepMind. And I believe FTX invested a lot in Anthropic and OP invested a little bit (in relative terms) into OpenAI. Since then, there have been capabilities advances and safety advances made by EAs, and I don't think it's particularly clear which outweighs.

It seems unclear to me what the sign of these effect... (read more)

I'd also add Vitalik Buterin to the list.

If you're going to have a meeting this short, isn't it better to e.g. send a message or email about this?  Having very short conversations like this means you've wasted a large slot of time on your EAG calendar that you could have used for different types of conversations that you can only do in person at EAG.

3
Yonatan Cale
2y
I agree! I try sending enough context in my initial message for the other person to give a response, or at least to decide if it's relevant or refer me to someone more relevant. Lizka scheduled with me anyway, probably because we spoke online and hardly ever met

It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.

4
Linch
2y
I agree it provides stronger challenges. I think I disagree with the other claims as presented, but the sentence is not detailed enough for me to really know if I actually disagree.

I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside view outcomes in order of likelihood.

  1. The project fails to gain any traction or have any meaningful impact on the world.
  2. The project has an impact on the world, but despite intentions the impact is negative, neutral or too small to matter.
  3. The project has enough of a positive outcome to matter.

In general, I'd say that outside view this is the most likely order of outcomes of any ambitious/world-saving project. And I was saying it specifically to elic... (read more)

5
Linch
2y
This will be a good argument if Musk built and populated Antarctica bunkers before space. 

Sure, but "already working on an EA project" doesn't mean you have an employer.

If it's an EA project and you need support, I'd apply to EA Funds, and tell FTX that you're interested and say you're still seeking funding. Even if they have the money, they also aren't throwing cash at anything that moves - and FTX isn't the best placed group to evaluate EA projects. And I'd note that EA funds also aren't particularly funding constrained - but if they were, it would make more sense for FTX to give them money instead of trying to evaluate projects and fund people directly.

 

Assuming you have an employer

I think the “Already working on EA jobs / projects that can be done from the Bahamas” is the answer here. To my read, this isn’t trying to fully fund someone’s work, but rather to incentivize someone to do the work from the Bahamas. If you were self-funding a project from savings, this doesn’t suddenly provide you a full salary, but it still probably looks very good as it could potentially eliminate your cash burn.

This is great! Curious what (if anything) you're doing to measure counterfactual impact.  Any sort of randomized trial involving e.g. following up with clients you didn't have the time to take on and measuring their change in productive hours compared to your clients?

1
lynettebye
3y
Unfortunately, I don't have an easy control group to do such a trial. I do my best to take on every client who I think is a great fit for me to help, so there isn't a non-coached group who is otherwise comparable. Additionally, as a for-profit business, there's an understandable limit to how much my clients are willing to humor my desire for unending data. 

Yeah, I'd expect it to be a global catastrophic risk rather than existential risk.

4
Denkenberger
4y
Some of the agricultural catastrophes addressed by the solutions that the Alliance to Feed the Earth in Disasters (ALLFED) is working on include super crop disease, a bacterium that outcompetes beneficial bacteria, and a super crop pest (animal), all of which could be related to genetic modification.

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

3
Thomas Kwa
4y
It's not on the 80k list of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough-- it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.

Yeah, I mostly focused on the Q1 question so didn't have time to do a proper growth analysis across 2021

Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.

There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?

A lot of the value of potential growth vectors of a business comes in the tails. For this particular forecast it doesn't real... (read more)
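
Here's a minimal sketch (my own illustration, not from the thread) of the point about tails: for a heavy-tailed forecast such as a lognormal revenue distribution, the expected value can sit several times above the median, so a median-only snapshot hides most of the value. The $25,000 median matches the figure quoted above; the spread (sigma) is a hypothetical value chosen for illustration.

```python
# A minimal sketch, assuming a lognormal (heavy-tailed) revenue forecast.
# The $25,000 median matches the snapshot mentioned above; the spread
# (sigma) is a made-up illustrative value.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=np.log(25_000), sigma=1.5, size=1_000_000)

print(f"median:         ${np.median(samples):,.0f}")  # ~ $25,000
print(f"expected value: ${samples.mean():,.0f}")       # several times larger, driven by the tail
```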

1
amandango
4y
I just eyeballed the worst to best case for each revenue source (and based on general intuitions about e.g. how hard it is to start a podcast). Yeah, this makes a lot of sense – we've thought about showing expected value in the past so this is a nice +1 to that.

Thanks, this was great!

The estimates seem fair. Honestly, much better than I would expect given the limited info you had and the assumptions you made (the biggest one that's off is that I don't have any plans to only market to EAs).

Since I know our market is much larger, I use a different forecasting methodology internally which looks at potential marketing channels and growth rates.

I didn't really understand how you were working growth rate into your calculations in the spreadsheet - maybe just eyeballing what made sense based on the ... (read more)

1
amandango
4y
Yeah, I mostly focused on the Q1 question so didn't have time to do a proper growth analysis across 2021 – I just did 10% growth each quarter and summed that for 2021, and it looked reasonable given the EA TAM. This was a bit of a 'number out of the air,' and in reality I wouldn't expect it to be the same growth rate across all quarters. Definitely makes sense that you're not just focusing on the EA market – the market for general productivity services in the US is quite large! I looked briefly at the subscriptions for top productivity podcasts on Castbox (e.g. Getting Things Done, 5am miracle), which suggests lots of room for growth (although I imagine podcast success is fairly power law distributed). There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?

Hey, I run a business teaching people how to overcome procrastination (procrastinationplaybook.net is our not yet fully fleshed out web presence).

I ran a pilot program that made roughly $8,000 in revenue by charging 10 people for a premium interactive course. Most of these users came from a couple of webinars that my friends hosted; a couple came from finding my website through the CFAR mailing list and webinars I hosted for my Twitter friends.

The course is ending soon, and I'll spend a couple of months working on marketing and updating the co... (read more)

1
amandango
4y
Here’s my Q1 2021 prediction, with more detailed notes in a spreadsheet here. I started out estimating the size of the market, to get reference points. Based on very rough estimates of CEA subscriptions, # of people Effective Altruism Coaching has worked with, and # of people who have gone through a CFAR workshop, I estimated the number of EAs who are interested enough in productivity to pay for a service to be ~8,000. The low number of people who have done Effective Altruism Coaching (I estimated 100, but this is an important assumption that could be wrong since I don’t think Lynette has published this number anywhere) suggests a range for your course (which is more expensive) of ~10 to 45 people in Q1. Some other estimates, which are in the spreadsheet linked above, gave me a range of $8,000 to $42,000. I didn’t have enough time to properly look into 2021 as a whole, so I just did a flat 10% growth rate across all the numbers and got this prediction. Interestingly, I notice a pressure to err on the side of optimism when publicly evaluating people’s companies/initiatives. Your detailed notes were very helpful in this. I noticed that I wanted more information on:

  • The feedback you got from the first course. How many of them would do it again or recommend it to a friend?
  • More detail on your podcast plans. I didn’t fully understand the $10 lessons – I assumed it was optional $10 lessons attached to each podcast, but this may be wrong.
  • How much you’re focusing on EAs. The total market for productivity services is a lot bigger (here’s an estimate of $1B market value for life coaching, which encompasses productivity coaching).

Do these estimates align with what you're currently thinking? Are there any key assumptions I made that you disagree with? (Here are blank distributions for Q1 and 2021 if you want to share what you're currently projecting.)

I recommend Made to Stick by Chip and Dan Heath.

Going through several startup weekends showed me what works and what doesn't when trying to de-risk new projects.

This is great! Was trying to think through some of my own projects with this framework, and I realized I think there's half of the equation missing, related to the memetic qualities of the tool.

1. How "symmetric" is the thing I'm trying to spread? How easy is it to use for a benevolent purpose compared to a malevolent one?

2. How memetic is the idea? How likely is it to spread from a benevolent actor to a malevolent one?

3. How contained is the group with which I'm sharing? Outside of the memetic factors of the idea itself, is the person or group I'm sharing it with likely to spread it, or keep it contained?

2
MichaelA
4y
(My opinions, not necessarily Convergence's, as with most of my comments) Glad to hear you liked the post :) One thing your comment makes me think of is that we actually also wrote a post focused on "memetic downside risks", which you might find interesting. To more directly address your points: I'd say that the BIP framework outlined in this post is able to capture a very wide range of things, but doesn't highlight them all explicitly, and is not the only framework available for use. For many decisions, it will be more useful to use another framework/heuristic instead or in addition, even if BIP could capture the relevant considerations. As an example, here's a sketch of how I think BIP could capture your points:

1. If the idea you're spreading is easier to use for a benevolent purpose than a malevolent one, this likely means it increases the "intelligence" or "power" of benevolent actors more than of malevolent ones (which would be a good thing). This is because this post defines intelligence in relation to what would "help an actor make and execute plans that are aligned with the actor’s moral beliefs or values", and power in relation to what would "help an actor execute its plans". Thus, the more useful an intervention is for an actor, the more it increases their intelligence and/or power.

2. If an idea increases the intelligence or power of whoever receives it, it's best to target it to relatively benevolent actors. If the idea is likely to spread in hard-to-control ways, then it's harder to target it, and it's more likely you'll also increase the intelligence or power of malevolent actors, which is risky/negative. This could explain why a more "memetically fit" idea could be more risky to spread.

3. Similar to point 2. But with the addition of the observation that, if it'd be harmful to spread the idea, then actors who are more likely to spread the idea must presumably be less benevolent (if they don't care about the right consequences) or less intellig

This is great!

I'd love to be able to provide an alternative model that can work as well, based on Saras Sarasvathy's work on Effectuation.

In the effectuation model (which came from looking at the process of expert entrepreneurs), you don't start with a project idea up front. Instead, you start with your resources, and the project evolves based on demand at any given time. I think this model is especially good for independent projects, where much of the goal is to get credibility, resources, and experience.

Instead of starting with the goal,... (read more)

I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.

However, it's important to note that I also have developed a motivation system that allows me to not find this discouraging! Once I started thinking of opportunities for doing good in expected value terms, and concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.

Some relevant articles:

https://forum.effectivea... (read more)

1
Linda Linsefors
4y
I'm ok with hit based impact. I just disagree about events. I think you are correct about this for some work, but not for others. Things like operations and personal assistants are multipliers, which can consistently increase the productivity of those who are served. Events that are focused on sharing information and networking fall in this category. People in a small field will get to know each other and each other's work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement. But some other events feel much more hits based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career) or events focused on ideation. But there are other types of events that are more hit based, and I notice that I'm less interested in doing them. This is interesting. Because these events also differ in other ways, there are alternative explanations. But it seems worth looking at. Thanks for providing the links, I should read them. (Of course everything relating to X-risk is all or nothing in terms of impact, but we can't measure and reward that until it does not matter anyway. Therefore in terms of AI Safety I would measure success in terms of research output, which can be shifted incrementally.)

But if it took on average 50 000 events for one such a key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.

But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is to hopefully be one of the 1/50,000 that has ridiculously outsized impact - it's high expected value even if, comparatively, all the other events have low impact. And again, that's comparatively. Compared to say, mo... (read more)

Nope, 1/50,000 seems like a realistic ratio for very high impact events to normal impact events.

Would you say that events are low impact?

I think most events will be low impact compared to the highest impact events. Let's say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, and another ends up introducing a key pair of researchers who go on to do great things together.

Now, if I want to pay those two highest impact events their money relative to all the other events, I have a few options:

1. Pay all of the eve... (read more)

1
Linda Linsefors
4y
Wait, what? 100,000 AI Safety events? Like 100,000 individual events? There is a typo here, right?

Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all?

I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?

I think the high impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society so that 1/1,000,000 of the people who start such projects can have high impact?

7
Linda Linsefors
4y
I'm unsure what size you have in mind when you say small. I don't think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or if I do a lot that adds up to something that matters). I also don't think small impact projects should be encouraged. If we respect people's time and effort, we should encourage them to drop small impact projects and move on to bigger and better things. If you think that the projects with highest expected impact also typically have low success rates, then standard impact purchase is probably not a good idea. Under this hypothesis, what you want to do is to reward people for expected success rather than actual success. I talk about success rather than impact, because for most projects, you'll never know the actual impact. By "success" I mean your best estimate of the project's impact, from what you can tell after the project is over. (I really meant success not impact from the start, probably should have clarified that somehow?) I'd say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events the randomness will even out. Would you say that events are low impact? Would you say events are worth funding? Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?

For an impact purchase the amount of money is decided based on how good the impact of the project was

I'm curious about how exactly this would work. My prior is that impact is clustered at the tails.

This means that there will frequently be small impact projects, and very occasionally large impact projects - my guess is that if you want to be able to incentivize the frequent small impact projects at all, you won't be able to afford the large impact projects, because they are many magnitudes of impact larger. You could just purchase part of the... (read more)

1
Linda Linsefors
4y
Let's assume for now that impact is clustered at the tails. (I don't have a strong prior, but this at least doesn't seem implausible to me.) Then how would you like to spend funding? Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all? Is it to support the people involved to keep working, and eventually learn and/or get lucky enough to do something really important?

Perhaps Dereke Bruce had the right of it here:

"In order to keep a true perspective of one's importance, everyone should have a dog that will worship him and a cat that will ignore him."

28
Answer by Halffull
Apr 01, 2020

I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people's overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.

Therefore, I propose a new top EA cause for 2020: Distributing Puppies

  • Puppies decrease individual loneliness, allowing a more global worldview.
  • Puppies model unconditional love and altruism, creating a flowthrough to their owners.
  • Puppies with good owners
... (read more)

I discussed this with my wife, who thinks that the broad idea is reasonable, but that kittens are a better choice than puppies:

  • As puppies are not role models, their unconditional love is less relevant than it first appears.
  • Most EA causes involve helping agents who aren't directly in contact with you.
  • If people learn their altruism from helping puppies, they will learn to expect gratitude; worse still, they will learn to expect gratitude even for relatively minor help!
  • Cats care about you only to the extent that they can receive food and/or pets. This is a much better model.
  • They also have toebeans.

Something else in the vein of "things EAs and rationalists should be paying attention to in regards to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that is a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a "buy any book you want" rule that a company has - so the company makes it so that no one can get free books anymore.

This same pattern has happened before in the US after 9/11 - we created a whole bunch of security theater that c... (read more)

Curious about what you think is weird in the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision making like improving calibration.

No, I actually think the post is ignoring x-risk as a cause area to focus on now. It makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you're using absurdity heuristics). I think I was more giving my argument for how this post could be compatible with Bostrom.

the post focuses on human welfare,

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

I'm also very interested in how increased economic growth impacts existential risk.

At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

2
Matthew_Barnett
4y
If this is true, is there a post that expands on this argument, or is it something left implicit? I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are very related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to find which areas to speed up the most.

Let's say you believe two things:

1. Growth will have flowthrough effects on existential risk.

2. You have a comparative advantage effecting growth over x-risk.

You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.

1
Matthew_Barnett
4y
This makes sense as an assumption, but the post itself didn't argue for this thesis at all. If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making. Can you go more into detail? I'm also very interested in how increased economic growth impacts existential risk. This is a very important question because it could determine the influence from accelerating economic-growth inducing technologies such as AI and anti-aging.

I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.

1
EdoArad
4y
No, the analysis does not seem to contain what I was going for.  Curious about what you think is weird in the framing?

It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.

This argument has the same problem as recommending people don't wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine," you lose credibility.

You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.

This was the source of my "two curves" narrative, and I assume would be the approach that others would take if that was the reason for their reticence to discuss.

1
EdoArad
4y
This is not quite what I was going for, even though it is relevant. This problem profile focuses on existing institutions and on methods for collective decision making. I was thinking more in the spirit of market design, where the goal is to generate new institutions with new structures and rules so that people are selfishly incentivised to act in a way which maximizes welfare (or something else).

Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like Coronavirus does.

Was thinking about finding a simple, good enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would save from stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.

On the other hand, I think it's quite plausible that this particular problem will ... (read more)

5
Halffull
4y
It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.
4
Greg_Colbourn
4y
Maybe also that the talk of preventing a depression is an information hazard when we are at the stage of the pandemic where all-out lockdown is the biggest priority for most of the richest countries. In a few weeks, when the epidemics in the US and Western Europe are under control and lockdown can be eased with massive testing, tracing and isolating of cases, then it would make more sense to freely talk about boosting the economy again (in the meantime, we should be calling for governments to take up the slack with stimulus packages, which they seem to be doing already).

I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than 10% (~63.64% chance of value drift). There are many ... (read more)
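
As a rough illustration of the front-loading point (my own sketch, not from the thread), you can compare expected lifetime donations under a constant annual risk of giving up. It assumes that risk is fixed regardless of how much you donate, which the reply below also flags as the key assumption, and all the dollar figures and rates are hypothetical.

```python
# A minimal sketch, assuming a fixed annual probability of "giving up" that
# does not depend on the donation rate (a strong assumption, as noted above).
def expected_total_donations(annual_amount: float, years: int, p_give_up: float) -> float:
    """Expected total donated if each year there is a p_give_up chance of
    permanently stopping before the next year's donation."""
    total, p_still_donating = 0.0, 1.0
    for _ in range(years):
        total += p_still_donating * annual_amount
        p_still_donating *= 1 - p_give_up
    return total

# Hypothetical numbers: a $50k income, 10% annual drift risk.
steady = expected_total_donations(5_000, 40, 0.10)        # 10% of income for 40 years
front_loaded = expected_total_donations(25_000, 8, 0.10)  # 50% of income for 8 years
print(f"steady 10%:       ${steady:,.0f}")
print(f"front-loaded 50%: ${front_loaded:,.0f}")
```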

such as consistency and justification effects

And selection effects!

2
HaydenW
4y
Definitely! I simplified it a lot in the post. Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up isn't very sensitive to the amount you donate, it's high, and your income isn't going to increase a whole lot over your lifetime. I think those first two things might be true of a lot of people. And so will the third thing, effectively, if your income doesn't increase by more than 2-3x. My guess is that the main reason for that is that more devoted people tend to pledge higher amounts. I think if you took some of those 10%ers and somehow made them choose to switch to 50%, they'd be far more likely than before to give up. But yeah, it's not entirely clear that P(giving up) increases with amount donated, or that either causally affects the other. I'm just going by intuition on that.

I've had a sense for a while that EA is too risk averse, and should be focused more on a broader class of projects most of which it expects to fail. As part of that, I've been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), to both update my own views as well as make sure I address any important arguments on either side.

I would appreciate it if people could link me to other sources that are important. I'm especially interested in people making arguments fo... (read more)

3
Aaron Gertler
4y
Kelsey Piper's "On 'Fringe' Ideas" makes a pro-risk argument in a certain sense (that we should be kind and tolerant to people whose ideas seem strange and wasteful). I'm not sure if this is written up anywhere, but one simple argument you can make is that many current EA projects were risky when they were started. GiveWell featured two co-founders with no formal experience in global health evaluating global health charities, and nearly collapsed in scandal within its first year. 80,000 Hours took on an impossibly broad task with a small staff (I don't know whether any had formal career advisement experience). And yet, despite various setbacks, both projects wound up prospering, without doing permanent damage to the EA brand (maybe a few scrapes in the case of 80K x Earning to Give, but that seems more about where the media's attention was directed than what 80K really believed).
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.

I'm curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the "developed" category could quite possibly ... (read more)

I'm quite excited to see an impassioned case for more of a focus on systemic change in EA.

I used to be quite excited about interventions targeting growth or innovation, but I've recently been more worried about accelerating technological risks. Specific things that I expect accelerated growth to affect negatively include:

  • Climate Change
  • AGI Risk
  • Nuclear and Biological Weapons Research
  • Cheaper weapons in general

Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.

24
[anonymous]
4y

I do think this is a concern that we need to consider carefully. On the standard FHI/Open Phil view of x-risk, AI and bio account for most of the x-risk we face this century. I find it difficult to see how increasing economic development in LMICs could affect AI risk. China's massive growth is something of a special case on the AI risk front, I think.

I think growth probably reduces biorisk by increasing the capacity of health systems in poor countries. It seems that leading edge bioscience research is most likely to happen in advanced economies.

On cli... (read more)

9
Michael_Wiebe
4y
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc. In contrast, 'frontier' growth in developed countries is based on technological innovation, and is potentially more dangerous.

This work is excellent and highly important.

I would love to see this same setup experimented with for Grant giving.

Found elsewhere on the thread, a list of weird beliefs that Buck holds: http://shlegeris.com/2018/10/23/weirdest

I'd be curious about your own view on unquantifiable interventions, rather than just the Steelman of this particular view.

1
Davidmanheim
4y
As I said in the epistemic status, I'm far less certain than I once was, and on the whole I'm now skeptical. As I said in the post and earlier comments, I still think there are places where unquantifiable interventions are very valuable; I just think that unless it's obvious that they will be (see: Diamond Law of Evaluation), I'd claim that quantifiably effective interventions are in expectation better.