Why do you think that the worldviews need strong philosophical justification? It seems like this may leave out the vast majority of worldviews.
I think "thought leader" sometimes means "has thoughts at the leading edge" and sometimes means "leads the thoughts of the herd on a subject", and that there is sometimes a deliberate ambiguity between the two.
one values humans 10-100x as much
This seems quite low, at least from the perspective of revealed preferences. If one indeed rejects unitarianism, I suspect that the actual willingness to pay is something like 1,000x - 10,000x to prevent the death of a human vs. an animal.
Revealed preference is a good way to get a handle on what people value, but its normative foundation is strongest when the tradeoff is internal to people. E.g. when we value lives vs. income, we would want to use people's revealed preferences for how they trade those off, because those people are the most affected by our decisions and we want to incorporate their preferences. That normative foundation doesn't really apply to animal welfare, where the trade-offs are between people and animals. You may as well use animals' revealed preferences for saving humans (i.e. not at all) and conclude that humans have no worth; it would be nonsensical.
Also, if we defer to people's revealed preferences, we should dramatically discount the lives and welfare of foreigners. I'd guess that Open Philanthropy, being American-funded, would need to reallocate much or most of its global health and development grantmaking to American-focused work, or to global catastrophic risks.
EDIT: For those interested, there's some literature on valuing foreign lives, e.g. https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q="valuing+foreign+lives"+OR+"foreign+life+valuation"
But isn't the relevant harm here animal suffering rather than animal death? It would seem pretty awful to prefer that an animal suffer torturous agony rather than a human suffer a mild (1000x less bad) papercut.
I think that's basically right, but rejecting unitarianism and discounting other animals on this basis seems to me like saying the interests of some humans matter less in themselves (ignoring instrumental reasons) just because of their race, gender, or intelligence, which is very objectionable.
People discount other animals because they're speciesist in this way, although also for instrumental reasons.
"To what extent is money important to you?" and found that was much more important than money itself: money has a much bigger effect on happiness if you *think* money is important (a
Or perhaps, you think money is important if it has a bigger effect on your happiness (based on e.g. environmental factors and genetic predisposition)? In other words, maybe these people are making correct predictions about how they work, rather than creating self-fulfilling prophecies? It is at least worth considering that the causality goes this way.
...AND it found people wh
I think it's also easy to make a case that longtermist efforts have increased x-risk from artificial intelligence, with the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) coming from longtermist places.
It's possible that EA has shaved a couple counterfactual years off of time to catastrophic AGI, compared to a world where the community wasn't working on it.
Can you say more about which longtermist efforts you're referring to?
I think a case can be made, but I don't think it's an easy (or clear) case.
My current impression is that Yudkowsky & Bostrom's writings about AGI inspired the creation of OpenAI/DeepMind. And I believe FTX invested a lot in Anthropic and OP invested a little bit (in relative terms) into OpenAI. Since then, there have been capabilities advances and safety advances made by EAs, and I don't think it's particularly clear which outweighs.
It seems unclear to me what the sign of these effect...
If you're going to have a meeting this short, isn't it better to e.g. send a message or email about this? Having very short conversations like this means you've wasted a large slot of time on your EAG calendar that you could have used for different types of conversations that you can only do in person at EAG.
It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.
I recently gave a talk on one of my own ambitious projects at my organization, and gave the following outside view outcomes in order of likelihood.
In general, I'd say that, on an outside view, this is the most likely order of outcomes for any ambitious/world-saving project. And I was saying it specifically to elic...
If it's an EA project and you need support, I'd apply to EA Funds, and tell FTX that you're interested and say you're still seeking funding. Even if they have the money, they also aren't throwing cash at anything that moves - and FTX isn't the best placed group to evaluate EA projects. And I'd note that EA Funds also isn't particularly funding-constrained - but if it were, it would make more sense for FTX to give it money instead of trying to evaluate projects and fund people directly.
I think the "Already working on EA jobs / projects that can be done from the Bahamas" is the answer here. To my read, this isn't trying to fully fund someone's work, but rather to incentivize someone to do the work from the Bahamas. If you were self-funding a project from savings, this doesn't suddenly provide you a full salary, but it still probably looks very good, as it could potentially eliminate your cash burn.
This is great! Curious what (if anything) you're doing to measure counterfactual impact. Any sort of randomized trial involving e.g. following up with clients you didn't have the time to take on and measuring their change in productive hours compared to your clients?
Yeah, I mostly focused on the Q1 question, so I didn't have time to do a proper growth analysis across 2021.
Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.
There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?
A lot of the value of potential growth vectors of a business come in the tails. For this particular forecast it doesn't real...
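To make the tails point concrete, here's a minimal sketch, assuming (purely for illustration) a lognormal outcome distribution with the $25,000 median from the snapshot above and a made-up spread:

```python
# Illustrative only: a heavy-tailed forecast whose mean sits far above its median.
import numpy as np

rng = np.random.default_rng(0)
median = 25_000   # the median from the snapshot
sigma = 1.5       # assumed spread; the real distribution is unknown

samples = rng.lognormal(mean=np.log(median), sigma=sigma, size=1_000_000)

print(f"median: ${np.median(samples):,.0f}")  # ~$25,000
print(f"mean:   ${np.mean(samples):,.0f}")    # ~$77,000 (= median * exp(sigma**2 / 2))

# Most of the expected value lives in the right tail that the median ignores:
top_5pct = np.sort(samples)[-len(samples) // 20:]
print(f"share of total value in the top 5% of outcomes: {top_5pct.sum() / samples.sum():.0%}")
```

If the value really is in the tails, a decision made on the median alone can undervalue the forecast several-fold.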
Thanks, this was great!
The estimates seem fair. Honestly, much better than I would expect given the limited info you had and the assumptions you made (the biggest one that's off is that I don't have any plans to only market to EAs).
Since I know our market is much larger, I use a different forecasting methodology internally which looks at potential marketing channels and growth rates.
I didn't really understand how you were factoring the growth rate into your calculations in the spreadsheet, maybe just eyeballing what made sense based on the ...
Hey, I run a business teaching people how to overcome procrastination (procrastinationplaybook.net is our not yet fully fleshed out web presence).
I ran a pilot program that made roughly $8,000 in revenue by charging 10 people for a premium interactive course. Most of these users came from a couple of webinars that my friends hosted; a couple came from finding my website through the CFAR mailing list and webinars I hosted for my Twitter friends.
The course is ending soon, and I'll spend a couple of months working on marketing and updating the co...
Going through several startup weekends showed me what works and what doesn't when trying to de-risk new projects.
This is great! I was trying to think through some of my own projects with this framework, and I realized there's half of the equation missing, related to the memetic qualities of the tool.
1. How "symmetric" is the thing I'm trying to spread? How easy is it to use for a benevolent purpose compared to a malevolent one?
2. How memetic is the idea? How likely is it to spread from a benevolent actor to a malevolent one?
3. How contained is the group I'm sharing with? Outside of the memetic factors of the idea itself, is the person or group I'm sharing it with likely to spread it, or keep it contained?
Here's Raymond Arnold on this strategy:
https://www.lesswrong.com/posts/LxrpCKQPbdpSsitBy/short-circuiting-demon-threads-working-example
This is great!
I'd love to be able to provide an alternative model that can work as well, based on Saras Sarasvathy's work on Effectuation.
In the effectuation model (which came from studying the process of expert entrepreneurs), you don't start with a project idea up front. Instead, you start with your resources, and the project evolves based on demand at any given time. I think this model is especially good for independent projects, where much of the goal is to get credibility, resources, and experience.
Instead of starting with the goal,...
I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.
However, it's important to note that I also have developed a motivation system that allows me to not find this discouraging! Once I started thinking of opportunities for doing good in expected value terms, and concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.
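A small simulation of that intuition, assuming (again, purely for illustration) a Pareto-shaped impact distribution:

```python
# Every project below is drawn from the same ex-ante distribution, so each has
# identical expected value; ex post, realized impact clusters in a few outliers.
import numpy as np

rng = np.random.default_rng(1)
projects = rng.pareto(a=1.2, size=10_000) + 1  # assumed tail index, arbitrary units

top_1pct = np.sort(projects)[-100:]
print(f"share of realized impact from the top 1% of projects: {top_1pct.sum() / projects.sum():.0%}")
```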
Some relevant articles:
But if it took on average 50,000 events for one such key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.
But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is the hope that one will be among the 1/50,000 that has ridiculously outsized impact; it's high expected value even if, comparatively, all the other events have low impact. And again, that's comparatively. Compared to say, mo...
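To put toy numbers on that (all of them hypothetical):

```python
# 50,000 events, one of which turns out to be a key introduction worth vastly
# more than an ordinary event. The outlier dominates the average.
n_events = 50_000
ordinary_value = 1.0         # assumed value of a typical event, arbitrary units
outlier_value = 1_000_000.0  # assumed value of the one key introduction

ev_per_event = ((n_events - 1) * ordinary_value + outlier_value) / n_events
print(ev_per_event)  # ~21.0: in expectation, each event is worth ~21x an ordinary one
```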
Would you say that events are low impact?
I think most events will be comparatively low impact compared to the highest impact events. Let's say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, another ends up introducing a key pair of researchers that go on to do great things together.
Now, if I want to pay those two highest-impact events an amount proportional to their impact relative to all the other events, I have a few options:
1. Pay all of the eve...
Since there will be a limited amount of money, what is your motivation for giving the low-impact projects anything at all?
I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?
I think the high impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society so that 1/1,000,000 of the people who start such projects can have high impact?
For an impact purchase, the amount of money is decided based on how good the impact of the project was.
I'm curious about how exactly this would work. My prior is that impact is clustered at the tails.
This means that there will frequently be small-impact projects, and very occasionally large-impact projects. My guess is that if you want to be able to incentivize the frequent small-impact projects at all, you won't be able to afford the large-impact projects, because they are many orders of magnitude larger. You could just purchase part of the...
Perhaps Dereke Bruce had the right of it here:
"In order to keep a true perspective of one's importance, everyone should have a dog that will worship him and a cat that will ignore him."
I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people's overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.
Therefore, I propose a new top EA cause for 2020: Distributing Puppies
I discussed this with my wife, who thinks that the broad idea is reasonable, but that kittens are a better choice than puppies:
You might be interested in this same question that was asked last June:
Something else in the vein of "things EAs and rationalists should be paying attention to in regards to Corona."
There's a common failure mode in large human systems where one outlier causes us to create a rule that produces a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" perk - so the company makes it so that no one can get any free books anymore.
This same pattern has happened before in the US: after 9/11, we created a whole bunch of security theater that c...
Curious about what you think is weird in the framing?
The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.
Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision making like improving calibration.
No, I actually think the post is ignoring x-risk as a cause area to focus on now. It makes sense under certain assumptions and heuristics (e.g. if you think near term x-risk is highly unlikely, or you're using absurdity heuristics), I think I was more giving my argument for how this post could be compatible with Bostrom.
the post focuses on human welfare,
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.
I'm also very interested in how increased economic growth impacts existential risk.
At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).
I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."
Let's say you believe two things:
1. Growth will have flowthrough effects on existential risk.
2. You have a comparative advantage working on growth over x-risk.
You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.
I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.
It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.
This argument has the same problem as recommending people don't wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine", you lose credibility.
You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard, otherwise you lose the ability to influence at all.
This was the source of my "two curves" narrative, and I assume would be the approach that others would take if that was the reason for their reticence to discuss.
Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.
Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would avert by stopping the economic freefall at different points. Combining this with clear narratives about the recession could be quite effective.
On the other hand, I think it's quite plausible that this particular problem will ...
I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.
If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many ...
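As a minimal sketch of the front-loading logic (all parameters made up, and ignoring the correlation just mentioned), assuming donations stop permanently once you drift:

```python
# Hypothetical parameters chosen only to illustrate the front-loading argument.
ANNUAL_INCOME = 50_000
P_DRIFT_PER_YEAR = 0.05  # assumed 5%/year chance of "giving up"

def expected_total_donated(rate: float, giving_years: int) -> float:
    """Expected total donated if donations stop permanently at value drift."""
    p_still_giving, total = 1.0, 0.0
    for _ in range(giving_years):
        total += p_still_giving * rate * ANNUAL_INCOME
        p_still_giving *= 1 - P_DRIFT_PER_YEAR
    return total

print(f"steady 10% for 30 years:       ${expected_total_donated(0.10, 30):,.0f}")  # ~$78,500
print(f"front-loaded 30% for 10 years: ${expected_total_donated(0.30, 10):,.0f}")  # ~$120,400
```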
I've had a sense for a while that EA is too risk averse, and should be focused more on a broader class of projects most of which it expects to fail. As part of that, I've been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), to both update my own views as well as make sure I address any important arguments on either side.
I would appreciate it if people could link me to other sources that are important. I'm especially interested in people making arguments fo...
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.
I'm curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the "developed" category could quite possibly ...
I'm quite excited to see an impassioned case for more of a focus on systemic change in EA.
I used to be quite excited about interventions targeting growth or innovation, but I've recently become more worried about accelerating technological risks. Specific things that I expect accelerated growth to negatively affect include:
Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.
I do think this is a concern that we need to consider carefully. On the standard FHI/Open Phil view of x-risk, AI and bio account for most of the x-risk we face this century. I find it difficult to see how increasing economic development in LMICs could affect AI risk. China's massive growth is something of a special case on the AI risk front, I think.
I think growth probably reduces biorisk by increasing the capacity of health systems in poor countries. It seems that leading edge bioscience research is most likely to happen in advanced economies.
On cli...
This work is excellent and highly important.
I would love to see this same setup experimented with for grantmaking.
Found elsewhere on the thread, a list of weird beliefs that Buck holds: http://shlegeris.com/2018/10/23/weirdest
I'd be curious about your own view on unquantifiable interventions, rather than just the steelman of this particular view.
This just seems like you're taking on one specific worldview and holding every other worldview up to it to see how it compares.
Of course, this is an inherent problem with worldview diversification: how to define what counts as a worldview, and how to choose between them.
But still, intuitively, if your meta-worldview screens out the vast majority of real-life worldviews, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced with other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.