All of JoshYou's Comments + Replies

Toby Ord's existential risk estimates in The Precipice were for risk this century (by 2100) IIRC. That book was very influential in x-risk circles around the time it came out, so I have a vague sense that people were accepting his framing and giving their own numbers, though I'm not sure quite how common that was. But these days most people talking about p(doom) probably haven't read The Precipice, given how mainstream that phrase has become.

Also, in some classic hard-takeoff + decisive-strategic-advantage scenarios, p(doom) in the few years after AGI woul... (read more)

3
Isaac King
3mo
Yeah, most of the p(doom) discussions I see taking place seem to be focusing on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like "current LLMs will not get to AGI, but actual AGI will probably be hard to align", so they may give a high p(doom before 2100) and a low p(doom before 2030).
Answer by JoshYou, Dec 28, 2022
10
0
0

I'm probably "on the clock" about 45 hours per week - I try to do about 8 hours a day but I go over more often than not. But maybe only about 25-35 hours of that is focused work, using a relatively loose sense of "focused" (not doing something blatantly non-work, like reading Twitter or walking around outside). I think my work output is constrained by energy levels, not clock time, so I don't really worry about working longer hours or trying to stay more focused, but I do try to optimize work tasks and non-work errands to reduce their mental burdens.

Thanks for writing this, I think it's important that people at least understand the basics. EA blogs used to contain much more personal finance advice. In the past I've wondered whether EAs who joined more recently were less likely to know about personal finance as a result.

0
NicoleJaneway
1y
Neat — thanks for the links, Josh

I think you're overestimating how high EA-org salary spending is as a share of (remaining) total EA funding per year (in the neighborhood of 10%?).

1
trevor1
1y
That's notable, but it doesn't change the conclusion. Either cuts come out of that 10% or they don't, and if they do then the rationale is still very strong for lowering salaries instead of exclusively cutting staff.
Answer by JoshYou, Oct 27, 2022
9
1
1

I think the benefits of living in a hub city (SF, NYC, Boston, or DC) are very large and well worth the higher costs, assuming it's financially feasible at all, especially if you currently have no personal network in any city. You'll have easy access to interesting and like-minded people, which will have many diffuse impact and personal benefits.

Also, those are probably the only American cities, besides maybe Chicago and Philly, where it's easy to live without a car (and arguably it's only NYC).

1
Joseph Lemien
1y
Assuming that it's financially feasible to live in any of these four cities (San Francisco/Berkeley, New York, Boston, or Washington DC), how would you prioritize them? Any reasons a person should choose one over another?
5
Linch
1y
Note that Berkeley has significantly more EAs than SF, for people moving to the Bay Area.

I loved this Wikitravel article about American culture for this same reason.

2
ClaireZabel
1y
This is also great, thank you!
Answer by JoshYou, Sep 29, 2022
1
0
0

What makes someone good at AI safety work? How does he get feedback on whether his work is useful, makes sense, etc?

2
James Odene [User-Friendly]
2y
haha, thanks for sharing. This did make me laugh.

For the big-buck EtGers, what sort of donation percentages is this advice assuming? I imagine that if you're making $1M and even considering direct work then you're giving >>10% (>50%?) but I'm not sure.

I was interpreting the question to mean that you would donate $1M (and the amount you are making but not donating is left unspecified).

9
Yonatan Cale
2y
I was trying not to assume a percentage, but I wrote this post with ~90-100% donations in mind. Regardless, I think the action that makes sense is to ask the org.

I also actually have no idea how people do this, curious to see answers!

Also, the questions seem to assume that grantees don't have another (permanent, if not full-time) job. I'm not sure how common that is.

Melatonin supplements can increase the vividness of dreams, which seems counterproductive here. But maybe there is a drug with the opposite effect?

5
Julia_Wise
2y
The Astral Codex Ten post linked above covers this - for some people, nightmares and blood pressure seem to be connected, and prazosin (a blood pressure medication) can reduce nightmares. The problem being that sometimes it lowers blood pressure too much and makes people dizzy.
Answer by JoshYou, Jul 06, 2022
9
0
0

The margin/marginal value.  

Anyone trying to think about how to do the most good will be very quickly and deeply confused if they aren't thinking at the margin. E.g. "if everyone buys bednets, what happens to the economy?" 
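A toy illustration of the concept (entirely made-up numbers): when a cause area funds its cheapest opportunities first, average cost-effectiveness can look much better than what the next dollar actually buys, so decisions should use the marginal figure.

```python
# Toy sketch with made-up numbers: units of good are funded cheapest-first,
# so the marginal cost rises as more money flows in.
costs_per_unit = [10, 20, 40, 80]  # cost of the 1st, 2nd, 3rd, 4th unit of good

funded = 3  # suppose the three cheapest units are already funded
average_cost = sum(costs_per_unit[:funded]) / funded  # ~23.3: looks cheap
marginal_cost = costs_per_unit[funded]                # 80: what a new donation faces

print(average_cost, marginal_cost)  # 23.33... 80
```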

It might help to put some rough numbers on this. Most of the EA org non-technical job postings that I have seen recently have been in the $60-120k/year range or so. I don't think those are too high, even at the higher end of that range. But value alignment concerns (and maybe PR and other reasons) seem like a good reason not to offer, say, $300k or more for non-executive and non-technical roles at EA orgs.

I think EA orgs generally pay higher salaries than other non-profits, but below-market for the EA labor market (many of whom have software, consulting, etc as alternatives). I don't think they're anywhere close to "impact value" based on anecdotal reports of how much EA orgs value labor. I believe 80k did a survey on this (Edit: it's here). 

3
Cynwit
2y
I agree having these roles filled is still very valuable for the world and would continue to be so at higher wages. My worry comes from seeing what candidates' next-best alternative options are for other jobs. I worry that EA jobs are too good a deal, e.g. better benefits, better salary, and more impact, when one or a couple of those would be enough to motivate someone into that job. As you mention, this won't be as true for some types of roles, such as operations or computer science roles, where transferring from higher-paid 'normal' jobs is easier. I don't know whether 'nonprofit employees deserve more' is a relevant question, as that's more subjective and comes at the cost of the organisation's beneficiaries (if deservingness is the goal).

Adding on, paying higher wages than most nonprofits is a good thing. Most nonprofit employees are underpaid even though they arguably deserve to be paid more than their private-sector counterparts.

Fundraising is particularly effective in open primaries, such as this one. From the linked article:

But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.

Basically, said Darrell West, vice president and director of governance studi

... (read more)

Although early fundraising could be merely correlated with success rather than causing it, e.g. if it's an indicator of which candidates can generate support from the electorate.

(I'd be pretty confident there's an effect like this but don't know how strong, and haven't tried to understand if the article you're quoting from tries to correct for it.)

Note that large funders such as SBF can and do support political candidates with large donations via PACs, which can advertise on behalf of a candidate but are not allowed to coordinate with them directly. But direct donations are probably substantially more cost-effective than PAC money because campaigns have more options on how to spend the money (door-knocking, events, etc not just ads) and it would look bad if a candidate was exclusively supported by PACs.

3
KevinWei
2y
The optics concern makes sense to me, but I'm 90% certain PACs and Super PACs can and do spend on things that are not ads? Eg paid canvassers/phone bankers, polling, mailers, etc. Additionally (and I'm not advocating for this), there seem to be many ways to get around the coordination ban, e.g.: https://www.nytimes.com/2020/02/18/us/politics/buttigieg-votevets-super-pac.html
Answer by JoshYou, Feb 20, 2022
8
0
0

If you're not planning to go to grad school (and maybe even if you are), getting straight As in college probably means a lot of unnecessary effort.

1
SomePerson
2y
Assuming you don't plan to go to grad school, do you think that the highest-impact jobs you could apply to probably don't require stellar grades? Or that stellar grades don't increase your chances of getting the job much? That seems like a crux to me.
Answer by JoshYou, Dec 16, 2021
10
0
0

I gave most of my donations to the EA Funds Donor Lottery because I felt pretty uncertain about where to give. I am still undecided on which cause to prioritize, but I have become fairly concerned about existential risk from AI and I don't think I know enough about the donation opportunities in that space. If I won the lottery, I would then take some more time to research and think about this decision.

I also donated to Wild Animal Initiative and Rethink Priorities because I still want to keep a regular habit of making donation decisions. I think they are t... (read more)

I did Metaculus for a while but I wasn't quite sure how to assess how well I was doing and I lost interest. I know Brier score isn't the greatest metric. Just try to accumulate points?
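For context on the metric mentioned above: the Brier score is just the mean squared error between your probability forecasts and the 0/1 outcomes, so lower is better (always guessing 50% scores 0.25). A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: three resolved questions
print(brier_score([0.9, 0.2, 0.6], [1, 0, 0]))  # ~0.137; 0.0 would be perfect
```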

What does "consequentialist" mean in this context?

8
RobBensinger
2y
'Means-ends reasoning', 'selecting policies on the basis of their predicted consequences', etc. Discussed more in consequentialist cognition, and not to be confused with the consequentialist theory of moral value.

A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you've hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?

So there are a bunch of questions in this, but I can answer some of the ops-related ones:

  • We haven't had ops talent bottlenecks. We've had incredibly competitive operations hiring rounds (e.g. in our most recent hiring round, ~200 applications, of which ~150 were qualified at least on paper), and I'd guess that 80%+ of our finalists are at least familiar with EA (which I don't think is a necessary requirement, but the explanation isn't that we are recruiting from a different pool, I guess).
    • Maybe there was a bigger bottleneck in ~2018 and EA has grown a
... (read more)
6
MichaelA
2y
This comment sounds like it's partly implying "RP seems to have recently overcome these bottlenecks. How? Does that imply the bottlenecks are in general smaller now than they were then?" I think the situation is more like "The bottlenecks were there back then and still are now. RP was doing unusually well at overcoming the bottlenecks then and still is now." The rest of this comment says a bit more on that front, but doesn't really directly answer your question. I do have some thoughts that are more like direct answers, but other people at RP are better placed to comment, so I'll wait till they do so and then maybe add a couple things. (Note that I focus mostly on longtermism and EA meta; maybe I'd say different things if I focused more on other cause areas.)

In late 2020, I was given three quite exciting job offers, and ultimately chose to go with a combo of the offer from RP and the offer from FHI, with Plan A being to then leave FHI after ~1 year to be a full-time RP employee. (I was upfront with everyone about this plan. I can explain the reasoning more if people are interested.) The single biggest reason I prioritised RP was that I believe the following three things:

1. "EA indeed seems most constrained by things like 'management capacity' and 'org capacity' (see e.g. the various things linked to from scalably using labor).
2. I seem well-suited to eventually helping address that via things like doing research management.
3. RP seems unusually good at bypassing these bottlenecks and scaling fairly rapidly while maintaining high quality standards, and I could help it continue to do so."

I continue to think that those things were true then and still are now (and so still have the same Plan A & turn down other exciting opportunities). That said, the picture regarding the bottlenecks is a bit complicated. In brief, I think that:

* The EA community overall has made more progress than I expected at increasing th

Longtermism isn't just AI risk, but concern with AI risk is associated with an Elon Musk/technofuturist/technolibertarian/Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.

I wonder if it's a good or bad thing that AI alignment (of existing algorithms) is increasingly being framed as a social justice issue. Once you've talked about algorithmic bias, it seems less privileged to then say "I'm very concerned about a future in which AI is given even more power".

How's having two executive directors going?

8
Peter Wildeford
3y
I also think having a co-Executive Director is great. As Marcus said, we complement each other very well -- Marcus is more meticulous and detail-oriented than me, whereas I tend to be more "visionary". I definitely think we need both. We also share responsibilities and handle disagreements very well, and we have a trusted tie-breaking system. We've thought a few times about whether this merits splitting into CEO / COO or something similar and it hasn't ever made as much sense as our current system.
9
Marcus_A_Davis
3y
I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our ideas. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization. Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).

How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?

8
Marcus_A_Davis
3y
Hey Josh, thanks for the question! From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered. At the operational level, we set targets as percentages of time we want to spend on each cause area based on these factors, and we re-evaluate as our existing commitments, the data, and changes in our opinions about these matters warrant.

My description was based on Buck's correction (I don't have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don't believe. I don't mean to imply anything stronger than what Buck claimed about Leverage.

I invoked white nationalists not as a hypothetical representative of ideologies I don't like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile gr... (read more)

Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in ... (read more)

I also agree that it's ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I'm not talking about conservatives, or the "IDW", or people who don't like the BLM movement or think racism is no big deal. I'd be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (like on par with the views in Jonah Bennett's leaked emails; I don't think his defense is credible at all, by the way).

I don't think it's accurate to see white natio... (read more)

We've already seen white nationalists congregate in some EA-adjacent spaces. My impression is that (especially online) spaces that don't moderate away or at least discourage such views will tend to attract them - it's not the pattern of activity you'd see if white nationalists randomly bounce around places or people organically arrive at those views. I think this is quite dangerous for epistemic norms, because white nationalist/supremacist views are very incorrect and deter large swaths of potential participants and also people with t... (read more)

2
abrahamrowe
4y
I just upvoted this comment as I strongly agree with it, but also, it had -1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.

I don't know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)

From what I understand, since Three Gorges is a gravity dam, meaning it uses the weight of the dam to hold back water rather than its tensile strength, a failure or collapse would not necessarily be a catastrophic one: if some portion falls, the rest will stay standing. That means there's a distribution of severity within failures/collapses; it's not just a binary outcome.

To me it feels easier to participate in discussions on Twitter than on (e.g.) the EA Forum, even though you're allowed to post a forum comment with fewer than 280 characters. This makes me a little worried that people feel intimidated about offering "quick takes" here because most comments are pretty long. I think people should feel free to offer feedback more detailed than an upvote/downvote without investing a lot of time in a long comment.

4
Aaron Gertler
4y
I agree! I set aside a big chunk of my recent EAGx presentation about posting on the Forum to discussing Shortform posts, which exist largely to encourage brief content/"quick takes".
5
aogara
4y
Agreed, and I support EA Forum norms of valuing quick takes in both posts and comments. Personally, the perceived bar to contributing feels way too high.

Not from the podcast but here's a talk Rob gave in 2015 about potential arguments against growing the EA community: https://www.youtube.com/watch?v=TH4_ikhAGz0

EAs are probably more likely than the general public to keep money they intend to donate invested in stocks, since that's a pretty common bit of financial advice floating around the community. So the large drop in stock prices in the past few weeks (and possible future drops) may affect EA giving more than giving as a whole.

How far do you think we are from completely filling the need for malaria nets, and what are the barriers left to achieving that goal?

6
RobM
4y
Not close. Money. There are significant gaps in funding for nets and our current information is that for the period 2021-2023 that gap will be around US$500m to US$750m.

What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?

4
Cullen
4y
My approach is generally to identify relevant bodies of law that will affect the relationships between AI developers and other relevant entities/actors, like:
1. other AI developers
2. governments
3. AI itself
4. consumers
Much of this is governed by well-developed areas of law, but in very unusual (hypothetical) cases. At OpenAI I look for edge cases in these areas. Specifically, I collaborate with technical experts who are working on the cutting edge of AI R&D to identify these issues more clearly. OpenAI empowers me and the Policy team so that we can guide the org to proactively address these issues.

Seems like its mission sits somewhere between GiveWell's and Charity Navigator's. GiveWell studies a few charities to find the very highest impact ones according to its criteria. Charity Navigator attempts to rate every charity, but does so purely on procedural considerations like overhead. ImpactMatters is much broader and shallower than GiveWell but unlike Charity Navigator does try to tell you what actually happens as the result of your donation.

I think I would be more likely to share my donations this way compared to sharing them myself, because it would feel easier and less braggadocious (I currently do not really advertise my donations).

Among other things, I feel a sense of pride and accomplishment when I do good, the way I imagine that someone who cares about, say, the size of their house feels when they think about how big their house is.

Absolutely, EAs shouldn't be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I've seen a few examples of people effectively communicating about EA issues on Twitter, such as Julia Galef and Kelsey Piper, at a level of fidelity and niceness far above the average for that website. On the other hand they are briefer, more flippant, and spend more time responding to critics outside the community than they would on other platforms.

Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it's infamous for.

There's a bit of an extended universe of Vox writers, economists, and "neoliberals" that are interested in EA and sometimes tweet about it, and I think it would be potentially valuable to add some people who are more knowledgeable about EA into the mix.

On point 4, I wonder if more EAs should use Twitter. There are certainly many options to do more "ruthless" communication there, and it might be a good way to spread and popularize ideas. In any case it's a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.

Keep in mind that Twitter users are a non-representative sample of the population... Please don't accept kbog's proposed deal with the devil in order to become popular in Twitter's malign memetic ecosystem.

3
kbog
5y
I've recently started experimenting with that, I think it's good. And Twitter really is not as bad a website as people often think.

This all seems to assume that there is only one "observer" in the human mind, so that if you don't feel or perceive a process, then that process is not felt or perceived by anyone. Have you ruled out the possibility of sentient subroutines within human minds?

Hi JoshYou. Thanks for your very pertinent comment.

We are aware of the possibility of hidden qualia. It is a valuable hypothesis. Nevertheless, we found no empirical evidence to support it, at least in the literature on invertebrate sentience. If you will, you can view our project as a compilation and analysis of the existing evidence about the sentience of individual invertebrate organisms, as opposed to subroutines within those systems. Under this reading, what we call ‘unconscious processes’ would be understood as processes which are inaccessible to the... (read more)

Sadly, Jiwoon passed away last year.

3
Milan_Griffes
5y
Do you know if there's an obituary or memorial page somewhere?

Some links if you haven't seen them yet:

https://reducing-suffering.org/advanced-tips-on-personal-finance/

https://80000hours.org/2013/06/how-to-create-a-donor-advised-fund/

I don't use a DAF but I've considered it in the past. In my view, the chief advantage is that they allow you to claim the tax deduction when you deposit money into the DAF, before you actually make the donation. They're also exempt from capital gains taxes, although you can also avoid capital gains taxes by donating appreciated assets directly to the charity, but t... (read more)
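To put rough, entirely made-up numbers on that capital-gains point (assuming a 15% long-term capital-gains rate and stock that has appreciated from $4k to $10k):

```python
# Hypothetical illustration: donating appreciated stock directly (or via a DAF)
# vs. selling it first and donating the after-tax cash.
cost_basis = 4_000       # what the shares originally cost
market_value = 10_000    # what they're worth now
cg_rate = 0.15           # assumed long-term capital-gains rate

# Option A: sell, pay capital-gains tax, donate the remainder.
tax_paid = (market_value - cost_basis) * cg_rate  # $900
donated_after_sale = market_value - tax_paid      # $9,100

# Option B: donate the shares themselves; no capital-gains tax is owed,
# and the full market value is typically deductible.
donated_directly = market_value                   # $10,000

print(donated_after_sale, donated_directly)  # 9100.0 10000
```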

Open Phil would be a good candidate for this, though that's a difficult proposition due to its sheer size. It is a somewhat odd situation that Open Phil throws huge amounts of money around, much of which happens without any comment from the EA community.

A lot of this is the private sensitivity many community members feel about publicly criticizing the Open Philanthropy Project. I'd chalk it up to the relative power Open Phil wields having complicated effects on all our thinking on this subject: given how little the EA community comments on it, the lack of public feedback Open Phil receives seems out of sync with the idea that they're the sort of organization that would welcome it. Another thing is that the quality of criticism and defense of grantmaking decisions on both sides is quite low. It seems to m... (read more)

I wonder if the lack of tax deductibility and the non-conventional fundraising platform (GoFundMe) nudge people into not donating or donating less than they would to a more respectable-seeming charity.

(As a tangent, there's a donation swap opportunity for the EA Hotel that most people are probably not aware of).

3
Greg_Colbourn
5y
We now also have a PayPal MoneyPool.
2
Milan_Griffes
5y
Whoa, didn't know about the donation swap!
2
Milan_Griffes
5y
Fwiw I think GoFundMe is a pretty mainstream fundraising vehicle.

Speaking as someone with an undergrad degree in math, I would have found a non-technical summary for this post helpful, so I expect this applies even more to many other forum readers.

1
Greg_Colbourn
5y
Thanks for the feedback, we will incorporate a non-technical summary into Part 2. (Basically the whole thing is just an attempt to explicitly factorise the largely intuitive reasoning people might use in estimating the value of the project).

For one of the work tests I did for Open Phil, the instruction sheet specifically asked that the work test not be shared with anyone. That might have been intended as a temporary restriction, I'm not sure, but I'm not planning on sharing it unless I hear otherwise.

4
Jonas V
5y
I would ask Open Phil whether they'd be okay with you sharing it with the organization you're applying to (ideally only once you're past the first stage, and only if the other organization has expressed interest).

Agreed. I don't see any "poor journalism" in any of the pieces mentioned. A few of them would be "poor intervention reports" if we chose to judge them by that standard.

5
kbog
5y
"Finding the best ways to do good" denotes intervention reporting.

It's clear that climate change has at most a small probability (well under 10%) of causing human extinction, but many proponents of working on other x-risks like nuclear war and AI safety would probably give low probabilities of human extinction for those risks as well. I think the positive feedback scenarios you mention (permafrost, wetlands, and ocean hydrates) deserve some attention from an x-risk perspective because they seem to be poorly understood, so the upper bound on how severe they might be may be very high. You cite one simulation that burn... (read more)
