All of smountjoy's Comments + Replies

Oops, thank you! I thought I had selected linkpost, but maybe I unselected without noticing. Fixed!

2
Clifford
1y
Sorry, I meant it as two separate things.

1. I'm not sure tech will help you fundraise more at work. I spoke to one traditional payroll-giving fundraiser and he raised more for charity in a day than I did in several months. His method was to go round each table in an office, pitch them for 5 minutes on the tax benefits of signing up, and ask them to sign up on a piece of paper to give to a charity close to their hearts.
2. I'm not sure EA will help you fundraise more at work. As in the above example, people are happy to give to charity regardless of the EA pitch. I think the EA pitch can help inspire some people, and the fact that a chunk of people chose our recommended charities is encouraging, but I don't think it's a game-changer in the volume of donations.

Thanks! FWIW, I completely agree with your framing. In my head the question was about debate ("did FTX look sketchy enough that we should've seen big debates about it on the forum") and I should've made that explicit. Sounds like the majority answer so far is yes, it did look that bad. My impression is also the same as yours that those debates did not happen.

My (possibly wrong) understanding of what Eliezer is saying:

FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).

The failure in FTX was that they did not implement the right internal controls—not that the relationship was "hidden from investors and other stakeholders."

If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.

8
AnonymousRiskGuy
1y
I couldn't quite bottom out exactly what EY was saying, but I'm pretty sure it wasn't that. On your interpretation, EY said, "who EAs are fucking is none of [wider] EA's business [except people who are directly affected by the COI]". But he goes on to clarify "There are very limited exceptions to this rule like 'maybe don't fuck your direct report'". If that's an exception to the rule of EAs fucking being only of interest to directly affected parties, then it means EY thinks an EA having sex with a subordinate should be broadcast to the entire community. That's a very strict standard (although I guess not crazy - just odd that EY was presenting it as a more relaxed / less prurient standard than conventional financial risk management).

It also doesn't address my core objection, which is that EA leadership failed very badly to implement proper financial risk management processes. Generally my point was that EA leadership should be epistemically humble now and just implement the risk management processes that work for banks, rather than tinkering around and introducing their own version of these systems. Regardless of what EY meant, unless he meant 'We should hire in PWC to implement the same financial controls as every Fortune company', he is making exactly the same mistake EA leadership made with FTX: assuming that they could create better risk management from first principles than the mainstream system could from actual experience.

By the way, I disagree with the objective position here too. Every FTX investor needed to know about the COI and the management strategy FTX adopted in order to assess their risk exposure. This would be the standard at a conventional company (if the company knew about such a blatant COI from their CEO and didn't tell investors at a conventional company then their risk officers would potentially be liable for the fraud too, iirc).

Great idea!

Jump on a Zoom Call once a week with a carefully chosen peer for 1:1s and a group of 5-8 like-minded EAs with the same goal

Is this a group program, or one-on-one, or some of each? Is the "carefully chosen peer" matched with you for all 4–8 weeks?

What type or granularity of goal are you referring to?

4
Inga
2y
* There are different formats: just 1:1, just group, and a combination.
* "Carefully chosen peers": the peers are matched at the beginning and stay within the same group. People can change their group if they are not entirely comfortable in their current one.
* Goal granularity: we have categories of topics (e.g., self-esteem, perfectionism, productivity) but also ask for specific goals people have, i.e., what they would ideally achieve by attending. The type of support they prefer to get from peers will be taken into account as well (e.g., understanding, problem-solving, belonging).

Oops, thank you! Not sure what I was thinking. Fixed now.

Overall agreed, except that I'm not sure the idea of patient longtermism does anything to defend longtermism against Aron's criticism? By my reading of Aron's post, the assumptions there are that people in the future will have a lot of wealth to deal with problems of their time, compared to what we have now—which would make investing resources for the future (patient longtermism) less effective than spending them right away.

I think your point is broadly valid, Aron: if we knew that the future would get richer and more altruistically-minded as you describe,... (read more)

Wow, I'm glad I noticed Vegan Nutrition in among the winners. Many thanks to Elizabeth for writing, and I hope it will eventually appear as a post. A few months ago I spent some time looking around the forum for exactly this and gave up—in hindsight, I should've been asking why it didn't exist!

6
Elizabeth
2y
There is a full post planned, but I wanted actual data, which means running nutrition tests on the population I think is hurting, treating any deficiencies, and retesting. I have a grant for this (thanks SFF!) but even getting the initial tests done is taking months, so the real post is a very long ways out.

PS. I have no more budget to pay for tests, but if anyone wants to cover their own test ($613, slightly less if you already have or want to skip a genetic test) and contribute data, I'd be delighted to have you. Please PM me for details.

I'm starting to think there's no possible question for which Will can't come up with an answer that's true, useful, and crowd-pleasing. We're lucky to have him!

1
astupple
2y
Thank you!

If it does not serve any useful purpose, then why focus on longtermism?

I think you're right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appeal to longtermism. But here's one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important; and because of the longtermist arguments, many talented people are working zealously to solve those issues—people who would otherwise be working on other things.

Obviously this doesn't address your concern that longtermism is incorrect; it's merely a reason why, if longtermism is correct, it's a useful thing to talk about.

Agreed. The first big barrier to putting self-modification into practice is "how do you do it"; the second big barrier is "how do you prove to others that you've done it." I'm not sure why the authors don't discuss these two issues more.

  • On how to actually self-modify/self-deceive, all they say is that it might involve "leaning into and/or refraining from over-riding common-sense moral intuitions". But that doesn't explain how to make the change irrevocably (which is the crucial step).
  • On how to demonstrate self-modification to others, they mention a "societ
... (read more)
2
Brad West
2y
Actual self-modification: it's similar to the problem with Pascal's wager. Even if you can persuade yourself of the utility of believing proposition X, it is at best extremely difficult, and at worst impossible, to make yourself believe it if your epistemological system leads you to a contrary belief.

Counterfeiting a deontological position: if the consequentialist basis for rejecting murder-for-organ-harvest is clear, you may nonetheless be able to convey a suitable outrage. Many of the naively repugnant utilitarian conclusions would actually be extraordinarily corrosive to our social fabric and could inspire similar emotional states. Consequentialists are no less emotional, caring beings than deontologists (in fact we care more, because we don't subordinate well-being to other principles). Thus the consequentialist surgeon could be just as perturbed by such repugnant schemes because of the actual harm they would entail!

Thanks for writing! It sounds like part of your pitch is that there are some types of therapy which are much more effective than the types in common use. Scott's book review of all therapy books makes me pretty pessimistic about that. If you've read that post, do you have any thoughts?

1
Dvir Caspi
3y
I read it now; well, it's a pretty cynical post. While there are obviously those books that give you false magical hopes for instant relief, and it's fun to joke about them, I am not a fan of the cynical tone. Some people say cynicism is the opposite of hope, and I kinda agree. While it's good to criticize, mental health and health in general are supposed to be fields of hope. Obviously not false hope, but there are objective and subjective reasons for hope in treatment. However, there are still some important points in the post which I am definitely noting down.
1
Dvir Caspi
3y
Thank you, I will definitely definitely read. 

Hi Sarah! I broadly agree with the post, but I do think there's a marginal value argument against becoming a doctor that doesn't apply to working at EA orgs. Namely:

Suppose I'm roughly as good at being a doctor as the next-doctor-up. My choosing to become a doctor brings about situation A over situation B:

Situation A: I'm a doctor, next-doctor-up goes to their backup plan
Situation B: next-doctor-up is a doctor, I go to my backup plan

Since we're equally good doctors, the only difference is in whose backup plan is better—so I should prefer situation B, in wh... (read more)

2
Sarah Eustis-Guthrie
3y
That's a good point, and I'm inclined to agree, at least on an abstract level. My question then becomes how you evaluate what the backup plans of others are. Is this something based on data? Rough estimations? It seems like it could work on a very roughly approximated level, but I would imagine there would be a lot of uncertainty and variation.

I had the opposite takeaway from the podcast. Ajeya and Rob definitely don't come to a confident conclusion. Near the end of the segment, Ajeya says, referring explicitly to the simulation argument but also, I think, to anthropics generally,

I would definitely be interested in funding people who want to think about this. I think it is really deeply neglected. It might be the most neglected global prioritisation question relative to its importance. There’s at least two people thinking about AI timelines, but zero people [thinking about simulation/anthropics], basically. Except for Paul in his spare time, I guess.

2
D0TheMath
3y
Ah, thanks. It was a while ago, so I guess I was misremembering.

When I first read it, I assumed that "meaningful, lasting change" meant "all the kinds of changes we want," rather than "any particular change." Maybe that's what the authors intended. But on rereading I think your interpretation is more correct.

Congrats! I don't know you but I'm very happy for you!

The networking was hard for me, and I often felt thrown off or wired up after my networking calls. It took me a long time to send each email.

I'm impressed you were able to persist in your job search while feeling this way. Did you have a particularly strong motivation toward your long-term goal, or were there other strategies you used to overcome these mental blockers?

8
new_staffer
3y
Thank you!

I do think there was a strong motivation. I was convinced that landing a Congressional role was an early career dream job for me because 1) with just a few years of investment I could be in a position to make policy change, much faster than other paths, 2) even if it ended up being too hard/not what I expected, it is great career capital for political advocacy, which is my Plan B, and 3) I generally thought I would enjoy this type of work a lot.

At one point, I decided to commit 100% to applying for Congressional jobs and give myself a 6-month deadline to do it. I'm not sure if I would've actually quit after 6 months, but the looming threat of 'total failure' if I didn't get there was really terrifying but motivating. Also, being unemployed/underemployed sucks and was pretty uncomfortable in and of itself.

I knew networking would be the key to landing a role, so day-to-day I just had to keep pushing myself to do it. I had faith that if I just kept at it, I couldn't fail. It was also helpful to remember that networking is totally routine in Washington DC and in Congress. People really are generous and willing to help, and receiving that help doesn't make you annoying. Emotionally, I never stopped feeling like a nuisance, but it was good to know on an intellectual level that everything I was doing was normal.

Also, if you ask questions that you are genuinely curious about in your meetings with people, it will make the meetings more interesting. This seems obvious, but it is easy to get caught in the trap of asking the same questions and hearing the same answers.

And lastly, I just sort of accepted that it was always going to be uncomfortable for me. So I just had to push past the 'discomfort' points, like pushing 'send' on the email, or the moments where I asked people for concrete things.

 Just broaden your conception of the team to the whole EA community, and stop worrying about how much of the “credit” is yours.

To me, this is the crux. If you can flip that switch, problem (practically) solved—you can take on huge amounts of personal risk, safe in the knowledge that the community as a whole is diversified.

Easier said than done, though: by and large, humans aren't wired that way. If there's a psychological hurdle tougher than the idea that you should give away everything you have, it's the idea that you should give away everything you ha... (read more)

3
AndrewDoris
3y
That's all good, intuitive advice. I'd considered something like moral luck before but hadn't heard the official term, so thanks for the link. I imagine it could also help, psychologically, to donate somewhere safe if your work is particularly risky. That way you build a safety net. In the best case, your work saves the world; in the worst case, you're earning to give and saving lives anyway, which is nothing to sneeze at. My human capital may best position me to focus my work on one cause to the exclusion of others. But my money is equally deliverable to any of them. So it shouldn't be inefficient to hedge bets in this way if the causes are equally good.

This was helpful to me (knowing nothing about climate policy) in terms of ideas about how to break down TSM's "transformative change" into more tractable parts. I guess I'd been treating "transformative change" and what Dan said about "fundamental uncertainty" as something like semantic stopsigns.

One thing I'm confused about:

Indeed, insofar as mass mobilization and climate grassroots activism are strongly tied to the Democratic party and making Democrats more ambitious on climate, it seems likely that the value of this advocacy has decreased due to the rel

... (read more)
4
jackva
3y
Thanks! To answer your question, there are two pieces here:

1) Sunrise is most useful right now in pressuring Democrats; it is quite partisan and does not hold as much sway over Republicans. As such, when the overall situation is less Democrat-leaning, the usefulness of Sunrise is lower overall. Sunrise candidates will not challenge Republican incumbents, so an important mechanism of creating pressure on Democratic officials does not exist.

2) Yes, of course Sunrise could hurt Democrats' election chances. This was, with regard to moderates and progressives more generally, an active debate after the disappointing (compared to expectations) election in November. One mechanism would be that pressure on Democratic candidates moves them closer to the left to deal with that pressure, which then reduces their election chances. Another mechanism would be that the progressive wing's perception hurts candidates in moderate/conservative districts. Going forward, a mechanism would be that the Sunrise/progressive agenda is perceived as a partisan overreach that leads to a "punishment" in the mid-terms.

Just to be sure, these are active debates within the party and I am not suggesting that the moderates blaming progressives are always right. I am just saying that when we form a distribution over outcomes of the goodness of Sunrise, we should include those mechanisms as well, because they are a plausible part of the overall story (they are explanations held by many people, and the TSM analysis by GG does not refute them). These are mechanisms that push the EV of funding Sunrise down.

Thanks for that clarification—maybe the $1m/year figure is distracting. I only mentioned it as an illustration of this point:

The post argues that the kind of talent valuable for direct work is rare. Insofar as that's true, the conclusion ("prefer direct work") only applies to people with rare talent.

Thanks, Mark! I've been struggling to figure out what career goals I myself should pursue, so I appreciated this post.

Those considering EtG as their primary career path might want to consider direct work instead

I think this advice is missing a very important qualification: if you are a highly talented person, you might want to consider direct work. As the post mentions, highly talented people are rare—for example, you might be highly talented if you could plausibly earn upwards of $1m/year.

Regularly talented people are in general poor substitutes for highl... (read more)

5
tamgent
3y
I think there are lots of opportunities for direct work at non-EA orgs with sufficient demand. 

I think this advice is missing a very important qualification: if you are a highly talented person, you might want to consider direct work. As the post mentions, highly talented people are rare—for example, you might be highly talented if you could plausibly earn upwards of $1m/year.

I expect this isn't what you're actually implying, but I'm a bit worried this could be misread as saying that most people who are sufficiently talented in the relevant sense to work at an EA org are capable of earning $1m/year elsewhere, and that if you can't, then you prob... (read more)

2
Mark Xu
3y
"Try and become very talented" is good advice to take from this post. I don't have a particular method in mind, but becoming the Pareto best in the world at some combination of relevant skills might be a good starting point. This is a good point. People able to competently perform work they're unenthusiastic about should, all else being equal, have an outsized impact because the work they do can more accurately reflect the true value behind the work.