All of toonalfrink's Comments + Replies

AGI Safety Fundamentals curriculum and application

I have added a note to my RAISE post-mortem, which I'm cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare this project with RAISE. Why is that one succeeding while this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing, you can ... (read more)

A Red-Team Against the Impact of Small Donations

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate, I would proceed to write up my thoughts on EA Forum and get feedback. Since there's an existing popular venue for crowdsour

... (read more)
How do EAs deal with having a "weird" appearance?

To me, reducing your weirdness is equivalent to defection in a prisoner's dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.

Of course you can't just go all-out on weirdness, because the cost you'd incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford, but not weirder. If everyone did that, we would gradually expand the range of acceptable things outward.
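A toy payoff sketch of this dilemma (all functional forms and numbers here are invented for illustration): each person bears the social cost of their own weirdness, while everyone shares the benefit of the expanded range.

```python
# Hypothetical payoff model: each person picks a weirdness level w in [0, 1].
# Everyone shares the benefit of an expanded range of acceptable behavior,
# but each person bears the social cost of their own weirdness.

def payoffs(weirdness_levels, shared_benefit=0.6, social_cost=1.0):
    total = sum(weirdness_levels)
    return [shared_benefit * total - social_cost * w for w in weirdness_levels]

defect = payoffs([0.0, 0.5, 0.5])     # one person drops their weirdness to zero
cooperate = payoffs([0.5, 0.5, 0.5])  # everyone stays slightly weird

# The defector does better individually, but the total reward shrinks:
assert defect[0] > cooperate[0]
assert sum(defect) < sum(cooperate)
```

With these made-up parameters, lowering your own weirdness always raises your own payoff, yet every unit of weirdness adds more total benefit than it costs: the prisoner's dilemma structure described above.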

How many people should get self-study grants and how can we find them?

Because if there is excess funding and fewer applicants, I'd assume such applicants would also get funding.

I have seen examples of this at EA Funds, but it's not clear to me whether this is being broadly deployed.

How many people should get self-study grants and how can we find them?

Let's interpret "study" as broadly as we can: is there anything someone can do on their own initiative, and do better if they have time, that increases their leadership capacity?

Khorton: The best thing they can do is probably to lead a project, either through paid work or as a volunteer. Another good thing would be to speak to a mentor about their leadership work. When those two things are already happening, books or courses can be really useful. But without practicing leadership and getting regular feedback, I don't expect very good returns from independent study. (The exception would be someone who's already working at an executive level and wants to take a secondment for personal study and then return to a similar role - the fact that they've already gotten a lot of leadership experience and feedback makes me more positive about the value of them taking time off to study and reflect.)
How many people should get self-study grants and how can we find them?

I think the biggest constraint for having more people working on EA projects is management and leadership capacity. But those aren't things you can (solely) self-study; you need to practice management and leadership in order to get good at them.

What about those people that already have management and leadership skills, but lack things like:

  • Connections with important actors
  • Awareness of the incentives and the models of the important actors
  • Awareness of important bottlenecks in the movement
  • Background knowledge as a source of legitimacy
  • Skin in the game / a trac
... (read more)
Khorton: Hey Toon, that's the kind of person I was talking about in my third paragraph (someone with a proven track record of a valuable skill). Like I said, in most cases I'd expect this person to learn faster in a job context or with a grant for a particular project than simply "self-study", but I do think there are some cases where people with a good track record should apply for a self-study grant!
How many people should get self-study grants and how can we find them?

There is also significant loss caused by moving to a different town, i.e. loss of important connections with friends and family at home, but we're tempted not to count those.

AppliedDivinityStudies: That's true, but feels less deadweight to me. You have fewer friends, but that results in more time. You move out of one town, but into another with new opportunities.
Greg_Colbourn: My original idea (quote below) included funding people at equivalent costs remotely. Basically no one asked about that. I guess because not many EAs have that low a living cost (~£6k/yr). And not that many could without moving to a different town (or country), and there isn't much appetite for that / coordination is difficult. Maybe we need a grant specifically for people to work on research remotely that has a higher living cost cap? Or a hierarchy of such grants with a range of living costs that are proportionally harder to get the higher the costs are.
What high-level change would you make to EA strategy?

I would train more grantmakers. Not because they're necessarily overburdened but because, if they had more resources per applicant, they could double as mentors.  

I suspect there is a significant set of funding applicants that don't meet the bar but would if they received regular high-quality feedback from a grantmaker.

(like myself in 2019)

List of EA funding opportunities

I'd recommend putting the Airtable at the top of your post to make it the Schelling point.

Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

What would it have taken to do something about this crisis in the first place? Back in 2008, central bankers were under the assumption that the theory of central banking was completely worked out. Academics were mostly talking about details (basically tweaking the Taylor rule).

The theory of central banking is already centuries old. What would it have taken for a random individual to overturn that establishment? Including the culture and all the institutional interests of banks etc? Are we sure that no one was trying to do exactly that, anyway?

It seem... (read more)

Venkatesh: Consider this - say the EA figured out the number of people the problem could affect negatively (i.e. the scale). Then even if there is a small probability that the EA could make a difference, shouldn't they have just taken it? Also, even if the EA couldn't avert the crisis despite their best attempts, they still get career capital, right? Another point to consider - IMHO, EA ideas have a certain dynamic of going against the grain. EA challenged the established practices of charitable giving that existed for a long time. So an EA might be inspired by this and indeed go against established central bank theory to work on a more neglected idea. In fact, there is at least some anecdotal evidence to believe that not enough people critique the Fed. So it is quite neglected.

I think your title could be a bit more informative.

Holden's writing seems to follow a hype cycle on the idea of transparency: first you apply a fresh new idea too radically, then you run into its drawbacks, then you regress to a healthy, moderate application of it.

As someone who has felt some of the drawbacks of being outside this "inner ring", I wouldn't complain about the transparency per se. Lack of engagement, maybe, but that turned out to be me. 

I'm still waiting for concrete suggestions. I also think your project would be more fruitful if you interviewed these people in person and published the result.

Would removing the “crap” have been sufficient to make it polite? I like to be direct.

Yes, that would have been sufficient. The "withdraw this post" part seems a bit harsh (and redundant, since editing a post entails "withdrawing" the old version), but not to the point where I'd say anything about it as a mod.

I appreciate your engaging with my comment — it's hard to do mod stuff without coming across as overbearing, but I really value your contributions to the Forum. It's just a struggle to find balance between our more direct commenters and the people who find the Forum's culture intimidating.

How can I handle depictions of suffering better emotionally?

I can’t look inside your head, but if the mere thought of something makes you suffer, it probably means it reminds you of something that you are trying to ignore, i.e. trauma.

Assuming that this is indeed the case, I would further speculate that you are ignoring this memory or unpalatable insight because you subconsciously expect that thinking of it would disturb you to the point of getting in the way of whatever you would prefer to be doing, like idk, whatever your daily pursuits are.

The solution then, given these assumptions, would be to set aside some ti... (read more)

(lots of downvotes, so where are all the comments?)

I want to reward you for bringing up the topic of power dynamics in EA. Those exist, like in any community, but especially in EA there seems to be a strong current of denying the fact that EAs are constrained by their selfish incentives like everyone else. It requires heroism to go against that current.

But by just insinuating and not delivering any concrete evidence or constructive suggestions for change, you haven't really done your homework. I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, then repost it.

Aaron Gertler (Moderator Comment):

I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, then repost it.

Moderator here! 

This looks like it was intended to be tough love, but it's also a mild-to-moderate case of "unnecessary rudeness". 

Let's try to stay polite in our comments, especially when the issue at stake is "I think your post is unclear" rather than "I think this post will hurt people" or "I think this thing you want people to donate to is a scam".

Milan_Griffes: Where are all the comments, indeed... "I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, then repost it." I think this is basically fair, though from my perspective the narrative crap is doing important work. I have limited capacity these days, so I'm writing this argument as a serial, posting it as I can find the time. In the meantime, this sequence from a few years ago makes a similar argument following the form you suggest.
EA considerations regarding increasing political polarization

What does "cancelling" mean, concretely? I don't imagine the websites will be closed down. What will we lose?

Off the top of my head: the ability to host conferences without angry protesters in front, the chance to be mentioned in a favorable manner by a major mainstream news outlet, and the willingness of high-profile people to associate with EA. Look up what EA intellectuals thought in the recent past about why it would be unwise for EA to make too much noise outside the Overton window. This is still valid, except that the Overton window has now begun to shift at an increasing pace.

Note that this is not meant to be an endorsement of EA aligning with or paying lip service to political trends. I personally believe an increase of enforced epistemic biases to be an existential threat to the core values of EA.

EA considerations regarding increasing political polarization

I've been trying to figure out why cancel culture is so powerful. If only ~7% of people identify as pro social justice, why are social media platforms so freely bending to their will? Surely it's not out of the goodness of their hearts, what is the commercial motive? I don't buy the idea that it is simply a marketing stunt. Afaict a pro-SJ stance does not make a company look much more favorable at this point.

But then I found this:

... (read more)
Dale: I added up the numbers in the first article and got around $634m of total 2018 ad spend, vs 2019 Facebook revenue of $70.7bn - less than 1%. Many of those companies only say they are 'pausing' or 'for July', rather than stopping. Finally, a company that was re-considering its Facebook ad spend for unrelated reasons might want to frame it as a moral stance. Perhaps principal-agent problems are at play; individual ideologues put SJ ahead of corporate profitability, and the much larger number of ordinary people are afraid of being bullied so do not speak out. But this is obviously not a full explanation.
Concerning the Recent 2019-Novel Coronavirus Outbreak

Re exercise: I worry that by putting myself in a catabolic state (by exercising particularly hard), I temporarily increase my risk. Also by being at the gym around sweaty strangers. Is this worry justified?

Sean_o_h: I don't think so to any significant extent in most circumstances. And any tiny spike is counterbalanced by the general benefits pointed to by David. My understanding (former competitive runner) is that extended periods of heavily overdoing it with exercise (overtraining) can lead to an inhibited immune system among other symptoms, but this is rare with people generally keeping fit (other than e.g. someone jumping into marathon/triathlon training without building up). Other things to avoid/be mindful of are the usual (hanging around in damp clothes in the cold, hygiene in group sporting/exercise contexts, etc.).
Moloch and the Pareto optimal frontier

I like this model but I think a more interesting example can be made with different variables.

Imagine x and y are actually both good things. You could then claim that a common pattern is for people to be pushing back and forth between x and y. But meanwhile, we may not be at the frontier at all if you add z. So let's work on z instead!

In that sense, maybe we are never truly at the frontier, all variables considered.

Related to this line of thinking: affordance widths
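A minimal sketch of that point (the options and scores below are made up): an option can sit on the Pareto frontier in x and y, yet be dominated once a third good z is counted.

```python
# Hypothetical options scored on three goods (x, y, z).
options = {
    "A": (3, 1, 0),
    "B": (1, 3, 0),
    "C": (3, 1, 2),  # matches A on x and y, but strictly better on z
}

def dominated(p, others, dims):
    """True if some other option is at least as good on every listed
    dimension and strictly better on at least one."""
    return any(
        all(q[d] >= p[d] for d in dims) and any(q[d] > p[d] for d in dims)
        for q in others
    )

a = options["A"]
rest = [v for k, v in options.items() if k != "A"]
print(dominated(a, rest, dims=[0, 1]))     # False: A is on the x-y frontier
print(dominated(a, rest, dims=[0, 1, 2]))  # True: adding z exposes the slack
```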

MichaelA: Your comment also reminded me of Robin Hanson's idea that policy debates are typically like tug of war between just two positions, in which case it may be best to "pull the rope sideways". Hanson writes: "Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy." That seems very similar to the idea that we may be at (or close to) the Pareto frontier when we consider only two dimensions, but not when we add a third, so it may be best to move towards the three-dimensional frontier rather than skating along the two-dimensional frontier.
JustinShovelain: Nice! I would argue, though, that because we do not generally consider all dimensions at once, and because not all game theory situations ("games") lend themselves to this dimensional expansion, we may, for all practical purposes, sometimes find ourselves in this situation. Overall, though, the idea of expanding the dimensionality does point towards one way to remove this dynamic.
Does climate change deserve more attention within EA?

If you take this model a step further, it suggests working on whatever the most tractable problem is that others are spending resources on, regardless of its impact, because that will maximally free up energy for other causes.

Sounds like something someone should simulate to see if this effect is strong enough to take into account.
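A minimal version of that simulation (all parameters below are invented): cause A is tractable and absorbs outside effort but has no direct impact here; once A is finished, a fraction of the freed outside effort flows to high-impact cause B.

```python
def impact_on_b(ea_share_on_a, periods=10, ea_effort=10.0,
                outside_effort=30.0, cost_to_solve_a=70.0,
                flow_fraction=0.8):
    """Total effort reaching cause B over the horizon. Outside actors
    work only on A until it is solved, then partially redirect to B."""
    progress_a, impact_b = 0.0, 0.0
    for _ in range(periods):
        if progress_a < cost_to_solve_a:
            # A unsolved: outside effort all goes to A
            progress_a += outside_effort + ea_effort * ea_share_on_a
            impact_b += ea_effort * (1 - ea_share_on_a)
        else:
            # A solved: freed outside effort partially flows to B
            impact_b += ea_effort + flow_fraction * outside_effort
    return impact_b

print(impact_on_b(0.0))  # EAs ignore the tractable cause
print(impact_on_b(1.0))  # EAs rush it to completion first
```

With these made-up numbers, rushing the tractable cause wins; with a lower flow_fraction it loses, which is exactly the kind of parameter the simulation would need to estimate.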

Derek: It's an interesting idea. Often the resource fungibility won't be huge so it may not make much difference, but in some cases it might. It also seems to assume that it will use fewer total resources than working on both problems less intensively for a longer period. I would guess that it would usually be more efficient to divide resources and work on problems simultaneously, in part due to diminishing returns to investment. E.g. shifting all AI researchers to climate change would greatly hinder AI research but perhaps not contribute much to climate change mitigation, even assuming good personal fit of researchers, since there are already lots of talented people working on the issue. But I've thought about this for less than 5 minutes so it might deserve a deeper dive. I'm not likely to do it, though.
Announcing the launch of the Happier Lives Institute
[Our] research group is investigating the most promising giving opportunities among mental health interventions in lower and middle-income countries.

Any reason why you're focusing on interventions that target mental health directly and explicitly, instead of any intervention that might increase happiness indirectly (like bednets)?

MichaelPlant: Hello Toon. The reason we're starting with this is that it looks like it could be a more cost-effective way of increasing happiness than the current interventions effective altruists tend to have in mind (alleviating poverty, saving lives), and it hasn't been thoroughly investigated yet. Our plan is to find the most cost-effective mental health intervention and then see how that compares to the alternatives. I ran some initial numbers on this in a previous post on mental health. I'm not sure if that answers your question. The fact that one intervention is more direct than another (i.e. there are fewer causal steps before the desired outcome occurs) doesn't necessarily imply anything about comparative cost-effectiveness.
Please use art to convey EA!

Can we come up with a list of existing pieces of art that come close to this? I don't expect good ideas to come from first principles, but there might be some type of art out there that is non-cringy and conveys elements of EA thinking properly.

I'll start with Schindler's List, and especially this scene, where the protagonist breaks down while calculating just how many more lives he could have saved if he had sold his car, his jewelry, etc.

Jemma: The book this is based on, Schindler's Ark by Thomas Keneally, is also great if you want to delve more into character psychology.
A Framework for Thinking about the EA Labor Market

Okay, you've convinced me that a US based EA organisation should consider raising their wages to attract top talent.

This data does make me doubt the wisdom of basing non-local activities in the US, but that is another matter.

A Framework for Thinking about the EA Labor Market

It does provide clarity, and I can imagine that there are unfortunate cases where those entry level salaries aren't enough.

As I said elsewhere in this thread, I think this problem would be best resolved simply by asking how much an applicant needs, instead of raising wages across the board. The latter would cause all kinds of problems. It would worsen the already latent center/periphery divide in EA by increasing inequality, it would make it harder for new organisations to compete, it would reduce the net number of people that we can employ, etc.

But I

... (read more)

The latter seems substantially better than the former by my lights (well, substituting 'across the board' for 'let the market set prices'.)

The standard econ-101 story for this is (in caricature) that markets tend to efficiently allocate scarce resources, and you generally make things worse overall if you try to meddle in them (although you can anoint particular beneficiaries).

The mix of strategies to soft-suppress (i.e. short of frank collusion/oligopsony) salaries below market rate will probably be worse than not doing so - the usual p... (read more)

Jon_Behar: I think you’re right that these problems would occur if a handful of orgs with the most money started raising salaries across the board in the current environment. But a commenter on FB summed up my econ 101 read on this perfectly (to reiterate, I'm not an economist): “If the community can't afford market rates maybe it's time to start admitting that the community is funding constrained.”
A Framework for Thinking about the EA Labor Market

30 was just an arbitrary number. Is London still hard to live in for 60? Mind that the suggestion is to raise salaries from 75k to 100k. I can't imagine many cases where 75k is prohibitive, except for those that feel a need to be competitive with their peers from industry (which, fwiw, is not something I outright disapprove of)

We should probably operationalize this argument with actual data instead of reasoning from availability.

Jon_Behar: Using NYC as an (admittedly US-centric and high cost of living) example, the average cost of private school is ~$18k/year, and many of the good ones are around $50k. So if you think of a couple that wants to have a couple of kids, doesn’t want to send them to a bad (possibly dangerous) public school, and would like to put those kids through college, it’s unlikely those people would even consider non-profit work unless they had unusual circumstances that would allow them to do so (e.g. one partner with particularly high earning power, a trust fund, etc.)
A Framework for Thinking about the EA Labor Market

Given the numbers that we have in mind, these examples are all very specific to the US.

Medical expenses don't get much past $2k per year in most European countries. The only place where the cost of living is prohibitively high past a ~$30k income is San Francisco.

I'm not arguing against the idea that some people exist that should be given the $150k that is needed to unlock their talents. I'm arguing that this group of people might be very small, and concentrated in your bubble.

I think that's the crux of the argument. If a majority of senio... (read more)

Jon_Behar: Very well put. Agree this is the crux of our disagreement; my intuition is that there’s a much larger pool of people who would be enticed by the higher pay.
Khorton: I think it would be very difficult to raise a family in London on $30k (or even £30k). Rent for a family home in good repair in Zone 2 is like £2000 a month. So a £30k salary would only cover the rent of a place like that. To make £30k work, you'd have to live quite far away and have a long commute, which has a major impact on quality of life. I think that's true in many other major cities.
A Framework for Thinking about the EA Labor Market
a lot of resentment would emerge

To the extent that this would cause resentment, I'd interpret that as a perception of a higher counterfactual, which means that the execution wasn't done well.

A Framework for Thinking about the EA Labor Market

It's unclear to me what you mean by privilege. I'm trying to imagine a situation where making 75k is not enough for a low-privilege person, but I can't think of any. AFAIK 75k is an extremely high wage. I know a CEO of a bank who makes that.

Toonalfrink, I'm hesitant to provide a concrete definition of privilege because it's definitely an amorphous thing. That being said, since I know it means very different things in different countries, I should have provided some context in my examples:

Employer Location: US major metropolitan city

Entry level salary/benefits: $35k; competitive health insurance; no 401k/403b (retirement fund) match; no maternity leave

Looking briefly at US Dept. of Education data, the median American student loan debt burden for those with a bachelor's deg... (read more)

Jon_Behar: Question: if all you knew about a bank was that the CEO made $75k a year, would that knowledge make you more or less likely to invest in that bank (from a purely financial perspective)? That would make me way less likely to invest.
Jon_Behar: Here’s a simple example: imagine that you, or someone you were responsible for taking care of, had medical expenses of $100k/year. In that case, $75k wouldn’t even let you break even; you’d still be taking on lots of debt. Other examples: you have debt, you have kids (and/or other relatives you’re financially responsible for), you live in a high cost of living location, or various other factors that have no relation to someone’s suitability for a job.
A Framework for Thinking about the EA Labor Market

Don't advertise the wage on the ad. Ask candidates how much they need to be satisfied, then give them that amount or the amount that they are economically worth to you, whichever is lower. Discourage employees from disclosing how much they make.

I find this highly problematic. Candidates who need money more (e.g. those with dependents) will assume a non-profit job won’t pay enough in the first place, and won’t even apply.

It’s also worth noting that we live in a historical context where discouraging employees from disclosing how much they make has been a strategy to suppress wages, often discriminatorily. (See here for why Open Cages has taken the opposite approach and embraced salary transparency.)

A Framework for Thinking about the EA Labor Market

In preventing wage dissatisfaction, I think it's better to look at perceived counterfactuals. This can come from being used to a certain wage, or a certain counterfactual wage being very obvious to you. Or it can come from your peers making a certain wage.

You seem to assume something like "people don't like to accept a wage that is lower than they can get". I suggest replacing that with "people don't like to accept a wage that is lower than they feel they can get".

I know some people that are deliberately keeping their income frozen at 15k so they won't get

... (read more)
Jon_Behar: Agree looking at perceived counterfactuals can be a helpful distinction. I don’t see freezing incomes at 15k as a sustainable or scalable solution, at least in the context of harnessing resources to work on the world’s largest problems. But I think this brings up an interesting point about loss aversion and path dependency. I’d argue (and I think you’re doing the same) that people are much more likely to freeze their income at 15k at the start of their careers, but much less likely to do so after they’ve already started earning more and would need to cut back to that level. Using @dgjpalmer’s experience as an example, I’d guess most of their Ivy League colleagues started off in the nonprofit industry rather than transitioning to it after some time in the private sector. And this dynamic introduces biases like shortages of skills that people pick up in the private sector. This sounds very difficult to execute well over time, and my guess is that a lot of resentment would emerge. And it doesn’t solve selection bias problems, discussed more here.
Open Thread #44

I sometimes think about seeking funding outside of EA to increase the amount of available EA funding.

But I never made serious work of it. I have no idea what is available, or where to look. Governments? Foundations? With which ones does an Xrisk project have a chance? What's a good strategy for applying to them?

I'd be very happy if someone dived into this.

Psychedelics Normalization

You forgot ibogaine, which seems to be the most compelling example. According to lots of anecdotes across the internet, it reliably cures decades-old addictions to heroin in a single sitting.

Still I don't think psychedelic use is necessarily a good thing. It makes people more open to experience, which for some will be a door to madness. See for example Scott Alexander's writings about it

You forgot ibogaine, which seems to be the most compelling example.

Psilocybin has also been very promising for treating addictions, including longstanding tobacco addiction (Johnson et al. 2017) and alcoholism (Bogenschutz et al. 2015).

It makes people more open to experience, which for some will be a door to madness.

Definitely agree that psychedelic use isn't for everyone.

Note, however, that the Openness result didn't replicate. Here's more detail on one failure to replicate:

(More research needed, as always.)

Does climate change deserve more attention within EA?

Another consideration comes to mind: climate change is currently taking up a large amount of attention from competent altruistic people. If the issue were to be solved or its urgency reduced, some of those resources might flow into EA causes.

Derek: "[AI safety] is currently taking up a large amount of attention from competent altruistic people. If the issue were to be solved or its urgency reduced, some of those resources might flow into [climate change mitigation]" So hurry up, Toon ;)
EA Hotel Fundraiser 4: Concrete outputs after 10 months

fwiw, I personally give it >75% probability that we will be able to survive at least until next round

Long-Term Future Fund: April 2019 grant recommendations

Am certainly open to considering this business model for the hotel.

Long-Term Future Fund: April 2019 grant recommendations

The hotel did apply.

The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

It's about $7500 per person per year

Long-Term Future Fund: April 2019 grant recommendations

As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.

Negative feedback isn't inherently painful. This is only true if there is an alief that failure is not acceptable. Of course the truth is that failure is necessary for progress, and if you truly understand this, negative feedback feels good. Even if it's in bad faith.

Given that grantmakers are ess

... (read more)

I think on a post with 100+ comments the quality of decisions is more likely to be bottlenecked by the quality of feedback than the quantity. Being able to explain why you think something is a bad idea usually results in higher quality feedback, which I think will result in better decisions than just getting a lot of quick intuition-based feedback.

When should EAs allocate funding randomly? An inconclusive literature review.
For this specific post, I probably won't add a summary because my guess is that in this specific case the size of the beneficial effect doesn't justify the cost.

I still think you should write it. This looks like an important bit of information, but not worth the full read, and I estimate a summary would increase the number of readers fivefold.

The Case for the EA Hotel

I wrote that intense model, and I agree that it's not a good post. My apologies.

To be clear, I'm quite glad you attempted the model and I agree there's no need to apologize for it.

I'd like to push back slightly against the notion of "apologizing" for writing something that others found hard to understand. The EA Forum should be a place to try out different kinds of content, and even if some experiments don't work out, it's generally good that experiments happen.

(That said, if you're feeling apologetic, there's also no problem with apologizing! I just want others who see this to know that it's okay when a post doesn't work out.)

Milan_Griffes: No worries :-)
The Case for the EA Hotel

I imagine EAs getting into all sorts of fields and industries while staying in the community, and this seems so valuable that it makes me second-guess the hotel.

People don't stay in the community because, if you're not involved professionally, there's not much left to gain. We should change that.

I've proposed a solution to this problem here and here

Peter Wildeford: I do worry that the EA Hotel gives people too easy of an excuse to not have to do the hard thing of finding something outside of EA. This kind of phenomenon came up in the comments of the widely acclaimed "getting a job in EA is hard" post.
$100 Prize to Best Argument Against Donating to the EA Hotel
I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don't predict success. Maybe this could become part of the EA Hotel's mission as well.

Good idea. It will be somewhat tricky since we don't have the luxury of measuring success in monetary terms, but we should certainly brainstorm about this at some point.

$100 Prize to Best Argument Against Donating to the EA Hotel

Thank you.

With the hotel, I see a bunch of little hints that it's not worth my time to attempt an in-depth evaluation of the hotel's leaders. E.g. the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

Your post suggests that there is some class of EAs that is a lot more competent than everyone else, which means that what everyone else is doing doesn't matter all that much. While I haven't met ... (read more)

PeterMcCluskey (3y), quoting "which means that what everyone else is doing doesn't matter all that much": Earning to give still matters a moderate amount. That's mostly what I'm doing. I'm saying that the average EA should start with the outside view that they can't do better than earning to give, and then attempt some more difficult analysis to figure out how they compare to average. And it's presumably possible to matter more than the average earning-to-give EA, by devoting above-average thought to vetting new charities.
The Case for the EA Hotel

I have burned out slightly, but this has happened every 6 months or so for the past 5 years, so it's probably not caused by the hotel.

Altruistic action is dispassionate

At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives. I find that counting myself and my needs into my moral framework makes my moral framework more robust.

Altruistic action is dispassionate

I'd argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you're just one agent, but that's certainly not a human universal.

Matthew_Barnett (3y): Yeah, there are many possible ways to frame this. I like the idea of a coherent agent, but that might just be the part of me capable of putting verbal thoughts on a forum page. In any case, over time I've experienced a shift from viewing preferences as different types which compete, to viewing preferences as all existing together in one coherent thread. Of course, my introspection is not perfect, but this is how I feel when I look inward to find what I really want. I do not claim that this is what other people feel. However, to the extent which I find the idea pleasing, I certainly would like it if people shared my view.
EA Hotel Fundraiser 4: Concrete outputs after 10 months

I would rather not. This would pressure people into goodharting their projects for legibility, which is one of the things our setup is supposed to prevent.

(tldr: an agent is legible if a principal can easily monitor them, but it limits their options to what is easy for the principal to measure, which might reduce performance)

Quite a few of our guests are not even on this list, but this doesn't mean they're sitting around doing nothing all day. They're doing illegible work that is hard or even impossible to evaluate at a distance. I put a few... (read more)

Altruistic action is dispassionate

I realise that I've been implicitly assuming this is true, which made me resist optimizing for impressions. Doing that, I could no longer convince myself that I was acting altruistically. The awful and hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.

Milan_Griffes (3y): For sure. I think there's a complicated relationship between altruistic & egotistic motivations. Oftentimes you can have a larger post hoc positive impact by acting egotistically (because this increases your reputation, your deployable capital, and/or other relevant resources). So the egotistic motivation seems super important! I'm just pointing out that I've found it helpful to get more internal clarity on when I'm acting out of self-interest versus when I'm acting altruistically.
Why is the EA Hotel having trouble fundraising?
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long ways towards resolving this issue.

Not yet, but it's certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.

Why is the EA Hotel having trouble fundraising?

I do think it affects their behavior, I just refuse to let it affect mine more than is strictly necessary, because I think it's a negative sum game.

Why is the EA Hotel having trouble fundraising?

Strong upvoted, and thank you, because finally someone is honest about their doubts. You're as critical in your speech as you are in your thoughts. This should be standard, but it's rare.

projects that seem pretty tragic like “writing a novel on AI alignment” and “writing a mobile game” - it’s a difficult balance here, unoccupied rooms are doing nothing for the hotel but equally I doubt indulging these sorts of things are valuable

This is what I understand to be hits-based giving. If you have 20 rooms, you can make these kinds of weird gambles, and... (read more)

Khorton (3y): EAs are super into status. I'm really surprised you don't think it affects their behaviour at all.
$100 Prize to Best Argument Against Donating to the EA Hotel
I'm not at all convinced that the counterfactual would be working on their problems in solitude.

I wouldn't be convinced either, but we interviewed our guests and 15 out of 20 were already doing the same work before taking up residence at the hotel. They were either working part-time or burning through runway.
