All of calebp's Comments + Replies

I found this text particularly useful for working out what the program is.

When
 

  • Program Timeline: 1 December 2025 – 1 March 2026 (3 months)
  • Rolling applications begin 31 October 2025
  • If you are accepted prior to December 1st, you can get an early start with our content!
  • Extension options: You have the option to extend the program for yourself for up to two additional months – through to 1 June 2026. (We expect several cohort participants to opt for at least 1 month extensions)
[Image: program timeline]

Where

 

  • Remote, with all chats and content located on the Supercyc
... (read more)
4
Tee
Thank you! Yeah, I was trying to keep the post short. There's a table of contents right at the top of the linked page where you can find everything quickly.

I don't think there's a consensus on how the average young person should navigate the field


Yeah, that sounds right. I agree that people should have a vibe of "here is a take, it may work for some and not others - we're still figuring this out" when they are giving career advice (if they aren't already). Though I think I'd give that advice for most fieldbuilding, including AI safety, so maybe that's too low a bar.

I'm curious about whether other people who would consider themselves particularly well-informed on AI (or an "AI expert") found these results surprising. I only skimmed the post, but I asked Claude to generate some questions based on the post for me to predict[1] answers to, and I got a Brier score of 0.073,[2] so I did pretty well (or at least don't feel worried about being wildly out of touch). I'd guess that most people I work with would also do pretty well.
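For context, this is the standard (binary) Brier score being referred to; the notation is generic and not taken from the post or from Claude's questions:

$$
\text{Brier} = \frac{1}{N}\sum_{i=1}^{N}\left(p_i - o_i\right)^2,
$$

where $p_i$ is the forecast probability for question $i$ and $o_i \in \{0, 1\}$ is the outcome. Lower is better: always guessing 0.5 scores 0.25 and a perfect forecaster scores 0, so 0.073 corresponds to fairly confident, mostly correct predictions.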

  1. ^

    I didn't check the answers, but Claude does pretty well at this kind of thing.

  2. ^

    t

... (read more)

I'm not an expert in this space; @Grace B, who I've spoken to a bit about this, runs the AIxBio fellowship and probably has much better takes than I do. Fwiw, I think I have a different perspective to the post. 

My rough view is:
1. Historically, we have done a bad job at fieldbuilding in biosecurity (for nuanced reasons, but I guess that we made some bad calls).
2. As of a few months ago, we have started to do a much better job at fieldbuilding, e.g. the AIxBio fellowship that you mentioned is ~the first of its kind. The other fellowships you ... (read more)

3
rjain 🔹
You make some great points! For 4), I agree that the AI safety world has done a really good job field-building, both because of funders in and outside of the EA space, and I wonder how much of it truly is transferable to biosecurity/climate/other x-risk fields. Perhaps someone should write a piece on that. For 1 & 6), I don't mean to say that people are intentionally giving bad advice or that it's dishonest on purpose. However, when it comes to asking people for advice, I agree with 5), that it's hard for experienced folks to know exactly how to help young people without a ton of context. Regardless, I don't think there's a consensus on how the average young person should navigate the field. (And interesting idea about the 'career strategist'! I wonder who would be the best audience for this sort of coaching?)

In your opinion, how many weeks of notice before the event would capture 80% of the value of knowing two years in advance?

2
Simon Newstead 🔸
I think the peak of value could be around the 4-6 months mark. Some examples of events I was involved in:

Agri-Food Innovation Summit:
June - first public look at speakers / program published
Aug - full agenda and all speakers confirmed
Nov - event

Food Frontier:
May - speakers and program announced
Oct - event

Too far out and it's hard to know what your priorities might be, lock-in risk etc.

In general, I think many people who have the option to join Anthropic could do more altruistically ambitious things, but career decisions should factor in a bunch of information that observers have little access to (e.g. team fit, internal excitement/motivation, exit opportunities from new role ...).[1] Joe seems exceptionally thoughtful, altruistic, and earnest, and that makes me feel good about Joe's move.

I am very excited about posts grappling with career decisions involving AI companies, and would love to see more people write them. Thank you very... (read more)

I have lots of disagreements with the substance of this post, but at a more meta level, I think your post will be better received (and will be a more wholesome intellectual contribution) if you change the title to "reasons against donating to Lightcone Infrastructure", which doesn't imply that you are trying to give both sides a fair shot (though, to be clear, I think posts just representing one side are still valuable).

3
MikhailSamin
Reasonable! Thanks.

A lack of sufficiently strategic, dedicated, and ambitious altruists. Deference to authority figures in the EA community when people should be thinking more independently. Suboptimal status and funding allocation, etc.

2
Peter
Would be curious to hear more. I'm interested in doing more independent projects in the near future but am not sure how they'd be feasible. 

sorry - School for Moral Ambition

Quick, non-exhaustive list of places where a few strategic, dedicated, and ambitious altruists could make a significant dent within a year (because, rn, EA is significantly dropping the ball).

Improving the media, China stuff, increasing altruism, moral circle expansion, AI mass movement stuff, frontier AI lab insider coordination (within and among labs), politics in and outside the US, building up compute infrastructure outside the US, security stuff, EA/longtermist/School for Moral Ambition/other field building, getting more HNW people into EA, etc.

(List originally shared with me by a friend)

4
kuhanj
Will's list from his recent post has good candidates too:
* AI character[5]
* AI welfare / digital minds
* the economic and political rights of AIs
* AI-driven persuasion and epistemic disruption
* AI for better reasoning, decision-making and coordination
* the risk of (AI-enabled) human coups
* democracy preservation
* gradual disempowerment
* biorisk
* space governance
* s-risks
* macrostrategy
* meta
3
Peter
What do you think is causing the ball to be dropped?
2
Bella
What's "SMA" in this context?

I suggested the following question for Carl Shulman a few years ago

I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).

I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.

https://forum.effective... (read more)

4
Lukas_Gloor
Related LW thread: https://www.lesswrong.com/posts/XYYyzgyuRH5rFN64K/what-makes-people-intellectually-active

Yeah, I think we have a substantive disagreement. My impression before and after reading your list above is that you think that being convinced of longtermism is not very important for doing work that is stellar according to "longtermism", and that it's relatively easy to convince people that x-risk/AIS/whatever is important.

I agree with the literal claim, but think that empirically longtermists represent the bulk of people that concern themselves with thinking clearly about how wild the future could be. I don't think all longtermists do this, but longtermism e... (read more)

A few scattered points that make me think this post is directionally wrong, whilst also feeling meh about the forum competition and essays:

  • I agree that the essay competition doesn't seem to have surfaced many takes that I thought were particularly interesting or action-guiding, but I don't think that this is good evidence for "talking about longtermism not being important".
  • There are a lot of things that I would describe as "talking about longtermism" that seem important and massively underdiscussed (e.g. acausal trade and better futures-y things). I think yo
... (read more)
3
cb
Thanks for commenting! I've tried to spell out my position more clearly, so we can see if/where we disagree. I think:
* Most discussion of longtermism, on the level of generality/abstraction of "is longtermism true?", "does X moral viewpoint support longtermism?", "should longtermists care about cause area X?" is not particularly useful, and is currently oversupplied.
* Similarly, discussions on the level of abstraction of "acausal trade is a thing longtermists should think about" are rarely useful.
* I agree that concrete discussions aimed at "should we take action on X" are fairly useful. I'm a bit worried that anchoring too hard on longtermism lends itself to discussing philosophy, and especially discussing philosophy on the level of "what axiological claims are true", which I think is an unproductive frame. (And even if you're very interested in the philosophical "meat" of longtermism, I claim all the action is in "ok but how much should this affect our actions, and which actions?", which is mostly a question about the world and our epistemics, not about ethics.)
* "though I'd be like 50x more excited about Forethought + Redwood running a similar competition on things they think are important that are still very philosophy-ish/high level." —this is helpful to know! I would not be excited about this, so we disagree at least here :)
* "The track record of talking about longtermism seems very strong" —yeah, agree longtermism has had motivational force for many people, and also does strengthen the case for lots of e.g. AI safety work. I don't know how much weight to put on this; it seems kinda plausible to me that talking about longtermism might've alienated a bunch of less philosophy-inclined but still hardcore, kickass people who would've done useful altruistic work on AIS, etc. (Tbc, that's not my mainline guess; I just think it's more like 10-40% likely than e.g. 1-4%.)
* "I feel like this post is more about "is convincing people to be longermists import

Yeah, I also think hanging out in a no-1:1s area is weirdly low status/unexciting. I'd be a bit more excited about cause- or interest-specific areas like "talk about ambitious project ideas".

calebp
*48
10
18
1
1

(Weakly) Against 1:1 Fests

I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.

Ofc, it wouldn't be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so, to continue that long tradition, here is an anti-1:1s take.

EAGs are focused on 1:1s to a pretty extreme degree. It's common for my friends to have 10-15 thirty-minute 1:1s per day; at other conferences I've been to, it's generally more like 0-5. I woul... (read more)

3
Mick
I've "only" been to 2 EAGs and 4 EAGx's, so take this with that as context.

For previous EAGs I always booked my schedule full of 1-1's to ask people about their experience, resolve uncertainties, and just generally network with people in similar roles. This EAG (NYC 2025) I didn't find as many people on Swapcard that I wanted to talk to and received far fewer requests for 1-1s, so I ended up having just 7 1-1s in total. This was a fun experiment. I found it much more relaxed, and I enjoyed being able to have spontaneous conversations with people I ran into, but I think overall I got less value out of this EAG than if I had booked more meetings: I have fewer actionable insights and met fewer people than during other EAG(x) conferences I have attended. However, I'm definitely in favour of less 1-1 cramming.

I do think if this had been one of my first EAGs and I didn't know anyone, I would've been quite lost without the structure of the 1-1's and the explicit encouragement that it is normal to book a lot. I also feel weird about just joining a conversation in case it was people having a private 1-1. Having an improved spontaneous conversations area with bigger signs/cause-area-specific areas (or time slots?) sounds like a great solution for both of these problems.

Tangentially, my favourite meetups are also those where you just stand and mingle, ideally with specific areas in specific corners, rather than doing forced speed meets or roundtable discussions. This makes it much easier to leave if you don't like a conversation and move on to a different one until you find one you like.

My impression is EAGx Prague 22 managed to balance 1:1s with other content simply by not offering SwapCard 1:1 slots for part of the time, having a lot of spaces for small group conversations, and suggesting to attendees that they should aim for something like a balanced diet. (Turning off SwapCard slots does not prevent people from scheduling 1:1s, it just adds a little friction; empirically, it seems enough to prevent the mode where people just fill their time with 1:1s.)

As far as I understand this will most likely not happen, because weight given to / goodharting on met... (read more)

4
claerer
I completely agree and have tried to set up some small group meetings - you can have up to 24 people in one 1:1 meeting on Swapcard. This works especially well if you create a 1:1 with one person and then add others, rather than immediately creating one with 24 people, because the latter does not show you whether invitees are actually free at the planned time while the former does.

On the organiser side, it might be cool to move the cause area/work-specific meetups earlier in the conference.

(written v quickly, sorry for informal tone/etc)

i think that a happy medium is getting small-group conversations (that are useful, effective, etc) of size 3–4 people. this includes 1-1s, but the vibe of a Formal, Thirty Minute One on One is very different from floating through 10–15, 3–4-person conversations in a day, each lasting varying amounts of time.

  • much more information can flow with 3-4 ppl than with just 2 ppl
  • people can dip in and out of small conversations more than they can with 1-1s
  • more-organic time blocks means that particularly unhelp
... (read more)
5
Chris Leong
I've tried pushing for this without much success unfortunately. It really is a lot more effort to have spontaneous conversations when almost all pairs are a one-on-one and almost all people by themselves are waiting for a one-on-one. I've seen attempts to declare a space an area that's not for one-on-ones, but people have one-on-ones there anyway. Then again, organisers normally put up one or two small signs. Honestly, the only way to stop people having one-on-ones in the area for spontaneous conversation might be to have an absurd number of big and obvious signs.

I’m not sure I understood the last sentence. I personally think that a bunch of areas Will mentioned (democracy, persuasion, human + AI coups) are extremely important, and likely more useful on the margin than additional alignment/control/safety work for navigating the intelligence explosion. I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it. 

I also don't think that having higher odds of AI x-risk is a crux, though different "shapes" of intelligence ... (read more)

I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it. 

Sure, but the vibe I get from this post is that Will believes in that a lot less than me, and the reasons he cares about those things don't primarily route through the totalizing view of ASI's future impact.  Again, I could be wrong or confused about Will's beliefs here, but I have a hard time squaring the way this post is written with the idea that he intended to communicate that people should w... (read more)

the AI safety group was just way more exciting and serious and intellectually alive than the EA group — this is caricatured,


Was the AIS group led by people who had EA values or were significantly involved with EA?

5
cb
Yes, at least initially. (Though fwiw my takeaway from that was more like, "it's interesting that these people wanted to direct their energy towards AI safety community building and not EA CB; also, yay for EA for spreading lots of good ideas and promoting useful ways of looking at problems". This was in 2022, where I think almost everyone who thought about AI safety heard about it via EA/rationalism.)

I’m sure it was a misunderstanding, but fwiw, in the first paragraph, I do say “positive contributors” by which I meant people having a positive impact. 

I agree with some parts of your comment, though it's not particularly relevant to the thesis that most people with significant responsibility for most of the top-tier areas (according to my view on top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.

3
Tristan Katz
I think this may have been a misunderstanding, because I also misunderstood your comment at first. At first you refer simply to the people who play the biggest role in shaping AGI - but then later (and in this comment) you refer to people who contribute most to making AGI go well - a very important distinction!  
4
David T
Doesn't this depend on what you consider the "top tier areas for making AI go well" (which doesn't seem to be defined by the post)?

If that happens to be AI safety research institutes focused specifically on preventing "AI doom" via stuff you consider to be non-harmful, then naively I'd expect nearly all of them to be aligned with the movement focused on that priority, given that those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them based on the EA assumption that they're the top tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won't be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.

If you define it as "areas which have the most influence on how AI is built" then those are more the people @titotal was talking about, and yeah, they don't seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.

And if you define "safety" more broadly, there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80000 hours probably don't consider them top tier for effectiveness and (not coincidentally) I suspect these have very low proportions of EAs. Same goes for defence companies who've decided the "safest" approach to AI is to win the arms race.

Similarly, it's no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life who get their advice from Brutger don't become AI researchers at all, despite the similarities of their moral views.

I don’t think the opposite of (i) is true.

Imagine a strong fruit loopist, who believes there's an imperative to maximise total fruit loops.

If you are not a strong fruit loopist, there's no need to minimise total fruit loops; you can just have preferences that don't have much of an opinion on how many fruit loops should exist (i.e. everyone's position).

Maybe this is working for them, but I can’t help feeling icked by it, and it makes me lose a bit of faith in the project.

Plausibly useful feedback, but I think this is ~0 evidence for how much faith you should have in Blue Dot relative to factors like reach, content, funding, materials, testimonials, reputation, public writing, past work of team members... If I were doing a grant evaluation of Blue Dot, it seems highly unlikely that this would make it into the eval.

There's definitely some selection bias (I know a lot of EAs), but anecdotally, I feel that almost all the people who, in my view, are "top-tier positive contributors" to shaping AGI seem to exemplify EA-type values (though it's not necessarily their primary affinity group).

Some "make AGI go well influencers" who have commented or posted on the EA Forum and, in my view, are at the very least EA-adjacent include Rohin Shah, Neel Nanda, Buck Shlegeris, Ryan Greenblatt, Evan Hubinger, Oliver Habryka, Beth Barnes, Jaime Sevilla, Adam Gleave, Eliezer Yudkowsky, ... (read more)

I would say the main people "shaping AGI" are the people actually building models at frontier AI companies. It doesn't matter how aligned "AI safety" people are if they don't have a significant say on how AI gets built.

 I would not say that "almost all" of the people at top AI companies exemplify EA-style values. The most influential person in AI is Sam Altman, who has publicly split with EA after EA board members tried to fire him for being a serial liar. 

On a related note, I happened to be thinking about this a little today as I took a quick look at what ~18 past LTFF grantees who were given early career grants are doing now, and at least 14 of them are doing imo clearly relevant things for AIS/EA/GCR etc. I couldn't quickly work out what the other four were doing (though I could have just emailed them or spent more than 20 minutes total on this exercise).

For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch), though I don't think this should be much of an update for others, especially when thinking about the EA community more comprehensively.

For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)

Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.

Also, not all LTFF funding was used to make videos. Rob has started/supported a bunch of other field-building projects

We do fund a small amount of non-AI/bio work, so it seems bad to rule those areas out.

It could be worth bringing more attention to the breakdown of our public grants if the application distribution is very different to the funded one; I'll check internally next week to see if that's the case.

3
FJehn
I meant specifically mentioning that you don't really fund global catastrophic risk work on climate change, ecological collapse, near-Earth objects (e.g., asteroids, comets), nuclear weapons, and supervolcanic eruptions. Because to my knowledge such work has not been funded for several years now (please correct me if this is wrong). And as you mentioned that status quo will continue, I don't really see a reason to expect that the LTFF will start funding such work in the foreseeable future.  Thanks for wanting to check in if there is a difference between the public grants and the application distribution. Would be curious to hear the results. 
Answer by calebp
8
1
0

We evaluate grants in other longtermist areas, but you're correct that it's rare for us to fund things that aren't AI or bio (and biosecurity grants more recently have been relatively rare). We occasionally fund work in forecasting, macrostrategy, and fieldbuilding.

It’s possible that we’ll support a broader array of causes in the future but until we make an announcement I think the status quo of investigating a range of areas in longtermism and then funding the things that seem most promising to us (as represented by our public reporting) will persist.

3
FJehn
Thanks for the clarification. In that case I think it would be helpful to state on the website that the LTFF won't be funding non AI/biosecurity GCR work for the foreseeable future. Otherwise you will just attract applications which you would not fund anyway, which results in unnecessary effort for both applicants and reviewers.

Roles unlocking funds should ideally be paid more until the point where increasing earnings by 1 $ only increases funds by 1 $.


Do you think in real life that's a sensible expectation, or are you saying that's how you wish it worked?

2
Vasco Grilo🔸
Both. I do not have reasons to believe organisations are under or overspending on fundraising. Some organisations say they have a hard time finding people who are a good fit for fundraising (being "talent-constrained"), but I think this only means there are steep diminishing returns on spending more on fundraising by increasing the earnings of possible fundraising roles. It does not mean they are underspending on fundraising.

In general, I think it is sensible to at least have a prior expectation that the various activities on which an impact-focussed organisation can spend more money have similar marginal cost-effectiveness. Otherwise, they would be leaving impact on the table by not moving money from the least to the most cost-effective activities at the margin. At the same time, I expect to find inefficiencies after learning more.
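A minimal sketch of the condition being appealed to here (the notation is illustrative, not from the comment): if an organisation splits a budget $B$ across activities $x_1, \dots, x_n$ to maximise impact $I(x_1, \dots, x_n)$, then at an interior optimum

$$
\frac{\partial I}{\partial x_i} \;=\; \frac{\partial I}{\partial x_j} \quad \text{for all funded activities } i, j,
$$

i.e. marginal cost-effectiveness is equalised across activities; otherwise, moving a dollar from the lowest- to the highest-return activity would raise impact. The earlier claim about fundraising pay is the analogous stopping rule: raise pay for fund-unlocking roles until the last extra dollar of pay brings in only one extra dollar of funds.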

I think I follow and agree with the "spirit" of the reasoning, but don't think it's very cruxy. I don't have cached takes on what it implies for the people replying to the EA survey.

Some general confusions I have that make this exercise hard:
* not sure how predictive choice of org to work at is of choice of org to donate to; lots of people I know donate to the org they work at because they think it's the best, some donate to things they think are less impactful (at least on utilitarian grounds) than the place they work (e.g. see CEA giving season charity recs) ... (read more)

2
Vasco Grilo🔸
Thanks for the good points, Caleb. I am assuming people would donate to organisations which are more cost-effective than their own in expectation because donating to ones which are less cost-effective would decrease their impact. This still leaves open the possibility of people donating to their own organisation (or asking to earn less), but they selected this partly for personal fit reasons which do not apply to donations, so I would expect most unbiased people to think there are other organisations which are more cost-effective than their own. Roles unlocking funds should ideally be paid more until the point where increasing earnings by 1 $ only increases funds by 1 $.

But if they're really sort of at all different, then you should really want quite different people to work on quite different things.

 

I agree, but I don't know why you think people should move from direct work (or skill building) to e2g. Is the argument that the best things require very specialised labour, so on priors, more people should e2g (or raise capital in other ways) than do direct work?

I don't understand why this is relevant to the question of whether there are enough people doing e2g. Clearly there are many useful direct impact or skill building jobs that aren't at EA orgs, e.g. working as a congressional staffer.

I wouldn't find it surprising at all if most EAs are a good fit for good non-e2g roles. In fact, earning a lot of money is quite hard; I expect most people won't be a very good fit for it.

I think we're talking past each other when we say "EA job", but if you mean a job at an EA org, I'd agree there aren't enough roles for everyone... (read more)

This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g.


Idk I feel like you can get a decent sense of this from running hiring rounds with lots of work tests. I think many talented EAs are looking for EA jobs, but often it's a question of "fit" over just raw competence.

> My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, albeit my impression is that this is improving as EA organizations profess... (read more)

4
Jason
For the significant majority of EAs, does there exist an "EA job" that is a sufficiently good fit as to be superior to the individual's EtG alternative? To count, the job needs to be practically obtainable (e.g., the job is funded, the would-be worker can get it, the would-be worker does not have personal characteristics or situations that prevent them from accepting the job or doing it well). I would find it at least mildly surprising for the closeness of fit between the personal characteristics of the EA population and the jobs available to be that tight.[1]

1. ^ For most social movements, funding only allows a small percentage of the potentially-interested population to secure employment in the movement (such as clergy or other religious workers in a religious movement). So they do not face this sort of question. But I'd be skeptical that (e.g.) 85% of pretty religious people are well-suited to work as clergy or in other religious occupations.
1
mhendric🔸
I agree that we shouldn't use e2g as a shorthand for skillmaxing.

I am less optimistic about the 'fit' vs raw competence point. It's not clear to me that a good fit for the work position can easily be gleaned by work tests - a very competent person may be able to acquire that 'fit' within a few weeks on the job, for example, once they have more context for the kind of work the organization wants. So even if the candidates at the point of hiring looked very different, their comparison may differ unless we imagine both in an applied job context, having learned things they did not know at the time of hiring.

I am more broadly worried about 'fit' in EA hiring contexts, because as opposed to markers of raw competence, 'fit' provides a lot of flexibility for selecting traits that are relatively tangential to work performance and often unreliable. For example, value-fit might select for hiring likeminded folks who have read the same stuff the hiring manager has, and reduce epistemic diversity. A fit for similar research interests reduces epistemic diversity and locks in certain research agendas for a long time. A vibe-fit may select simply for friends and those who have internalized norms. A worktest that is on an explicitly EA project may select for those already more familiar with EA, even if it would be easy for an outsider candidate to pick up on basic EA knowledge quickly if they got the job.

My impression is that overall, EA does have a noticeable suboptimal tendency to hire likeminded folks and folks in overlapping social circles (i.e. friends; friends of friends). Insofar as 'fit' makes it easier to justify this tendency internally and externally, I worry that it will lead to suboptimal hiring. I acknowledge we may have very different kinds of 'fit' in mind here. I do think the examples I provide above do exist in EA hiring decisions.

I haven't done hiring rounds for EA, so I may be completely wrong - maybe your experience has been that after a few work
calebp
62
5
3
2
30% ➔ 50% disagree

The percentage of EAs earning to give is too low


(I wasn't going to comment, but rn I'm the only person who disagrees)

Some reasons against the current proportion of e2g'ers being too low:
* There aren't many salient examples of people doing direct work that I want to switch to e2g.
* Doing direct work gives you a lot more exposure to great giving opportunities.
* Many people doing direct work I know wouldn't earn dramatically more if they switched to e2g.
* Most people doing e2g aren't doing super ambitious e2g (e.g. earning putting themselves in a position to ... (read more)

3
Vasco Grilo🔸
Hi Caleb,

Donating 10 % more of one's gross earnings to an organisation 10 times as cost-effective as the organisation one could join is 10 (= 0.1*10/0.1) times as impactful as working there, if the alternative hire would be 10 % less impactful? If you agree, do you have any thoughts on what is implied by it, and the distribution of cost-effectiveness across the jobs of people replying to the EA Survey?
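Spelling out the arithmetic in the parenthetical above (the symbols are illustrative and not taken from the comment):

$$
\frac{\text{impact of donating}}{\text{impact of direct work}} \;=\; \frac{d \cdot r}{\Delta} \;=\; \frac{0.1 \times 10}{0.1} \;=\; 10,
$$

where $d$ is the fraction of gross earnings donated (10 %), $r$ is how many times more cost-effective the recipient organisation is than one's own (10), and $\Delta$ is the counterfactual gap, i.e. how much less impactful the alternative hire would have been (10 %).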
3
Ian Turner
I would argue that this work was highly net-negative, possibly so bad as to offset all the positive benefits of EA.
4
Nathan Young
It feels like if there were more money held by EAs some projects would be much easier:
* Lots of animal welfare lobbying
* Donating money to the developing world
* AI lobbying
* Paying people more for work trials
I don't know if there are some people who are much more suited to earning than to doing direct work. It seems to me they're quite similar skill sets. But if they're really sort of at all different, then you should really want quite different people to work on quite different things.

This is a cool list. I am unsure if this one is very useful:

* There aren't many salient examples of people doing direct work that I want to switch to e2g.
 

This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g. My understanding is that many extremely talented EAs are having trouble finding jobs within EA, and that many of them are capable of working at the quality that current EA employees do.

This reason I think bites both ways:

* E2g is often less well optimised for learning... (read more)

This is really cool. I suspect that you'd make it a lot easier to find users if you didn't need either side of the bet to be at all familiar with crypto. How hard would it be to accept Venmo/PayPal/bank transfers?

1
bob
I agree that crypto is not the most user-friendly option, but more traditional options would almost certainly increase the regulatory and administrative burden on arbiters.

On the regulatory side: the use of crypto allows arbiters to never have full custody over funds. The law obviously hasn't caught up with crypto yet, but we don't think anyone will argue arbiters passively appointed by bettors are hosting games of chance. If arbiters actually agree beforehand to receive funds from both parties, and they even have the option to run away with the funds, we expect regulators to give them a lot more scrutiny (and rightfully so).

On the administrative side: arbiters have no risk of assets being frozen because their payment processor mistakes them for an online casino, there's no risk of arbiters' personal finances and stakes commingling, etc.

We tried to keep the crypto aspect as simple as possible. We use a stablecoin (1 xDAI = 1 USD) that can also be used for the (negligible) transaction fees, and there is no need to set up allowances or other weird stuff. Still, we won't deny it's a hurdle.
calebp
2
1
0
60% agree

Depopulation is Bad


Though I don't think it's as big a deal as x-risk or factory farming. The main crux is probably the effect on factory farming, as is the case with many interventions that influence economic growth.

This is cool. It would be great if you could play around with it before making an account - I expect you'll lose a lot of potential users at the "make an account before trying" stage.

2
Jared Stivala
Thanks Caleb, I agree. Website has been updated so you no longer need auth

Fwiw I think one of the main barriers to this work is having good engineers who want to work on this. If you are a great engineer, consider working at Amodo - they are hiring!

Seems plausible that EA Funds should explore offering matches to larger projects that it wants to fund to help increase the project’s funding diversity.

Matching campaigns get a bad rep in EA circles,* but it's totally reasonable for a donor to be concerned that if they put lots of money into an area, other people won't donate; matching campaigns preserve the incentive for others to donate, crowding in funding.


* I agree that campaigns claiming you'll have twice the impact because your donation will be matched are misleading.

4
Ian Turner
Have you read Holden's classic on this topic? It sounds like you are describing what he calls "Influence matching".
4
Jason
It's understandable for a donor to have that concern. However, I submit that this goes both ways -- it's also reasonable for smaller donors to be concerned that the big donors will adjust their own funding levels to account for smaller donations, reducing the big donor's incentives to donate. It's not obvious to me which of these concerns predominates, although my starting assumption is that the big donors are more capable of adjusting than a large number of smaller donors. Much electronic ink has been spilled over the need for more diversification of funding control. Given that, I'd be hesitant to endorse anything that gives even more influence over funding levels to the entities that already have a lot of it. Unless paired with something else, I worry that embracing matching campaigns would worsen the problem of funding influence being too concentrated.
2
calebp
Seems plausible that EA Funds should explore offering matches to larger projects that it wants to fund to help increase the project’s funding diversity.

Thanks, this is a great response. I appreciate the time and effort you put into this.

I'm not sure it makes sense to isolate 2b and 3b here - 1a can also play a role in mitigating failure (and some combination of all three might be optimal).

I just isolated these because I thought that you were most interested in EA orgs improving on 2b/3b, but noted.

I'd be curious to see a specific fictional story of failure that you think is:
* realistic (e.g. you'd be willing to bet at unfavourable odds that something similar has happened in the last year)
* seems very bad (e.g. worth say 25%+ of the org's budget to fix)
* is handled well at more mature charities with better governance
* stems from things like 2b and 3b

I'm struggling to come up with examples that I find compelling, but I'm sure you've thought about this a lot more than I have.

5
Stephen Robcraft
A couple come to mind but, if you'll allow it, I would first respond to your prompt(s) with:
* I don't think there are loads of examples of organisations with better governance (boards are weird, after all) overall - I'd argue that EA norms and practices lead to better governance, relative to traditional nonprofits, in some respects and worse in others. Nonprofits could generally do governance better.
* I'm not sure it makes sense to isolate 2b and 3b here - 1a can also play a role in mitigating failure (and some combination of all three might be optimal)

The two stories that come to mind both seem realistic to me (I'd take the bet these have happened recently) but might not meet your bar for 'very bad'. However, I'd argue we can set the bar a bit higher (lower? depends how you look at it....) and aim for governance that mitigates against more mundane risks, providing the trade-off makes sense. I think it does.

Story 1 - A new-ish EA project/org has received 12 months of funding to do [something]. At the end of the 12 months, [something] has not been achieved but the money has been spent. In this story, the funder has accepted that they are making a bet, that there's some level of experimentation going on, that there are lots of uncertainties and assumptions etc. However, in this story, it was perfectly possible for [something] to be delivered, or for some equally impactful [something else] to be identified and delivered. Neither happened, but the team has spent most, if not all, of its funding and has just failed to deliver. They might have a compelling story about what they'll do next year and get more funding, they might not.

(1a) With a well-run Board of Trustees (made up of impartial, experienced, connected and credentialed people) overseeing the work of the less experienced project team and holding them to account, I think it's reasonable to imagine the team gets clearer, quicker about their objectives and how to deliver on these; more effectively moni

Time differences to a few other locations.

AI company employees, consider visiting New Zealand. The time zone looks workable, and I'm sure you won't be distracted.

1
emburrr
absolutely! as an early bird i achieve an almost full day overlap with PST :)

This is cool. Do you have a sense of the contractor rate range?

4
kierangreig🔸
Thanks, Caleb, glad to hear that! Contractor rates will depend on experience, project scope, and the nature of the work. As a rough guide: for experienced independent contributors, we typically expect rates in the range of $50–$150 USD per hour. For more junior contributors, the range is likely closer to $20–$50. That said, we’re open to a range of expectations, especially where there’s a strong fit.
calebp
14
0
0
13

Why would I listen to you? You don't even have an English degree.

calebp
34
14
8
4

In my opinion, one of the main things that EA / rationality / AI safety communities have going for them is that they’re extremely non-elitist about ideas. If you have a “good idea” and you write about it on one of the many public forums, it’s extremely likely to be read by someone very influential. And insofar as it’s actually a good idea, I think it’s quite likely to be taken up and implemented, without all the usual status games that might get in the way in other fields.

While I agree that it's not "elitist" in the sense that anyone can put forward ideas and be considered by significant people in the community (which I think is great!), I would say there are still some expectations that need to be met, in that the "good idea" generally must accept several commonly agreed-upon premises that represent what I'd call the "orthodoxy" of EA / rationality / AI safety.

For instance, I noticed way back when I first joined Less Wrong that the Orthogonality Thesis and Instrumental Convergence are more or less doctrines, and challenging the... (read more)

7
geoffrey
Agreed. I’d extend the claim past ideas and say that EA is very non-elitist (or at least better than most professional fields) at any point of evaluation.  Maybe because of that, it seems more elitist elsewhere. 

I might call that "meritocratic about ideas".

This is true, and I've appreciated it personally. I've been pleasantly surprised by how people have responded to a couple of things I've written, even when they didn't know me from a bar of soap. I think this was unlikely to happen in academia or in the highbrow public health world, where status games often prevail like you said.

There is still, though, an element of being "known" which helps your ideas get traction. This does make sense, as if someone has written something decent in the past, there's a higher chance that other things they write may also be decen... (read more)

I think from the inside they feel the same. Have you spoken to people who, in your view, have drifted? If so, how did they describe how it felt?

7
NickLaing
People I know who, in my opinion, have "drifted" (quite a lot of people) are generally unaware of what's happened, as it all happens so slowly and "normal" life takes over, or if they are aware they don't really want to talk about it much. My experience though is from community advocacy/social justice circles in my early 20s (I'm now 38), not from EA circles.

The flip side of "value drift" is that you might get to dramatically "better" values in a few years' time and regret locking yourself into a path where you're not able to fully capitalise on your improved values.

Every now and then I'm reminded of this comment from a few years ago: "One person's Value Drift is another person's Bayesian Updating"

I probably agree with this idea, but I wouldn't label it "value drift" myself.

From my perspective, I would call what you're describing more like keeping a scout mindset around our values, and trying to ever improve.

"Value drift" for me signals the negative process of switching off our moral radar and almost unconscious drifting towards the worlds norms of selfishness, status, blissful ignorance etc. Reversion towards the mean. Hence the the "drift". I'm not sure I've ever seen someone drift their way to better values. Within the church I have seen big d... (read more)

Unfortunately I feel that culturally these spaces (EEng/CE) are not very transmissible to EA-ideas and the boom in ML/AI has caused significant self-selection of people towards hotter topics.

Fwiw, I have some EEE background from undergrad and I spend some time doing fieldbuilding with this crowd. I think a lack of effort on outreach is more predictive of the lack of relevant people at, say, EAGs than AI risk messaging not landing well with this crowd.

I have updated upwards a bit on whistleblowers being able to make credible claims on IE. I do think that people in positions with whistleblowing potential should probably try and think concretely about what they should do, what they'd need to see to do it, who specifically they'd get in contact with, and what evidence might be compelling to them (and have a bunch of backup plans).

a. An intelligence explosion like you're describing doesn't seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point. 

I'm not exactly sure what you mean by discontinuous jump. I expect the usefulness of AI systems to be pretty "continuous" inside AI companies and "discontinuous" outside AI companies. If you think that:
1. model release cadence will s... (read more)

2
Ozzie Gooen
I understand that there are some reasons that companies might do this. On 1/2/3, I'm really unsure about the details of (2). If capabilities accelerate, but predictably and slowly, I assume this wouldn't feel very discontinuous. Also, there's a major difference between AIs getting better and them becoming more useful. Often there are diminishing returns to intelligence.

> I do think that more than one actor (e.g. 3 actors) may be trying to IE at the same time, but I'm not sure why this is implied by my post. I think my model isn't especially sensitive to single vs multiple competing IEs, but possible you're seeing something I'm not.

Sorry, I may have misunderstood that. But if there is only one or two potential actors, that does seem to make the situation far easier. Like, it could be fairly clear to many international actors that there are 1-2 firms that might be making major breakthroughs. In that case, we might just need to worry about policing these firms. This seems fairly possible to me (if we can be somewhat competent).

I'd expect that market caps of these companies would be far higher if it were clear that there would be less competition later, and I'd equivalently expect these companies to do (even more) R&D. I'm quite sure investors are quite nervous about the monopoly abilities of LLM companies. Right now, I don't think it's clear to anyone where OpenAI/Anthropic will really make money 5+ years from now. It seems like [slightly worse AIs] often are both cheap / open-source and good enough. I think that both companies are very promising, but just that the future market value is very unclear. I've heard that some of the Chinese strategy is, "Don't worry too much about being on the absolute frontier, because it's far cheaper to just copy from 1-2 steps behind."

I wasn't saying that "competition would greatly decrease the value of the marginal intelligence gain" in the sense of "things will get worse from where we are now", but in the sense of "t