This is correct. We ended up needing to resolve a couple of issues related to the inquiry before we could file. We’ve stayed in touch with the Charity Commission about the delay.
I wanted to make some additional non-EA recommendations but don't want to blow up the comments section with non-EA stuff, so here's a thread for people to do that.
Additional restaurants I'd recommend:
Thanks for sharing. I think it was brave and I appreciated getting to read this. I'm sorry you've had to go through this and am glad to hear you're feeling optimistic.
This seems like an improvement to me. Thanks!
Feedback on a minor pain point for me. When I'm looking at quick takes on the front page and want to go to the permalink for the relevant take (e.g. to see all the discussion under it), I often look around for a big title to click on for a while before remembering that I'm supposed to click on the icon on the top right, which is small, doesn't stand out much, and feels to me like it's somehow violating an implicit expectation I have about where to find this for this kind of content.
I have no clue whether this is...
I think this was a cool post and I'm excited to see this kind of discussion here. (I think it misses a bunch of advantages of small orgs, but it seems fine to have a post that's mostly about the disadvantages. Unfortunately I don't have time to write out my object-level thoughts here; I just wanted to be clear that this comment is a "like," not a "(fully) agree.")
New grads are hired at L3, and almost everyone makes it to L4, typically within 2-3y. Most of them get to L5, typically 3-5y after that. L5 is a fine place to stay, and getting promoted above that is harder and is something most people don't do. I was hired at L3 and got promoted to L6 after about 9y.
Looking at levels.fyi I see average total comp of:
I think this is across the whole US, though, and while I can't get it to show me Bay Area numbers right now, my memory is they are about 30% higher?
But seems like I sh...
I wonder if Jack would be equally happy with the weaker claim that giving 10% is not advisable for the median American in their twenties. I'm not sure whether I'd agree even with that, but it seems more plausible to me than claiming it's not feasible.
And giving 10% could be not advisable (in the sense that it may not be the best possible use of the median twenty-something's funds) but still superior to their counterfactual use of the funds.
Hey Bob - Howie from EV UK here. Thanks for flagging this! I definitely see why this would look concerning so I just wanted to quickly chime in and let you/others know that we’ve already gotten in touch with relevant regulators about this and I don’t think there’s much to worry about here.
The thing going on is that EV UK has an extended filing deadline (from 30 April to 30 June 2023) for our audited accounts,[1] which are one of the things included in our Annual Return. So back in April, we notified the Charity Commission that we’ll be filing our Annu...
[Only a weak recommendation.] I last looked at this >5 years ago and never read the whole thing. But FYI that Katja Grace wrote a case study on the Asilomar Conference on Recombinant DNA, which established a bunch of voluntary guidelines that have been influential in biotech. Includes analogy to AI safety. (No need to pay me.) https://intelligence.org/files/TheAsilomarConference.pdf
Hi, thanks for raising these questions. I wanted to confirm that Effective Ventures has seen this and is looking into it. We take our legal obligations seriously and have started an internal review to make sure we know the relevant facts.
"In 1993, he obtained a bachelor's degree in radio from Emerson College in Boston,[4] where one of his professors was the writer David Foster Wallace"
https://en.wikipedia.org/wiki/Bill_Burr
Yes — since the first week of the crisis, Nick and Will have been recused from the relevant discussions / decisions on the boards of both EV entities to avoid any potential conflict of interest. Staff in both EV entities were informed about that decision in mid-November.
The 80k podcast also has some potentially relevant episodes though they're prob not directly what you most want.
My guess is that Part II, trajectory changes will have a bunch of relevant stuff. Maybe also a bit of part 5. But unfortunately I don't remember too clearly.
[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.
Others, most of which I haven't fully read and not always fully on topic:
Much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook).
I'd recommend to some EAs who are already struggling with these feelings (and know some who've really benefitted from it). (It's not precisely aimed at this but I think it can be repurposed for a subset of people.)
Wouldn't recommend to students recently exposed to EA who are worried about these feelings in future.
If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).
I like the way some of Joe Carlsmith's essays touch on this.
FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.
...
Explicit P(doom) = 5-10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than the median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these
I think the people responsible for EA Global admissions (including Amy Labenz, Eli Nathan, and others) have added a bunch of value to me over the years by making it more likely that a conversation or meeting with somebody at EA Global who I don’t already know will end up being productive. Making admissions decisions at EAG (and being the public face of an exclusive admissions policy) sounds like a really thankless job and I know a bunch of the people involved end up having to make decisions that make them pretty sad because they think it’s best for the wor...
I'm curious whether there's any answer AI experts could have given that would be a reasonably big update for you.
For example is there any level of consensus against ~AGI by 2070 (or some other date) that would be strong enough to move your forecast by 10 percentage points?
Good question. I think AI researchers' views inform/can inform me. A few examples from the recent NLP Community Metasurvey; I'll quote bits from this summary.
Few scaling maximalists: 17% agreed that "Given resources (i.e., compute and data) that could come to exist this century, scaled-up implementations of established existing techniques will be sufficient to practically solve any important real-world problem or application in NLP."
This was surprising and updated me somewhat against shorter timelines (and higher risk) since, for example, it clashes with ...
I definitely agree that takeaway would be a mistake. I think my view is more like: "if the specifics of what MT says on a particular topic don't feel like they really fit your organisation, you should not feel bound to them, especially if you're a small organisation with an unusual culture or if their advice seems to clash with conventional wisdom from other sources, especially in Silicon Valley."
I'd endorse their book as useful for managers at any org. A lot of the basic takeaways (especially having consistent one on ones) seem pretty robust and it would be surprising if you shouldn't do them at all.
Agree with a lot of this post. I lived in DC from 2008-2010 and various short periods before and after and overall I liked it (though I'd probably like it a bit less today and expect a lot of EAs to like it less than I did).
The features of DC that most affected me:

- DC felt like a company town. This had advantages. I liked having tons of friends who were think tank analysts or worked on the Hill and were trying to change the world (though I suspect polarization has made the vibe a bit worse). It also had disadvantages. Relative to NYC (which I knew best at...
"I don't think they would put out material that fails to apply to them."
I think we mostly agree but I don't think that's necessarily true. My impression is that they mainly study what's useful to their clients and, from what I can glean from their book, those clients are mostly big and corporate. I think small, high-trust orgs might fall outside their main target audience.
+1 to Paul Graham's essays.
[Unfortunately didn't have time to read this whole post but thought it was worth chiming in with a narrow point.]
I like Manager Tools and have recommended it but my impression is that some of their advice is better optimized for big, somewhat corporate organizations than for small startups and small nonprofits with an unusual amount of trust among staff. I'd usually recommend somebody pair MT with a source of advice targeted at startups (e.g. CEO Within though the topics only partially overlap) so you know when the advice differs and can pick between them.
Just making sure you saw Eli Nathan's comment saying that this year plus next year they didn't/won't hit venue capacity, so you're not taking anybody's spot.
tl;dr I wouldn't put too much weight on my tweet saying I think I probably wouldn't be working on x-risk if I knew the world would end in 1,000 years, and I don't think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.
***
Nice post. I agree with the overall message as well as much of Ben's comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk v. GCRs that are un...
I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don't currently seem to be overprioritized. I don't think there are all that many people working full-time on theoretical AIS (I would have guessed fewer than 20). I'd guess less than 1 FTE on infinite ethics. And not a ton on rationality, either.
Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER's work seems less theoretical. But you might still think there's...
Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.
Also, I think even people like this who haven't gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess.
Thanks for writing this post. I think it improved my understanding of this phenomenon and I've recommended reading it to others.
Hopefully this doesn't feel nitpicky but if you'd be up for sharing, I'd be pretty interested in roughly how many people you're thinking of:
"I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, t...
Before writing the post, I was maybe thinking of 3-5 people who have experienced different versions of this? And since posting I have heard from at least 3 more (depending how you count) who have long histories with EA but felt the post resonated with them.
So far the reactions I've got suggest that there are quite a lot of people who are more similar to me (still engage somewhat with EA, feel some distance but have a hard time articulating why). That might imply that this group is a larger proportion than the group that totally disengages... but the group that totally disengages wouldn't see an EA forum post, so I'm not sure :)
"My best guess is that I don't think we would have a strong connection to Hanson without Eliezer"
Fwiw, I found Eliezer through Robin Hanson.
Agree they have a bunch of very obnoxious business practices. Just FYI, you can change a setting so nobody can see whose pages you look at.
I think Open Philanthropy has done some of this. For example:
...The Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors.2 (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of exp
Was this in the deleted tweet? The tweet I see is just him tagging someone with an exclamation point. I don't really think it would be accurate to characterise that as "Torres supports the 'voluntary human extinction' movement"
Thanks for writing this post. I think it's really useful to distinguish the two types of deference and push the conversation toward the question of when to defer, as opposed to how good it is in general.
But I think "deferring to authority" is bad branding (as you worry about below) and I'm not sure your definition totally captures what you mean. I think it's probably worth changing even though I haven't come up with great alternatives.
Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexistin...
No worries! It takes a couple clicks to get there.