Maybe you’re suspicious of this claim, but I think if you convinced me that JP working more hours was good on the margin, I could do some things to make it happen. Like have one Saturday a month be a workday, say. That wouldn’t involve doing broadly useful life-improvements.
On “fresh perspective”, I’m not actually that confident in the claim and don’t really want to defend it. I agree I usually take a while after a long vacation to get context back, which especially matters in programming. But I think (?) some of my best product ideas come after being a... (read more)
This is a good response.
A few notes on organizational culture — My feeling is some organizations should work really hard, and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35 hour weeks. That doesn’t mean that these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it, in the way the startup-y org would. The culture of these organizations can be gentler, and be a pla... (read more)
How hard should one work?
Some thoughts on optimal allocation for people who are selfless but nevertheless human.
Baseline: 40 hours a week.
Tiny brain: Work more, get more done.
Normal brain: Working more doesn’t really make you more productive, focus on working less to avoid burnout.
Bigger brain: Burnout’s not really caused by overwork; furthermore, when you work more you spend more time thinking about your work, crowding out other distractions that take away your limited attention.
Galaxy brain: Most EA work is creative work that benefits from:
The Andrew Critch interview is so far exactly what I’m looking for.
This all seems reasonable.
I was assuming that designing safe AI systems is more expensive than not doing so; suppose 10% more expensive. In a world with only a few top AI labs that are not yet ruthlessly optimized, they could probably be persuaded to sacrifice that 10%. But convincing a trillion-dollar company to sacrifice 10% of its budget requires a whole lot of public pressure. The bosses of those companies didn't get there without being very protective of 10% of their budgets.
You could challenge that, though. You could say that alignment is instrumentally useful for creating market value. I'm not sure what my position is on that, actually.
Thanks for your answer. (Just to check, I think you are a different Steve Byrnes than the one I met at Stanford EA in 2016 or so?)
I do want to emphasize that I don't doubt that technical AI safety work is one of the top priorities. It does seem like, within technical AI safety research, the best work is shifting away from Agent Foundations-type work and towards neural-net-specific work. It also seems like the technical problem gets easier in expectation if you have more than one shot. By contrast, I claim, many of the Moloch-style problems get harder.
I feel like your qualifying statement is only true of the last one?
I like this chain of reasoning. I’m trying to think of concrete examples, and it seems a bit hard to come up with clear ones, but I think this might just be a function of the bespoke-ness.
First off I want to say thanks for your Forum contributions, Tessa. I'm consistently upvoting your comments, and appreciate the Wiki contributions as well.
I'm pretty confident that information hazards are a plausibly important concern, but in these cases and others I tend to be at least strongly tempted by openness, which does seem to make it harder to advocate for responsible disclosure. "You should strongly consider selectively disclosing dangerous information; only, I think all of these contentious examples should be open."
I'm guessing you haven't seen it, so let me show off the new signup flow!
Hi Larks, thanks for taking the time to comment. I think your continuum comment is a good contribution to the considerations. I’m going to run with that metaphor, and talk about where I think we should fall. I take this seriously and want to get this right.
I’ve drawn three possible lines for what utility the Forum will get from its position on the continuum. Maybe it’s not actually useful, maybe I just like drawing things. I guess my main point is that we don’t have to figure out the entire space, just the local one:
Anyway, the story for the (locally) impe... (read more)
I think this is great. I especially like the discussion of propaganda, which feels like an important model.
Seems right. I doubt it was deliberate.
What happens if you log in in incognito? Do you have any of these settings set?
I can't reproduce this, can you tell me what browser you were using, what settings you have for the allposts page, and whether you can still see the issue?
This is over.
Temporary site update: I've taken down the allPosts page. It appears we have a bot hitting the page, and it's causing the site to be slow. While I investigate, I've simply taken the page down. My apologies for the inconvenience.
That’s a bug, thanks for reporting.
You don't need to use the allPosts page to get a list of all the posts. You can just ask the GraphQL API for the ids of all of them.
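A minimal sketch of what that query might look like, assuming the Forum's GraphQL endpoint lives at `/graphql` and that the `posts` resolver takes a `terms` input with `limit`/`offset` paging, as in the ForumMagnum codebase; both are assumptions worth checking against the live schema explorer before relying on this:

```python
# Sketch: fetching post ids via the Forum's GraphQL API.
# Endpoint URL and exact schema are assumptions, not confirmed.
import json
import urllib.request

GRAPHQL_ENDPOINT = "https://forum.effectivealtruism.org/graphql"

# Ask only for _id, paging with limit/offset so each request stays small.
QUERY = """
query AllPostIds($limit: Int, $offset: Int) {
  posts(input: {terms: {limit: $limit, offset: $offset}}) {
    results { _id }
  }
}
"""

def build_request(limit=50, offset=0):
    """Build the JSON payload for one page of post ids."""
    return json.dumps({
        "query": QUERY,
        "variables": {"limit": limit, "offset": offset},
    })

def fetch_page(limit=50, offset=0):
    """Perform the actual HTTP call (requires network access)."""
    req = urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=build_request(limit, offset).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [post["_id"] for post in body["data"]["posts"]["results"]]
```

To walk the whole post list, you'd call `fetch_page` in a loop, bumping `offset` by `limit` until a page comes back empty.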
You picked a good one here.
How many chicken years are affected per dollar spent on broiler and cage-free campaigns?
I estimate how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018. According to my estimate, for every dollar spent, 9 to 120 years of chicken life will be affected.
My impression is that cage-free campaigns have been very successful and there's now much less low-hanging fruit, such that I don't think it's reasonable to extrapolate those results forward on an ongoing basis.
This is now a thing
I turned this into a non-question post for you. (Aaron didn't know I could do that, because it's not a normal admin option.)
Thanks! That's very much the sort of thing that's helpful.
Those are some pretty compelling numbers, but I'd be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they're engaged with EA ideas in other ways, but now we're into territory where I'd feel like I'd need to do more vetting.)
Posting as an individual who is a consultant, not on behalf of my employer
Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).
First let me voice a +1 to everything Jeremy has said here already - with the possible exception that I know several McKinsey partners are interfacing with the EA movement on part... (read more)
Hi, one such consultant checking in! I had this post open from the moment I saw it in this week's EA Forum digest, but... I (like many other consultants) work a silly number of hours during the work week so just reading the post in detail now.
I'm a member of, but don't run, the EACN network and my take is it's a group of consultants interested in EA with highly varied degrees of familiarity / interest: from "oh, I think I've heard of GiveWell?" to "I'm only working here because... (read more)
Thanks Pablo and Joseph!
If you're a person who wants to learn this material but doesn't have an Anki habit, I'd recommend taking this as an opportunity to give it a go. Turn remembering things into a deliberate choice.
You can get started here.
This was really good.
I will absolutely study that deck.
And VaccinateCA was very impressive.
Any mistakes are the fault of Linch Zhang
:D Good line. I hope you snuck this in and Linch didn’t notice.
Thanks for writing this! I like the aptitudes framing.
With respect to software engineering, I would add that EA orgs hiring web developers have historically had a hard time getting the same level of engineering talent as can be found at EA-adjacent AI orgs.* I have a thesis that as the EA community scales, the demand for web developers building custom tools and collaboration platforms will grow as a percentage of direct work roles. With the existing difficulty in hiring and with most EAs not viewing web development as a direct work path, I expect the short... (read more)
I mostly agree, though I would add: spending a couple of years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push for working at a small, high-quality startup. Of course, the difficulty is that those are harder to identify.
Thanks for writing this post! I'm a fan of your work and am excited for this discussion.
Here's how I think about costs vs benefits:
I think an x-risk is at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but it currently looks very vulnerable to me.
I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that my impact on human progress would equal my chance of saving it. Obviously that needs some fleshing out — wh... (read more)
See also: Effective Altruism is an Ideology not (just) a Question.
Not endorsed by me, personally. I wouldn't call someone “not EA-aligned” for disagreeing with all of the worldview claims you made; what I really care about is whether someone is genuinely trying to answer the Question.
Sorry about the delay. I've fixed the issue and have reset the date of posting to now.
I fixed a bug that was causing this post to get underweighted by the frontpage algorithm, and have reset the date of posting to now, to correct for the period where it wouldn't have showed up on the frontpage.
Voting on edits recently entered the pipeline. In the meantime you can comment on the tag, which gives the author public recognition.
At least in software, there's a problem I see where young engineers are often overly bought-in to hype trains, but older engineers (on average) stick too much with technologies they already know.
I would imagine something similar in academia, where hot new theories are over-valued by the young, but older academics have the problem you describe.
I think of a difference between posts-that-are-motivating, and posts-about-motivation. I'd be sad if there wasn't a place to go for posts-that-are-motivating, that was mostly that thing.
This post probably qualifies, but I didn't love it. I'd pay out if you wrote a good one. But see note about my bar being high, I definitely don't want to make promises.
I think sometimes they can write into the donation various stipulations around how fast they sell it. If you were looking to avoid scrutiny, you might take advantage of that.
I'd be happy to keep this tag and the others, so that someone interested in the topic as a whole can subscribe to any new posts tagged Sentience & Consciousness.
Note: that tag is currently a wiki-only tag/wiki page, but could be turned into a proper tag if desired.
Reasonable because of the generality, though I think the cryptography ship has long, long since sailed.
Seems good. Maybe we should crosspost one of the recent articles on Sam Bankman-Fried.
Made worse by the fact that, at Pablo's request, I deduplicated the Longtermism (Philosophy) tag with the Longtermism wiki entry.