I don't object to folks vocalizing their outrage. I'd be skeptical of 'outrage-only' posts, but I think people expressing their outrage while describing what they are doing and what they wish the reader to do would be in line with what I'm requesting here.
This seems aimed at regulators; I'd be more interested in a version for orgs like the CIA or NSA.
Both those orgs seem to have a lot more flexibility than regulators to more or less do what they want when national security is an issue, and AI could plausibly become just that kind of issue.
So 'policy ideas for the NSA/CIA' could be at once more ambitious and more actionable.
I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, as that's my personal experience of AI researchers who don't care about alignment. But if my experiences don't generalize, I agree that more explanation is necessary.
I definitely think "that's just one final safety to rely on" applies to this suggestion. I hope we do a lot more than this!
The idea here is to prepare for an emergency stop if we are lucky enough to notice things going spectacularly wrong before it's too late. I don't think there's any hamstringing of well-intentioned people implied by that!
I agree that private docs and group chats are totally fine and normal. The bit that concerns me is 'discuss how to position themselves and how to hide their more controversial views or make them seem palatable', which seems a problematic thing for leaders to be doing in private. (Just to reiterate, I have zero evidence for or against this happening though.)
Thanks Arden! I should probably have said it explicitly in the post, but I have benefited a huge amount from the work you folks do, and although I obviously have criticisms, I think 80K's impact is highly net-positive.
I think you're correct that they aren't being dishonest, but I disagree that the discrepancy is because 'they're answering two different questions'.
If 80K's opinion is that a Philosophy PhD is probably a bad idea for most people, I would still expect that to show up in the Global Priorities information. For example, I don't see any reason they couldn't write something like this:
...In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy ... but the academic job market...
Upvoted. I think these are all fair points.
I agree that 'utilitarian-flavoured' isn't an inherently bad answer from Ben. My internal reaction at the time, perhaps due to how the night had been marketed, was something like 'ah he doesn't want to scare me off if I'm a Kantian or something', and this probably wasn't a charitable interpretation.
On the Elon stuff, I agree that talking to Elon is not something that should require reporting. I think the shock for me was that I saw Will's tweet in August, which, as wock agreed, implied to me they didn't know e...
EAs and Musk have lots of connections/interactions -- e.g., Musk is thanked in the acknowledgments of Bostrom's 2014 book Superintelligence for providing feedback on the draft of the book. Musk attended FLI's Jan 2015 Puerto Rico conference. Tegmark apparently argues with Musk about AI a bunch at parties. Various Open Phil staff were on the board of OpenAI at the same time as Musk, before Musk's departure. Etc.
This reads (at least to me) as taking a softer line than the original piece, so there's not as much I disagree with, and quite a lot that's closer to my own thinking too. I might add more later, but this was already a useful exchange for me, so thanks again for writing and for the reply! I have upvoted (I upvoted the original also), and I hope you find your interactions on here constructive.
Edit: One thing that seems worth acknowledging: I agree there is a distinctive form of 'meta-' reflection that is required if you want to be meaningfully inclusive, and my...
Thanks for taking the time to write this up. I have a few reactions to reading it:
I just want to call out that this in itself isn't a valid criticism of EA, any more than it would be a valid criticism of the social movements that you favour. But I suspect you agree with this, so let's move on.
Simultaneously, EA is also a form of capitalism because it is founded on a need to maximize what a unit of resources like time, money, and labour can achieve
I think you've made a category error here. I hear yo...
I worry about our implicit social structures sending the message "all the cool people hang around the centrally EA spaces"
I agree that I don't hear EAs explicitly stating this, but it might be a position that a lot of people are indirectly committed to. E.g., perhaps a lot of the community have a high degree of confidence in existing cause prioritization and interventions and so don't see much reason to look elsewhere.
I like your proposed suggestions! I would just add a footnote that if we run into resistance trying to implement them, it could be useful to g...
Though I do recognize this response reads like me moving the goalposts...
Yep, I think this is my difficulty with your viewpoint. You argue that there's no way to predict future human discoveries, and when I give you counterexamples, your response seems to be 'that's not what I mean by discovery'. I'm not convinced the 'discovery-like' concept you're trying to identify and make claims about is coherent.
Maybe a better example here would be the theory of relativity and the subsequent invention of nuclear weapons. I'm not a physicist, but I would guess the scie...
This is as it must be with all human events
I think there are some straightforward counterexamples here:
I had not noticed that those aren't the same, thank you for correcting me! And I agree that applying to it makes a lot more sense than applying to the incubation program.
On this particular point:
message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public
I can't find info on Rethink's site; is there anything you can link to?
Of the three best-performing messages you've linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase 'ensure a good future' is a large part of what resonates.
All that said, more info on the tests they ran would obviousl...
I suspect getting more people with diverse experiences/ideas interested in helping is a good approach. Then just let them do their thing.
I wrote a short piece here basically arguing that EA should do more to diversify its skill pool, as others have 'unseen data' that could help tackle important problems: https://forum.effectivealtruism.org/posts/MpYPCq9dW8wovYpRY/ea-undervalues-unseen-data .
tl;dr: I think more people == more data && more data == better ideas.
Did you consider applying to Charity Entrepreneurship career coaching?
Yep, and I might still do that, but I suspect what I have in mind isn't a good fit for the reasons mentioned in the post.
Curious about what resource specifically you have in mind!
I think resources for family/best friends/employers of mentally ill folks are a neglected space. You have a group of people who are extremely incentivised to help (maybe employers less so), who have the opportunity for a high marginal impact, but who in my experience usually have no idea what they're doing.
I'm ...
Therefore, funders need to accept a high level of initial risk and be prepared to fund for some time before the highly effective label can be achieved.
Yep, I agree that this is the rub. There's been a lot of chat about megaprojects recently though (e.g. https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/), and building an ecosystem to fund high-risk, high-return projects of this sort could be a good candidate for that.
Data doesn't necessarily measure what's important to measure, so you need to be smart about harnessing data that is relevant to the problem you're solving. But to say that it never measures what's important to measure is straightforwardly false. For example, to believe that, you'd have to write off all of modern science as 'unimportant'.
Conversely, people who have work/life balance can feel threatened by people who only care about effective altruism. If those people exist, does that mean you have to be one?
I experience a version of this. I think I'm very unlikely to feel fulfilled working on any high-priority issue without a clear work/life split, which makes me apprehensive about taking up a 'seat' that could have been taken by someone who'd have worked 80-hour weeks and vastly outperformed me.
I also have a softer concern about fitting in at companies that are mostly made up of dedicates: t...
Hi Yonatan, I actually got some 1:1 career advice from 80k recently, they were great! I'm also friends with someone in AI who's local to Montréal and who's trying to help me out. He works at MILA which has ties to a few universities in the city (that's kind of what inspired the speculative master's application). Thanks in advance for the referrals!
Now, I've always been very sceptical of these arguments because they seem to rely on nothing but intuition and go against historical precedent
What historical precedent do you have in mind here? The reason my intuitions would initially go in the opposite direction is a case study like invasive species in Australia.
tl;dr: when an ecosystem has evolved with certain conditions held constant (in this case geographical isolation) and those conditions change fairly rapidly, even a tiny change like the introduction of the European rabbit can have negative consequences well beyond what was...
Thanks for your suggestions! Some answers:
1. Robust decision making. And yes, pretty much, I was thinking of the interpretations covered here: https://plato.stanford.edu/entries/probability-interpret.
2. I think formalizing this properly would be part of the task, but if we take the Impact, Neglectedness, Tractability framework, I'm roughly thinking of a decision-making framework that boosts the weight given to impact and lowers the weight given to tractability (there's a toy sketch of what I mean after this list).
3. I was roughly thinking of an analysis of the approach used by exceptional participants in fore...
I think there's something epistemically off about allowing users to filter for only bad AI news. The first tag doesn't have that problem, but I'd still worry about missing important info. I prefer the approach of simply asking users to stay vigilant against the phenomenon I described.