All of tcelferact's Comments + Replies

I think there's something epistemically off about allowing users to filter for only bad AI news. The first tag doesn't have that problem, but I'd still worry about missing important info. I prefer the approach of just requesting users be vigilant against the phenomenon I described.

I don't object to folks vocalizing their outrage. I'd be skeptical of 'outrage-only' posts, but I think people expressing their outrage while describing what they are doing and what they wish the reader to do would be in line with what I'm requesting here.

Your post more than meets my requested criteria, thank you!

I agree with this. Where there is a tradeoff, err on the side of truthfulness.

This seems aimed at regulators; I'd be more interested in a version for orgs like the CIA or NSA. 

Both those orgs seem to have a lot more flexibility than regulators to more or less do what they want when national security is an issue, and AI could plausibly become just that kind of issue. 

So 'policy ideas for the NSA/CIA' could be at once both more ambitious and more actionable.

2
Zach Stein-Perlman
1y
Interesting. Do you know of existing sources related to 'policy ideas for the NSA/CIA'? What can I read to learn about this?

I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, as that's my personal experience of AI researchers who don't care about alignment. But if my experiences don't generalize, I agree that more explanation is necessary.

I definitely think "that's just one final safety to rely on" applies to this suggestion. I hope we do a lot more than this!

The idea here is to prepare for an emergency stop if we are lucky enough to notice things going spectacularly wrong before it's too late. I don't think there's any hamstringing of well-intentioned people implied by that!

I agree that private docs and group chats are totally fine and normal. The bit that concerns me is 'discuss how to position themselves and how to hide their more controversial views or make them seem palatable', which seems a problematic thing for leaders to be doing in private. (Just to reiterate I have zero evidence for or against this happening though.)

5
RobBensinger
1y
I think it's good to discuss those topics internally at all, though I agree with you that EAs should generally stop hiding their controversial views (at least insofar as these are important for making decisions about EA-related topics), and I think we should be more cautious about optimizing for palatability (exactly because it can be hard to do this much without misleading people).

Thanks Arden! I should probably have said it explicitly in the post, but I have benefited a huge amount from the work you folks do, and although I obviously have criticisms, I think 80K's impact is highly net-positive.

5
Arden Koehler
1y
That's kind of you to say : )

I think you're correct that they aren't being dishonest, but I disagree that the discrepancy is because 'they're answering two different questions'. 

If 80K's opinion is that a Philosophy PhD is probably a bad idea for most people, I would still expect that to show up in the Global Priorities information. For example, I don't see any reason they couldn't write something like this:

In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy ... but the academic job mark

... (read more)

Upvoted. I think these are all fair points. 

I agree that 'utilitarian-flavoured' isn't an inherently bad answer from Ben. My internal reaction at the time, perhaps due to how the night had been marketed, was something like 'ah he doesn't want to scare me off if I'm a Kantian or something', and this probably wasn't a charitable interpretation.

On the Elon stuff, I agree that talking to Elon is not something that should require reporting. I think the shock for me was that I saw Will's tweet in August, which as wock agreed implied to me they didn't know e... (read more)

EAs and Musk have lots of connections/interactions -- e.g., Musk is thanked in the acknowledgments of Bostrom's 2014 book Superintelligence for providing feedback on the draft of the book. Musk attended FLI's Jan 2015 Puerto Rico conference. Tegmark apparently argues with Musk about AI a bunch at parties. Various Open Phil staff were on the board of OpenAI at the same time as Musk, before Musk's departure. Etc.

This reads (at least to me) as taking a softer line than the original piece, so there's not as much I disagree with, and quite a lot that's closer to my own thinking too. I might add more later, but this was already a useful exchange for me, so thanks again for writing and for the reply! I have upvoted (I upvoted the original also), and I hope you find your interactions on here constructive.

Edit: One thing that seems worth acknowledging: I agree there is a distinctive form of 'meta-' reflection that is required if you want to be meaningfully inclusive, and my... (read more)

Thanks for taking the time to write this up. I have a few reactions to reading it:

EA as a consequence of capitalism

I just want to call out that this in itself isn't a valid criticism of EA, any more than it would be a valid criticism of the social movements that you favour. But I suspect you agree with this, so let's move on.

EA as a form of capitalism

Simultaneously, EA is also a form of capitalism because it is founded on a need to maximize what a unit of resources like time, money, and labour can achieve

I think you've made a category error here. I hear yo... (read more)

5
Matthew_Doran
2y
Hi tcelferact,

Thank you for taking the time to engage so deeply with my essay. I apologise for the delay in replying. I’ve been on holiday since I posted, and unfortunately I’ve been unable to reply as fully as I wanted until now. I’ll offer some thoughts and responses I have after reading your valuable comments.

EA as a form of capitalism: I agree that EA does not try to hold onto the material resources that pass through its actors. I also agree that all social movements must accumulate extra-monetary forms of capital, such as knowledge, social capital and political buy-in. What I want to question, however, is how the EA movement processes its resources in ways that facilitate and mimic capitalism. You state that incentivising for public goods is the core problem under capitalism. I concur, but my argument is that EA makes it easier for their disincentivization because it tries to make the process of addressing externalities as efficient as possible, conducted privately. EA plugs the gaps caused by structural economic inequality (like unequal currency exchange or the lack of reparations for centuries of slavery), rather than centring these as the fundamental issues at stake. System-wide problems cannot be fixed overnight, and I agree there is a moral duty to alleviate suffering most efficiently. Yet, my concern is that EA becomes myopic because of its intense focus on the latter.

EA as a facilitator of capitalism: The major thrust of my piece is to argue that the aid movement is structurally embedded within capitalist priorities of the Global North, even if it aims to be as effective as possible within this paradigm. I do not argue that aid is being used to disingenuously manipulate public opinion or that EA is a better vehicle than any other for hoodwinking the public. Critical theory is not about conspiracy, but about providing tools to unpick the naturalization of power. Throughout my piece, I am also clear that we should never neglect people i
-4
Sharmake
2y
I actually think that the fact that they used critical theory in a non-moral context is very serious evidence that this article is a hit piece, and the claim that there is no objective evidence is a favorite claim of people whose beliefs wouldn't stand up to the objective evidence that doesn't favor their argument. Essentially, what this post has done is come in with an argument against EA and capitalism with the bottom line precomputed already, then deny that objective evidence can exist, probably because the evidence that is there doesn't support his thesis that capitalism is bad, and supports the opposite thesis that capitalism is good. It's a hit piece against EA. See these links for more details:

A note is that if we consider all sentient beings, the curve of welfare does turn severely negative for animals under capitalism thanks to factory farming and habitat destruction, and without massive change the conclusion that capitalism has harmed sentient life outside humanity would probably hold. I do think there will be massive change, but unfortunately this century may not change much.

https://pubs.aeaweb.org/doi/pdfplus/10.1257/089533003769204335
https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/

I worry about our implicit social structures sending the message "all the cool people hang around the centrally EA spaces"

I agree that I don't hear EAs explicitly stating this, but it might be a position that a lot of people are indirectly committed to. e.g. Perhaps a lot of the community have a high degree of confidence in existing cause prioritization and interventions and so don't see much reason to look elsewhere.

I like your proposed suggestions! I would just add a footnote that if we run into resistance trying to implement them, it could be useful to g... (read more)

Though I do recognize this response reads like me moving the goal posts....

Yep, I think this is my difficulty with your viewpoint. You argue that there's no way to predict future human discoveries, and if I give you counterexamples your response seems to be 'that's not what I mean by discovery'. I'm not convinced the 'discovery-like' concept you're trying to identify and make claims about is coherent.

Maybe a better example here would be the theory of relativity and the subsequent invention of nuclear weapons. I'm not a physicist, but I would guess the scie... (read more)

1
astupple
2y
Great point - Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also conforms with my point - Szilard was able to successfully predict because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle those discoveries). I think this also applies to superforecasters - they become like Szilard, learning of the relevant discoveries and then foreseeing the engineering steps.

Regarding sci-fi, Szilard appears to have been influenced by H.G. Wells's The World Set Free in 1913. But Wells was not just a writer - he was familiar with the state of atomic physics and therefore many of the relevant discoveries - he even dedicated the book to an atomic scientist. And Wells's "atomic bombs" were lumps of a radioactive substance that issued energy from a chain reaction, not a huge stretch from what was already known at the time. It's pretty incredible that Szilard is later credited with foreseeing nuclear chain reactions in 1933, shortly after the discovery of neutrons, and he was likely influenced by Wells.

So Wells is a great thinker, and this nicely illustrates how knowledge grows: by excellent guesses refined by criticism/experiment. But I don't think we are seeing knowledge of discoveries before they are discovered. Szilard's prediction in 1939 is a lot different than a similar prediction in 1839. Any statement about weapons in 1839 is like Thomas Malthus's predictions: made in a state of utter ignorance and unknowability about the eventual discoveries relevant to his forecast (nitrogen fixation and genetic modification of crops). And this is also the case with discoveries in the long term from now.

Objections to my post read to me like "but people have forecasted things shortly before they have appeared." True, but those forecasts already have much of the relevant discoveries alrea

This is as it must be with all human events

I think there are some straightforward counterexamples here:

  • Superforecasters. You mentioned Tetlock; the best forecasters in his study are consistently better than average at predicting geopolitical events. Those events are influenced by people.
  • Speech and language development in infants. There's a lot of discovery going on here, and we've spent enough time observing it that we can be pretty confident about how long it will take (link).
  • Periodic table. Maybe you are most interested in new discoveries. Mendeleev succ
... (read more)
1
astupple
2y
I have to look at Tetlock again - there's a difference between predicting what will be determined to be the cause of Arafat's death (historical, fact collecting) and predicting how new discoveries in the future will affect future politics. Nonetheless, I wouldn't be surprised that some people are better than others at predicting future events in human affairs. An example would be predicting that Moore's Law holds next year. In such a case, one could understand the engineering that is necessary to improve computer chips, perhaps understanding that production of a necessary component will halve in price next year based on new supplies being uncovered in some mine. This is more knowledge of slight modifications of current understanding (basically, engineering vs. basic science research). It's certainly important and impressive, but it's more refining existing knowledge rather than making new discoveries. Though I do recognize this response reads like me moving the goal posts....

Nice point about human development... I'm not sure how it relates. It seems to me this is biology playing out at a predictable pace. I'd bet that the elements of language development that are not dependent on biology vary greatly in their timelines, and the regularity that this research is discovering is almost purely biological. If we had the technology to do so, we could alter this biological development, and suddenly the old rules about milestones would fail. Put another way - reproducible experiments in psychology tell us about the physiology of the brain, but nothing about minds, because mental phenomena are not predictable.

The periodic table is a perfect example of what I'm talking about - Mendeleev discovered the periodicity, and then was able to predict features of the natural world (that certain chemical properties would conform to this theory). So, periodicity was the discovery, and fitting in the elements just conformed to the original discovery.

Here's another way to put my a

I had not noticed that those aren't the same, thank you for correcting me! And I agree that applying to it makes a lot more sense than applying to the incubation program.

On this particular point

message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public

I can't find info on Rethink's site; is there anything you can link to?

Of the three best-performing messages you've linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase 'ensure a good future' is a large part of what resonates.

All that said, more info on the tests they ran would obviousl... (read more)

I suspect getting more people with diverse experiences/ideas interested in helping is a good approach. Then just let them do their thing.

I wrote a short piece here basically trying to argue EA should do more to diversify its skillpool as others have 'unseen data' that could help tackle important problems: https://forum.effectivealtruism.org/posts/MpYPCq9dW8wovYpRY/ea-undervalues-unseen-data .

tl;dr: I think more people == more data && more data == better ideas.

Did you consider applying to Charity Entrepreneurship career coaching?

Yep, and I might still do that, but I suspect what I have in mind isn't a good fit for the reasons mentioned in the post.

Curious about what resource specifically you have in mind!

I think resources for family/best friends/employers of mentally ill folks are a neglected space. You have a group of people who are extremely incentivised to help (maybe employers less so), who have the opportunity for a high marginal impact, but who in my experience usually have no idea what they're doing.

I'm ... (read more)

2
Lorenzo Buonanno
2y
The career coaching seems different from the incubation program; as far as I can tell your points apply mostly to the latter, right?

Therefore, funders need to accept a high level of initial risk and be prepared to fund for some time before the highly effective label can be achieved.

Yep, I agree that this is the rub. There's been a lot of chat about megaprojects recently though (e.g. https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/), and building an ecosystem to fund high-risk, high-return projects of this sort could be a good candidate for that.

Data doesn't necessarily measure what's important to measure, so you need to be smart about harnessing data that is relevant to the problem you're solving. But to say that it never measures what's important to measure is straightforwardly false. For example, to believe that, you'd have to write off all of modern science as 'unimportant'.

1
Benjamin Start
2y
I agree. Data has different meanings and uses - priors are forms of data. Right now I see data primarily as a tool of persuasion. Its relevance varies across fields - data in psychology is very different from data in the physical sciences. Like you mentioned, its accuracy depends on the people creating and conducting the study. Modern science is dissatisfying to me, with persuasion being one of the problems I have with it. Even the commenting guidelines in this reply say "aim to explain, not persuade". While I would never write off all of modern science, one of the projects I'm working on is an alternative to academia. One of the goals is to use data in much more progressive ways than it is used now.

Conversely, people who have work/life balance can feel threatened by people who only care about effective altruism. If those people exist, does that mean you have to be one?

I experience a version of this. I think I'm very unlikely to feel fulfilled working on any high-priority issue without a clear work/life split, which makes me apprehensive of taking up a 'seat' that could have been taken by someone who'd have worked 80 hour weeks and vastly outperformed me.

I also have a softer concern about fitting in at companies that are mostly made up of dedicates: t... (read more)

5
AmritSidhu-Brar
2y
As a fellow non-dedicate, I like to discuss expectations around working hours in the "any questions" section of an interview anyway, since personally I wouldn't want to accept a job where they expect a lot more than a 40-hour week from me. That way, they also get this info about me to use in their decision, so I know if they make me an offer they think I'm the best candidate, having considered these factors. I think being open like this is probably the best way to treat this area of uncertainty (rather than not applying), since the employer will have the better overview of other candidates.

(EDIT: To be clear, I don't think it's necessary to raise this at this stage: the employer seems unlikely to assume that applicants will work more than a standard working week by default, since many people don't do that. And I don't think it makes sense for the burden to be on people who will only work a standard working week to raise that in the recruitment process. I just mean that if you're concerned about the effect of accepting a job where you'll perform less well because of sticking to standard hours, I think discussing it with the employer before accepting is a good way to handle that.)

I think that having people with a clear work/life split around can also be helpful. Partly since it helps make the culture more welcoming to other such people and, as Ozymandias argues, being open to non-dedicates is often helpful. But I also think the added diversity of perspectives can be helpful for everyone: for example it could help dedicates have a better work/life balance, in cases where they're too far towards the "work" end on pure-impact grounds. For example, they might not naturally think of ideas for work/life boundaries that, after they're raised, they would endorse on impact grounds. (I don't think it's clearly always better to add more non-dedicates to a work environment or anything, but I think there are considerations in both directions.)

(Views my own, not my emplo

I'm going to add some of this to my 'done' column, thanks for pointing it out.

Hi Yonatan, I actually got some 1:1 career advice from 80k recently, they were great! I'm also friends with someone in AI who's local to Montréal and who's trying to help me out. He works at MILA which has ties to a few universities in the city (that's kind of what inspired the speculative master's application). Thanks in advance for the referrals! 

Now, I've always been very sceptical of these arguments because they seem to rely on nothing but intuition and go against historical precedent

What historical precedent do you have in mind here? The reason my intuitions would initially go in the opposite direction is a case study like invasive species in Australia.

tl;dr is that when an ecosystem has evolved holding certain conditions constant (in this case geographical isolation) and those conditions change fairly rapidly, even a tiny change like a European rabbit can have negative consequences well beyond what was... (read more)

Thanks for your suggestions! Some answers:

1. Robust decision making. And yes, pretty much, I was thinking of the interpretations covered here: https://plato.stanford.edu/entries/probability-interpret.

2. I think formalizing this properly would be part of the task, but if we take the Impact, Neglectedness, Tractability framework, I'm roughly thinking of a decision-making framework that boosts the weight given to impact and lowers the weight given to tractability (see the sketch after this list for the rough shape of what I mean).

3. I was roughly thinking of an analysis of the approach used by exceptional participants in fore... (read more)
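A minimal sketch of the kind of re-weighting I have in mind, purely as an illustration: the function name, the log-scale factor scores, and the specific weights below are my own assumptions, not an established framework.

```python
# Hypothetical sketch: combine ITN factor scores with explicit weights,
# up-weighting impact and down-weighting tractability. All numbers are invented.

def weighted_itn_score(impact, neglectedness, tractability,
                       w_impact=2.0, w_neglect=1.0, w_tract=0.5):
    """Weighted sum of (log-scale) Impact, Neglectedness, Tractability scores."""
    return w_impact * impact + w_neglect * neglectedness + w_tract * tractability

# A speculative, hard-to-address cause can outrank a tractable but lower-impact
# one once impact carries more weight:
print(weighted_itn_score(impact=9, neglectedness=6, tractability=2))  # 25.0
print(weighted_itn_score(impact=5, neglectedness=4, tractability=8))  # 18.0
```

Under an unweighted sum both of the made-up causes above would score 17; the point of the re-weighting is just to make that kind of trade-off explicit rather than letting tractability dominate.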

Yes, this would also be useful, and thank you for the link!