All of Ozzie Gooen's Comments + Replies

Is it no longer hard to get a direct work job?

I think this is a major factor. From what I can tell, some people have a very easy time getting EA jobs, and some have a very hard time. This in itself isn't much information; we'd really need many more stats to get a better sense of things.

For what it's worth, I wouldn't read this as, "the people who have a hard time... are just bad candidates". It's more that EA needs some pretty specific things, and there are some sorts of people for whom it's been very difficult to find a position, even though some of these people are quite brilliant in many ways.

Should I go straight into EA community-building after graduation or do software engineering first?

but a piece of advice that I've heard is that career capital for these things is only useful for getting your foot in the door (e.g. getting a coding test), and then your actual performance (rather than your resume) is what ends up getting you/not getting you the job.

In my experience, most orgs are much more excited about bringing on "senior engineers" than "junior engineers", and often the only way to reach "senior engineer" performance is to work at a company to build those skills. It doesn't have to take long, though.

Should I go straight into EA community-building after graduation or do software engineering first?

I think I agree with Jack on the AI side.

If you can get a job at Anthropic, Redwood, or maybe OpenAI, DeepMind, or other AI companies, those might be better than most software startups.

The first 1-2 years of software engineering work can be amazing for the experience. It really can depend on what team you work with though. Some have great standards and help train junior engineers. Others don't, and won't teach decent practices. My intuition is that the startup you're considering is likely good (startups with lots of respect in tech circles, like Stripe, can ... (read more)

Pathways to impact for forecasting and evaluation

I think the QURI one is a good pass, though if I were to make it, I'd change a few details of course.

Pathways to impact for forecasting and evaluation

I looked over an earlier version of this, just wanted to post my takes publicly.[1]

I like making diagrams of impact, and these seem like the right things to model. Going through them, many of the pieces seem generally right to me. I agree with many of the details, and I think this process was useful for getting us (QURI, which is just the two of us now) on the same page.

At the same time though, I think it's surprisingly difficult to make these diagrams understandable to many people.

Things get messy quickly. The alternatives are to make them mu... (read more)

Opportunity Costs of Technical Talent: Intuition and (Simple) Implications

Thanks so much, that's really useful to know (it's really hard to tell if these metaphors are useful at all), and also makes me feel much better about this. :) 

Opportunity Costs of Technical Talent: Intuition and (Simple) Implications

Yea; Lightcone's compensation is much higher than that of any other group I knew of before. I was really surprised by their announcement.

I think it's highly unusual (this seems much higher than other and previous non-AI engineering roles I knew of).

I'd also be very surprised if Lightcone hired someone at $400,000 or more. My guess is that they'll be aiming for the sorts of people who aren't quite that expensive.

So, I think Lightcone is making a reasonable move here, but it's an unusual move. Also, if we thought that future EA/engineering projects would have to pay $200k-500k per engineer, I think that would change the way we think about them a lot.

Database of orgs relevant to longtermist/x-risk work

Yea, I was briefly familiar. 

I think it's still tough, and agree with Ben's comment here. 
https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for?commentId=ypo3SzDMPGkhF3GfP

But I think consultancy engineers could be a fit for maybe ~20-40% of EA software talent. 

Database of orgs relevant to longtermist/x-risk work

Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)

It really depends on how sophisticated the work is and how tied it is to existing systems.

For example, if you wanted to build tooling that would be useful to Google, it would probably be easier to just start a job at Google, where you can see everything and get used to the codebases, than to try to become a consultant for Google, where you'd ask for very narrow tasks that don't require you to be part of their confidential workflows and similar.

Yonatan Cale (10d): I agree I won't get everything. Still, I don't think Google is a good example. It is full of developers who have a culture of automating things, and even free time every week to do side projects. This is really extreme. A better example would be some organization that has 0 developers. If you ask someone in such an organization if there's anything they want to automate, or some repetitive task they're doing a lot, or an idea for an app (which is probably terrible but will indicate an underlying need) - things come up.

Database of orgs relevant to longtermist/x-risk work

I could see a space for software consultancies that work with EA orgs, that basically help build and maintain software for them. 

I'm not sure what you mean by SaaS in this case. If you only have 2-10 clients, it's sort of weird to have a standard SaaS business model. I was imagining more of the regular consultancy payment structure.

Yonatan Cale (11d): EA Software Consultancy: In case you don't know these posts: Part 1 [https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for], Part 2 [https://forum.effectivealtruism.org/posts/eBdPsvtaXCYbbH5kM/], Part 3 [https://forum.effectivealtruism.org/posts/4d7NsFjwn4Nf6XcC9/]

Database of orgs relevant to longtermist/x-risk work

I'll note:

  1. When you say "paid", do you mean full-time? I've found that "part-time" people often drop off very quickly. Full-time people would be the domain of 80,000 Hours, so I'd suggest working with them on this.
  2. "no place for orgs to surface such needs beyond posting a job" -> This is complicated. I think that software consultancy models could be neat, and of course, full-time software engineering jobs do happen. Both are a lot of work. I'm much less excited about volunteer-type arrangements, outside of being used to effectively help filter candidat
... (read more)
Yonatan Cale (11d):
  1. Developers who'd like to do EA work: not only full time.
  2. I'm talking about discovering needs here. I'm not talking at all about how the needs would be solved.
Working at an EA org to discover needs: this seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)
Guy Raveh (11d): Just throwing a thought: if many EA orgs have software needs and are struggling to employ people who'll solve them, and on the other hand part-time employees or volunteer directories don't help that much - would it make sense to start a SaaS org aimed at helping EA orgs?

Even More Ambitious Altruistic Tech Efforts

this post is not an attack on you or on your position

Thanks! I didn't mean to say it was, just was clarifying my position.

An EA VC which funds projects based mostly on expected impact might be a good idea to consider

Now that I think about it, the situation might be further along than you might expect. I think I've heard about small "EA-adjacent" VCs starting in the last few years.[1] There are definitely socially-good-focused VCs out there, like 50 Year VC.

Anthropic was recently funded for $124 Million as the first round. Dustin Moskovitz, Jaan Tall... (read more)

ShayBenMoshe (13d): That's great, thanks! I was aware of Anthropic, but not of the figures behind it. Unfortunately, my impression is that most funding for such projects is around AI safety or longtermism (as I hinted in the post...). I might be wrong about this though, and I will poke around these links and names. Relatedly, I would love to see OPP/EA Funds fund (at least a seed round or equivalent) such projects, unrelated to AI safety and longtermism, or hear their arguments against that.

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

I agree with (basically) all of this. I've been looking more into enterprise tools for QURI and have occasionally used some. As EA grows, enterprise tools make more sense for us.

I guess this seemed to me like a different topic, but I probably should have flagged this somewhere in this post.

On Guesstimate in particular, I'm very happy for other groups to use different tools (like Analytica, Causal, and probabilistic programming languages). Normally when I talk to people about this, I wind up recommending other options. All that said, I think there are some ar... (read more)

Even More Ambitious Altruistic Tech Efforts

I'm really happy to see this posted and to see more discussion on the topic.

However, I strongly disagree with him on what kinds of projects we should focus on. Most of his examples are of tools or infrastructure for other efforts. I think that as a community we should be even more ambitious - I think we should try to execute multiple tech-oriented R&D projects (not necessarily software-oriented) that can potentially have an unusually large direct impact.

This is a good point. My post had a specific frame in mind, of "Tech either for EAs or funded mo... (read more)

ShayBenMoshe (13d): Thanks for clarifying, Ozzie! (Just to be clear, this post is not an attack on you or on your position, both of which I highly appreciate :). Instead, I was trying to raise a related point, which seems extremely important to me and which I was thinking about recently, and to make sure the discussion doesn't converge to a single point.) With regards to the funding situation, I agree that many tech projects could be funded via traditional VCs, but some might not be, especially those that are not expected to be very financially rewarding, or are very risky (a few examples that come to mind are the research units of the HMOs in Israel, tech benefitting people in the developing world [e.g. Sella's teams at Google], and basic research enabling applications later [e.g. research on mental health]). An EA VC which funds projects based mostly on expected impact might be a good idea to consider!

What Are Your Software Needs?

Sigh... sorry;

This is a question post, but it's more specific than my post. It's asking groups what their needs are, which will result in different answers than the sorts of ideas I provided.

The ideas I gave weren't ones that were explicitly asked for. They were instead ones I've noticed and, having spent a while investigating, think would be good bets. Many are more technical/abstract than I'd expect people to understand, especially when thinking "what are my software needs?"

In my experience, this is one nice way of coming up with ideas, but it's de... (read more)

Yonatan Cale (13d): I agree, this is only an attempt to surface a subset of needs that (I'm guessing) don't currently have a good way to surface.

What Are Your Software Needs?

Maybe it could be its own post? Like, we write a Question post, and write all of the options as answers. We could do that after this one has been live for a few days, and include the top ideas in it.

Yonatan Cale (13d): I don't think I understood: 1. This is already a question post (thanks to Nathan for suggesting it). 2. Do you want to pick what to work on based on upvotes? I don't think I'd do it that way (or maybe I didn't understand you?)

Nathan Young (13d): Why not just put them here and allow a straight comparison? I prefer one list to two. Unless you dislike the framing of this question?

What Are Your Software Needs?

I'm happy with others doing it, but it's a whole lot of ideas, so it feels to me like it would get messy. Maybe there's some way to use a more formal survey or identify some other software solution.

I also would very much want others to suggest ideas. (Like in this post!) I wasn't trying to make any sort of definitive list, just a generative one.

Ozzie Gooen (13d): Maybe it could be its own post? Like, we write a Question post, and write all of the options as answers. We could do that after this one has been live for a few days, and include the top ideas in it.

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

I think it would be cool for someone to create an "engineering agenda" that entrepreneurial software developers could take ideas from and start working on, analogous

I think my hunch is that this is almost like asking for an “entrepreneur agenda”. There are really a ton of options.

I’m happy to see people list all the ideas they can come up with.

I imagine “agendas” would be easier to rigorously organize if you limit yourself to a more specific area. (So, I’d want to see many “research agendas” :) )

Possibly you are planning this for later posts in this s... (read more)
Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

Noted!

these projects generally seem to fail not because of software engineering but because of some non-technical thing

Agreed, though this seems mainly for getting them off the ground (making sure you find an important problem). Software startups also have this problem; and there's a lot of discussion & best practices about the right kinds of people & teams for software startups.

Ben_West (14d): Agreed. I think it would be cool for someone to create an "engineering agenda" that entrepreneurial software developers could take ideas from and start working on, analogous to e.g. this post [https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions] from Michael. (I think this would be one level of detail more specific than your project ideas listed in the OP. E.g. instead of "better data management", it's something like "organization X wants data Y displayed in way Z." Possibly you are planning this for later posts in this sequence already?)

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

Yea, I looked into it a bit. I'd imagine that an EA project here would begin by evaluating Mastodon more and seeing if we could either:

  • Use it as is
  • Fork it
  • Contribute to the codebase
  • Sponsor development to include features we particularly want

I would love to see it take off, for multiple reasons.

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

Good point. I don't have much Hacker News karma or Reddit status, but if anyone reading this does, that would be appreciated.

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

My impression is that there are more concrete software engineering projects in bio security and farmed animal welfare than in AI safety or EA meta; possibly it's worth including some of those in your document.

I could easily see this being the case, I'm just not in these fields, so have far fewer ideas. If anyone else reading this has any, do post!

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

I don't see a ton of projects where the most immediate bottleneck is software engineering. In most of the projects you list, it seems like there's substantial product management work needed before software engineering becomes the bottleneck

It seems like we might just be disagreeing on terminology? 

My main interest is in advancing software projects (I called them "Engineering Efforts", but this encompasses many skills), but I care much less about the specific order in which people are hired. 

That said, I don't feel like I really understand you. It'... (read more)

Ben_West (14d): Cool, I'm not sure we really disagree – the thing I want to flag is that these projects generally seem to fail not because of software engineering but because of some non-technical thing (e.g. they are not actually solving an important problem).

Improve delegation abilities today, delegate heavily tomorrow

For what it's worth, I think the best reason not to delegate is something like:
"Funding work is hard, and funders have limited time. If you can do some funding work yourself, that could basically contribute to the amount of funding work beign done." (This works in some cases, not for others)

> I'm nervous about a fund finding a new opportunity and suddenly leaving a charity with a large funding gap, crippling a very good charity.

I think that funding work is a lot more work than just making yearly yes/no statements that are in-isolation ideal. There's mor... (read more)

Improve delegation abilities today, delegate heavily tomorrow

Noted, thanks!

I was trying to explain a framework; the resulting strategy is something like:
1. Improve delegation abilities
2. Delegate more in the future, accordingly.

You're totally right I didn't get into how to do this. 

Do you (or others reading this) have ideas for what the post should have been called? The title was already sort of long; I wasn't sure about a good tradeoff (I tried a few variations, and didn't really like any of them).

It's too late to change now, but I can try to do better in the future

Simple comparison polling to create utility functions

I just wanted to give my take on some of this:

  • The web app is neat to experiment with the ideas and help us build intuitions.
  • That said, I think the key ideas (not the web app in particular), are the main insight here.
  • The current implementation is a solid first step, but I think we’re still a ways from having something that’s fun to use. My guess is that it will require some sophisticated UX / UI work to do a job that’s good enough for this to be useful in production. (If anyone reading this wants to try, let one of us know!)
  • I also think it’s important... (read more)
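
(As an aside for readers unfamiliar with the general idea: comparison polling asks for many pairwise value ratios and then fits a consistent set of utilities to them. Below is a minimal, hypothetical sketch of that idea, using a log-space least-squares fit over made-up project names; it is not the actual implementation behind the web app.)

```python
import numpy as np

# Hypothetical pairwise comparisons: (a, b, ratio) means
# "a seems roughly `ratio` times as valuable as b".
comparisons = [
    ("project_x", "project_y", 3.0),
    ("project_y", "project_z", 2.0),
    ("project_x", "project_z", 5.0),  # slightly inconsistent with the two above
]

items = sorted({name for a, b, _ in comparisons for name in (a, b)})
index = {name: k for k, name in enumerate(items)}

# Each comparison gives one linear equation in log-utilities:
#   log u_a - log u_b = log(ratio)
A = np.zeros((len(comparisons) + 1, len(items)))
y = np.zeros(len(comparisons) + 1)
for row, (a, b, ratio) in enumerate(comparisons):
    A[row, index[a]] = 1.0
    A[row, index[b]] = -1.0
    y[row] = np.log(ratio)

# Utilities from ratio data are only defined up to a scale factor,
# so pin one item's log-utility to 0 to get a unique solution.
A[-1, index[items[0]]] = 1.0

log_u, *_ = np.linalg.lstsq(A, y, rcond=None)
utilities = {name: float(np.exp(u)) for name, u in zip(items, log_u)}
print(utilities)  # relative utilities that best reconcile the comparisons
```
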
Improve delegation abilities today, delegate heavily tomorrow

Thanks!

You probably want to put more effort into making the suggested action easy and compelling if you want to get people to do something.

I'm interested in arguing/discussion for buy-in that our community should strive to eventually have strong, trustworthy, high-delegation groups. I'm not sure how amenable this is to straightforward actions right now.

Like with much of my more theoretical writing, I see it as a few steps before clear actions.

Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

Personally, I don't think this deserves that much discussion time. It's literally one word.

All that said, I'd note that I couldn't at all tell that it was humorous. The problem is that I just don't feel like I can model authors that well. I know that many authors, particularly junior ones, do make such titles genuinely (not jokingly), so I just assumed it was that way.

It really, really sucks, but I think public writing generally can't be subtle/clever in many of the ways we're used to with friends and colleagues. Our friends would pick up on things like this... (read more)

Improve delegation abilities today, delegate heavily tomorrow

I think I agree with everything there; but I'm unsure what you mean exactly when you say "opposite direction"

I was arguing that our abilities for delegation should improve. If we had better accountability/transparency/reasoning-at-what-level-is-optimal, that would help improve our abilities to do delegation well.

I wasn't arguing that we delegate more right now; but rather, that we should work on improving our abilities to delegate well, with the intention of delegating more later.
 
(I changed the title to try to make this more clear)

tamgent (17d): I found the change in title confusing, as there wasn't really any discussion of how to actually improve our delegation abilities in the post, and more just encouragement to delegate more. You mentioned some ways in this comment (accountability, transparency...) but they're not really unpacked in the main post. Would be interested in a discussion unpacking these and other ways.

Davidmanheim (22d): Yeah, I wasn't really disagreeing, just pointing out that the way to get there seems to be to improve proof-of-alignment - which is not really accountability, since that's used to talk about not cheating rather than "trying hard to do what they want".

Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

I think I agree with like 80% of this. But I think it should be flagged more that when many people try "engaging writing", they do end up with stuff that's really bad.

For example, the Copyblogger website seems to encourage classic clickbait headlines, like:

"Here’s why Netflix streaming quality has nosedived over the past few months"
"12 Of The Most Stunning Asian Landscapes. The Last One Blew Me Away."

I don't want to see stuff like that on the EA Forum. 

Similarly, I found the title of this post hyperbolic (you also call attention to this, but sev... (read more)

Kat Woods (17d): I've updated my title based on this feedback and others' reactions. You can read more here [https://forum.effectivealtruism.org/posts/dAbs7w4J4iNm89DjP/?commentId=a8Sub9wYrJpDkduEH]

I likewise mostly agreed with and appreciated the post, while also agreeing with Ozzie's caveat/pushback.

One additional counterpoint to this post that I'd add is "But engagingness is a symmetric weapon!" (I don't think that means we should avoid engagingness, but it feels worth noting.) To explain via a long Slate Star Codex quote:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys. In ideal conditions (which

... (read more)
Kat Woods (21d): I agree about clickbaity titles. I think CopyBlogger should be selectively applied. A lot of their advice isn't what I would promote. However, I do think overall most of their advice is quite good, such as spending a lot of time on headlines/first sentences, starting with why people should be interested, not burying the lead, etc.

Improve delegation abilities today, delegate heavily tomorrow

Thanks! I agree that corruption is a big problem for society at large. At the same time though, with some work, we can make sure that groups are not very corrupt. My intuition is that a great deal of competitive markets have very low corruption; I'd expect that Amazon runs pretty effectively, for instance. I think we can aim for similar levels in our charity delegation structures. It will take some monitoring/evaluation/transparency, but it definitely seems doable.

My impression is that many groups that complain about corruption actually do fairly little to a... (read more)

Khorton (20d): I also donate directly to charities I choose, looking at recommendations from GiveWell, rather than delegating to EA Funds / GiveWell.
Reasons for delegating:
  • better coordination
  • they might have better/more up-to-date empirical information about the kinds of charities that match my values
Reasons for not delegating:
  • money gets to recipient charity faster (monthly, not quarterly)
  • less bank fees
  • funds almost certainly won't match my values exactly
  • I can double-check their work (I still think some of the deworming assumptions are absolutely ludicrous)
Ambiguous:
  • I'm nervous about a fund finding a new opportunity and suddenly leaving a charity with a large funding gap, crippling a very good charity. Ideally this would be solved by the fund phasing over to the new opportunity slowly. In practice it can also be solved by individual donors taking a long time to move to new recommendations (or not moving).

Disagreeables and Assessors: Two Intellectual Archetypes

I'd note that I expect these clusters (and I suspect they're clusters) to be a minority of intellectuals. They stand out a fair bit to me, but they're unusual. 

I agree Bryan Caplan leans disagreeable, but is less intense than others. I found The Case Against Education and some of his other work purposefully edgy, which is disagreeable-type-stuff, but at the same time, I found his interviews to often be more reasonable. 

I would definitely see the "disagreeable" and "assessor" archetypes as a spectrum, and also think one person can have the perks of both.

Disagreeables and Assessors: Two Intellectual Archetypes

There seems to be a fine line between actually useful models of this kind which have some predictive power (or at least allow thoughts to be a bit tidier), and those that are merely peculiarly entertaining, like Myers-Briggs. And I find it hard to tell from the outside on which side of that line any given model falls. 

I have mixed feelings here. I think I'm more sympathetic to Myers-Briggs when used correctly, than other people. There definitely seems to be some signal that it categorizes (some professions are highly biased towards a narrow part of th... (read more)

CEA grew a lot in the past year

I'm quite happy to see the progress here. Kudos to everyone at CEA for having been able to scale it without major problems yet (that we know of). I think I've been pretty impressed by the growth of the community; intuitively I haven't noticed a big drop in average quality, which is obviously the thing to worry about with substantial community growth.

As I previously discussed in some related comment threads, CEA (and other EA organizations in general) scaling, seems quite positive to me. I prefer this to trying to get tons of tiny orgs, in large part because ... (read more)

Thanks! Some comments:

  • Yeah, I agree 2x is quite a lot! We grew more this year because I think we were catching up with demand for our projects. I expect more like 50% in the future.
  • Is there a strong management culture? I think there is: I've managed this set of managers for a long while, and we regularly meet to discuss management conundrums, so I think there's a shared culture. We also have shared values, and team retreats to sync up together. But each manager also has their own take, and I think that is leading to different approaches to e.g. project man... (read more)
Collective intelligence as infrastructure for reducing broad existential risks

Thanks so much for the summary, I just noticed this for some reason.

I'll keep an eye out.

It sounds a bit like CI is fairly scattered, doesn't have all too much existing work, and also isn't advancing particularly quickly as of now. (A journal sounds good, but there are lots of fairly boring journals, so I don't know what to make of this)

Maybe 1-5 years from now, or whenever there gets to be a good amount of literature that would excite EAs, there could be follow-up posts summarizing the work.

Disagreeables and Assessors: Two Intellectual Archetypes

Thanks for the comment (this could be its own post). This is a lot to get through, so I'll comment on some aspects.

I have disagreeable tendencies, working on it but biased

I have some too! I think there are times when I'm fairly sure my intuitions lean overconfident in a research project (due to selection effects, at least), but it doesn't seem worth debiasing, because I'm going to be doing it for a while no matter what, and not writing about its prioritization. I feel like I'm not a great example of a disagreeable or an assessor, but I sometimes can lean ... (read more)

Disagreeables and Assessors: Two Intellectual Archetypes

Good find, I didn't see that discussion before. 

For those curious: Scott makes the point that it's good to separate "idea generation" from "vetted ideas that aren't wrong", and that it's valuable to have spaces where people can suggest ideas without needing them to be right. I agree a lot with this.

I have this model where, in a healthy society, there can be contexts where people generate all sorts of false beliefs, but also sometimes generate gold (e.g. new ontologies that can vastly improve the collective map). If this context is generating a suffic... (read more)
Disagreeables and Assessors: Two Intellectual Archetypes

I think that Jobs, later on (after he re-joined Apple), was just a great manager. This meant he considered a whole lot of decisions and arguments, and generally made smart decisions upon reflection.

I think he (and other CEOs) are wildly inaccurate in how they portray themselves to the public. However, I think they can have great decision making in company-internal decisions. It's a weird, advantageous inconsistency.

This book goes into some detail:
https://www.amazon.com/Becoming-Steve-Jobs-Evolution-Visionary-ebook/dp/B00N6PCWY8/ref=sr_1_3?keywords=steve+jobs&qid=1636131865&rnid=2941120011&s=books&sr=1-3

Disagreeables and Assessors: Two Intellectual Archetypes

I like that naming setup. I considered using the word "evaluators", but decided against it because I've personally been using "evaluator" to mean something a bit distinct. 

Disagreeables and Assessors: Two Intellectual Archetypes

This clustering is based on anecdotal data; I wouldn't be too surprised if it were wrong. I'd be extremely curious for someone to do a cluster analysis and see if there are any real clusters here.

I feel like I've noticed a distinct cluster of generators who are disagreeable, and have a hard time thinking of many who are agreeable. Maybe you could give some examples that come to mind for you? Anders Sandberg comes to my mind, and maybe some futurists and religious people.

My hunch is that few top intellectuals (that I respect) would score in the 70... (read more)

Evan R. Murphy (22d): Albert Einstein also comes to mind as an agreeable generator. I haven't read his biography or anything, but based on the collage of stories I've heard about him, he never seemed like a very disagreeable person but obviously generated important new ideas.

Evan R. Murphy (22d): Dr. Greger from NutritionFacts.org also seems like an agreeable generator. Actually, he may be disagreeable in that he's not shy about pointing out flaws in studies and others' conceptions, but he does it in an enthusiastic, silly, and not particularly abrasive way. It's interesting that some people may still disagree often but not do it in a disagreeable manner.

willbradshaw (1mo): At the very least, I think we can be more confident in the generators/evaluators (or /assessors) dichotomy than in the further claim that the former tend to be disagreeable. I'm coming at this from science, where a lot of top generators have a strong "this is so cool!" sort of vibe to them – they have a thousand ideas and can't wait to try them out. Don't get me wrong, I think disagreeable generators play an important role in science too, but it's not my go-to image of a generator in that space.

[Wild speculation] It's plausible to me that this varies by field, based on the degree to which that field tends to strike out into new frontiers of knowledge vs generate new theories for things that are already well-studied. In the latter case, in order for new ideas to be useful, the previous work on the topic needs to be wrong in some way – and if the people who did the previous work are still around, they'll probably want to fight you. So if you want to propose really new ideas in those sorts of fields, you'll need to get into fights – and so generators in these fields will be disproportionately disagreeable. Whereas if everyone agrees that there are oodles of things in the field that are criminally understudied, you can potentially get quite a long way as a generator before you need to start knocking down other people's work.

Obviously, if this theory I just made up has any validity, it will be more of a spectrum than a binary. But this sort of dynamic might be at play here.

Simon_Grimm (1mo): Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile [https://us7.campaign-archive.com/?e=4c655d9232&u=9b65e8f8f700bd2ce8ffb9131&id=60b7e7d767]. I'd consider him a generator.

Prioritization Research for Advancing Wisdom and Intelligence

I agree there are ways for it to go wrong. There's clearly a lot of poorly thought-out stuff out there. Arguably, the motivations to create ML come from desires to accelerate "wisdom and intelligence", and… I don't really want to accelerate ML right now.

All that said, the risks of ignoring the area also seem substantial.

The clear solution is to give it a go, but to go sort of slowly, and with extra deliberation.

In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and some, occasionally even when done well). Now that I think about it, bio at least seems worse in this direction than “wisdom and intelligence”; it’s possible that AI is too.

Prioritization Research for Advancing Wisdom and Intelligence

One adjacent category which I think is helpful to consider explicitly (I think you have it implicit here) is 'well-informedness', which I motion is distinct from 'intelligence' or 'wisdom'.

That’s an interesting take.

When I was thinking about “wisdom”, I was assuming it would include the useful parts of “well-informedness”, or maybe, “knowledge”. I considered using other terms, like “wisdom and intelligence and knowledge”, but that got to be a bit much.

I agree it's still worth flagging that narrower notions like "well-informedness" are useful.

Prioritization Research for Advancing Wisdom and Intelligence

My guess is counterintuitive, but it is that these existing institutions, that are shown to have good leaders, should be increased in quality, using large amounts of funding if necessary.

I think I agree, though I can’t tell how much funding you have in mind.

Right now we have relatively few strong and trusted people, but lots of cash. Figuring out ways, even unusually extreme ways, of converting cash into either augmenting these people or getting more of them seems fairly straightforward to justify.

Prioritization Research for Advancing Wisdom and Intelligence

EAs have less of an advantage in this domain.

I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.

My guess is that prioritization could be more valuable for money than EA talent right now, because we just have so much money (in theory).

Charles He (1mo): Ok, this makes a lot of sense and I did not have this framing.

Low quality/low effort comment: For clarity, one way of doing this is how Open Phil makes grants: well-defined cause areas with good governance that hire extremely high-quality program officers with deep models/research who make high-EV investments. The outcome of this, weighted by dollar, has relatively few grants go to orgs "native to EA". I don't think you have to mimic the above; this may even be counterproductive and impractical.

The reason my mind went to a different model of funding was related to my impression/instinct/lizard brain when I saw your post. Part of the impression went like: There's a "very-online" feel to many of these interventions. For example, "Pre-AGI" and "Data infrastructure". "Pre-AGI" - so, like, you mean machine learning, like Google or someone's side hustle? This boils down to computers in general, since the median computer today uses data and can run ML trivially. When someone suggests neglected areas, but 1) it turns out to be a buzzy field, 2) there seem to be tortured phrases [https://www.nature.com/articles/d41586-021-02134-0], and 3) there's an association with money, I guess that something dumb or underhanded is going on. Like the grant maker is going to look for "pre-AGI" projects, walk past every mainstream machine learning or extant AI safety project, and then fund some curious project in the corner. 10 months later, we'll get an EA Forum post: "Why I'm concerned about Giving Wisdom".

The above story contains (several) slurs and is not really what I believed. I think it gives some texture to what some people might think when they see very exciting/trendy fields + money, and why careful attention to founder effects and aesthetics is important. I'm not sure this is anything new, and I guess that you thought about this already.

Prioritization Research for Advancing Wisdom and Intelligence

It's not clear anyone should care about my opinion in "Wisdom and Intelligence"

I just want to flag that I very much appreciate comments, as long as they don’t use dark arts or aggressive techniques.

Even if you aren’t an expert here, your questions can act as valuable data as to what others care about and think. Gauging the audience, so to speak.

At this point I feel like I have a very uncertain stance on what people think about this topic. Comments help here a whole lot.

Prioritization Research for Advancing Wisdom and Intelligence

Less directly, I think caution is good for other interventions, e.g. "Epistemic Security", "Cognitive bias research", "Research management and research environments (for example, understanding what made Bell Labs work)".

I'd also agree that caution is good for many of the listed interventions. To me, that seems to be even more of a case for more prioritization-style research though, which is the main thing I'm arguing for.

Charles He (1mo): Honestly, I think my comment is just focused on "quality control" and preventing harm. Based on your comments, I think it is possible that I am completely aligned with you.

Prioritization Research for Advancing Wisdom and Intelligence

I agree that the existing community (and the EA community) represent much, if not the vast majority, of the value we have now. 

I'm also not particularly excited about lifehacking as a source for serious EA funding. I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.

I did think about "recruiting" as a wisdom/intelligence intervention. This seems more sensitive to the definition of "wisdom/intelligence" than other things, so I left it out here.

I'm not sure ho... (read more)

Charles He (1mo): No, I am not that extreme. It's not clear anyone should care about my opinion in "Wisdom and Intelligence", but I guess this is it:
  • From this list, it seems like there's a set of interventions that EAs have an advantage in. This probably includes "Software/Hardware", e.g. promising AI/computer technologies. Also, these domains have somewhat tangible outputs and can accept weirder cultural/social dynamics. This seems like a great place to be open and weird.
  • Liberalism, culture, and virtue are also really important and should be developed. It also seems good to be ambitious or weird here, but EAs have less of an advantage in this domain. Also, I am worried about the possibility of accidentally canonizing or creating a place where marginal ideas (e.g. reinventions of psychology) are constantly emitted. This will drive out serious people and can bleed into other areas. It seems like the risks can be addressed by careful attention to founder effects. I am guessing you thought about this.
My guess is counterintuitive, but it is that these existing institutions, that are shown to have good leaders, should be increased in quality, using large amounts of funding if necessary.

Prioritization Research for Advancing Wisdom and Intelligence

This tension is one reason why I called this "wisdom and intelligence", and tried to focus on that of "humanity", as opposed to just "intelligence", and in particular, "individual intelligence".

I think that "the wisdom and intelligence of humanity" is much safer to optimize than "the intelligence of a bunch of individuals in isolation". 

If it were the case that "people all know what to do, they just won't do it", then I would agree that wisdom and intelligence aren't that important. However, I think these cases are highly unusual. From what I've seen, in most cases of "big coordination problems", there are considerable amounts of confusion, deception, and stupidity. 

Prioritization Research for Advancing Wisdom and Intelligence

Thanks for the link, I wasn't familiar with them. 

For one, I'm happy for people to have a very low bar to post links to things that might or might not be relevant. 
