All of Ozzie Gooen's Comments + Replies

RyanCarey's Shortform

Interesting take, quick notes:
1) I worked on a similar model with Justin Shovelain a few years back. See: https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects

Rather, one's impact is positive if the ratio of safety and capabilities contributions is greater than the average of the rest of the world.

I haven't quite followed your model, but this doesn't seem exactly correct to me. For example, if the mean player is essentially "causing a lot of net-harm", then "just causing a bit of net-harm", clearl... (read more)

Project: A web platform for crowdsourcing impact estimates of interventions.

Happy to see more discussion on these topics.

Much of this is part of what both some of the EA forecasting community and we at [QURI](https://quantifieduncertainty.org/) are working on.

I think the full thing is much more work than you think it is. I suggest trying to take one subpart of this problem and doing it very well, instead of taking the entire thing on at once. 

2Max Clarke1mo
I've been thinking about which sub-parts to tackle, but I think that the project just isn't very valuable until it has all three of: * A Prediction / estimation aggregation tool * Up-to-date causal models (using a simplified probabilistic programming language) * Very good UX, needed for adoption. It's a lot of work, yes, but that doesn't mean it can't happen. I'm not sure there's a better way to split it up and still have it be valuable. I think the MVP for this project is a pretty high bar. Ways to split it up: * Do the probabilistic programming language first. This isn't really valuable, it's a research project that no one will use. * Do the prediction aggregation part first. This is metaculus. * Do the knowledge graph part first. This is maybe a good start - it's a wiki with better UX? I'm sure someone is scoping this out / doing it. These things empower each other. It's hard, but nevertheless I'd estimate definitely no more than 3 person-years of effort for the following things: * A snappy, good-looking prediction/estimation (web) interface. * A causal model editor with a graph view. * A backend that can update the distributions with monte-carlo simulations. * Rich-text comments and posts attached to models, bets and "markets" (still need a better name than "markets") * I-frames for people to embed the UI elsewhere. What do you estimate?
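
As an aside, a minimal sketch of the "backend that can update the distributions with monte-carlo simulations" piece Max describes — a tiny two-node model whose variables, distributions, and numbers are all invented for illustration, not taken from any existing tool:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000  # samples per model run

# Illustrative two-node causal model: an intervention's cost and its effect, both uncertain.
cost_usd = rng.lognormal(mean=np.log(50_000), sigma=0.5, size=N)    # dollars
people_helped = rng.normal(loc=120, scale=40, size=N).clip(min=1)   # count, floored at 1

cost_per_person = cost_usd / people_helped                          # derived node

# The "backend" just summarizes the resulting distribution for whatever UI sits on top.
summary = {
    "median": round(float(np.median(cost_per_person))),
    "p10": round(float(np.percentile(cost_per_person, 10))),
    "p90": round(float(np.percentile(cost_per_person, 90))),
}
print(summary)
```
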
1Max Clarke1mo
I'd love to make an aggregate estimate for how much work this project would take
Yonatan Cale's Shortform

Here's my super quick take, if I were evaluating this for funding:

Startups are pretty competitive. For me to put money into a business venture, I'd want quite a bit of faith that the team is very strong. This would be a pretty high bar.

From looking at this, it's not clear to me how promising the team is at this point.

Generally, the bar for many sorts of projects is fairly high.

1Yonatan Cale2mo
Ok, for the record this is very far from my guess. The closest thing I said was "Intra Fund don't know how to evaluate startups, and specifically market places"
Yonatan Cale's Shortform

A few quick things:
- I agree that many grantmakers don't have enough time to give much feedback, and that this leads to suboptimal outcomes.
- I think it's pretty difficult for people outside these organizations to help much with what are basically internal processes. People outside have very little context, so I would expect them to have a tough time suggesting many ideas.
- In this specific proposal, I think it would be tricky for it to help much. A lot of what I've seen (which isn't all too much) around grant applications is about people sharing the negat... (read more)

2Vincent van der Holst2mo
I'm the person Yonatan is referring to. His feedback and your general feedback are very helpful, so thank you for that! I have been a lurker within EA for years and will write more content on the EA forum, including requesting feedback on the idea (soon). Hopefully that will help, although I don't know because I didn't get feedback. Before I move into why I think grant makers should provide short feedback I want to be clear: I'm completely comfortable with being rejected and I completely understand that grant makers are very busy. Having said that, I think grant makers should feedback the applications they reject. It doesn't have to be more than 1-2 lines and one minute to write. I have applied to EA 6 months ago and got rejected and applied again last month and got rejected again. I had a lot of encouraging talks with EA's (although criticism as well) and was more convinced this was going to get funding. I have no idea if they hated the idea and they think it will never work, or if they think it doesn't fit them, they are not able to evaluate properly, etc. The potential impact of knowing why is very large. It might help me improve the idea, maximize the impact or pursue other paths that are more impactful and effective. I think that one minute feedback has a high expected value. Knowing why will also help me decide whether to reapply or not, either saving the grant makers future time if I don't or improving the idea so it has more impact if I do. Feedback might help EA get less reapplications of higher quality, increasing overall impact and reducing the time to review. Win-win?
Toward Impact Markets

This looks really interesting, will take me some time to get through.
Very minor thing: Many of the links (the titles) are broken.

5Denis Drescher2mo
Thanks! I think I’ve managed to remove all the broken links!
The Future Fund’s Project Ideas Competition

I think this is neat. 

Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project that has these parameters, our model estimates that you have an 85% chance of failure."

I of course see this as basically a bunch of estimation functions, but you get the idea.
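
As a sketch of what one of those estimation functions might look like — a hypothetical toy model whose inputs and coefficients are entirely made up, standing in for something that would actually be fit to data on past projects:

```python
import math


def estimated_failure_probability(team_size: int, prior_launches: int, months_of_runway: int) -> float:
    """Toy logistic model mapping a few hypothetical project parameters to a failure probability.

    The coefficients are placeholders for illustration, not fitted values.
    """
    score = 1.5 - 0.10 * team_size - 0.40 * prior_launches - 0.05 * months_of_runway
    return 1.0 / (1.0 + math.exp(-score))


# e.g. a solo founder with no prior launches and 6 months of runway
print(f"{estimated_failure_probability(1, 0, 6):.0%} estimated chance of failure")
```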

The Future Fund’s Project Ideas Competition

Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that using it for COVID might be taken as appropriation or similar. I might suggest a different name.

https://en.wikipedia.org/wiki/Never_again 

2Peter Wildeford3mo
Sorry - I was not aware of this
Splitting the timeline as an extinction risk intervention

One quick thought: often, when things are very grim, you're pretty okay taking chances.

Imagine we need 500 units of AI progress in order to save the world. In expectation, we'd get 100. Increasing our amount to 200 doesn't help us; all that matters is whether we can get over 500. In this case, we might want a lot of bifurcation. We'd much prefer a 1% chance of 501 units to a 100% chance of 409 units, for example.

In this case, lots of randomness/bifurcation will increase total expected value (which is correlated with our chances of getting over 500 units, ... (read more)
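
To make the arithmetic concrete, here's a minimal sketch of that comparison (the 500-unit threshold, the 409-unit "safe" option, and the 1%-chance-of-501 option are the numbers from above; the step utility function and everything else are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
THRESHOLD = 500  # units of AI progress needed; utility is 0 below this and 1 at or above it


def expected_utility(samples: np.ndarray) -> float:
    """Expected utility under the discontinuous (step) utility function described above."""
    return float(np.mean(samples >= THRESHOLD))


# "Safe" strategy: a guaranteed 409 units -- never crosses the threshold.
safe = np.full(N, 409)

# "Bifurcated" strategy: 1% chance of 501 units, otherwise far less (100 units assumed).
risky = rng.choice([501, 100], size=N, p=[0.01, 0.99])

print("safe:  ", safe.mean(), expected_utility(safe))    # 409 expected units, 0.00 utility
print("risky: ", risky.mean(), expected_utility(risky))  # ~104 expected units, ~0.01 utility
```

The risky option has far fewer expected units of progress but strictly higher expected utility, which is the sense in which adding randomness can help once the payoff is discontinuous.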

Splitting the timeline as an extinction risk intervention

It seems to me like quantum randomness can be a source of legitimate divergence of outcomes. Let's call this "bifurcation". I could imagine some utility functions for which increasing the bifurcation of outcomes is beneficial. I have a harder time imagining situations where it's negative.

I'd expect that interventions that cause more quantum bifurcation generally have other costs. Like, if I add some randomness to a decision, the decision quality is likely to decrease a bit on average.

So there's a question of the trade-offs of a decrease in th... (read more)

4Ozzie Gooen4mo
One quick thought: often, when things are very grim, you're pretty okay taking chances. Imagine we need 500 units of AI progress in order to save the world. In expectation, we'd get 100. Increasing our amount to 200 doesn't help us; all that matters is whether we can get over 500. In this case, we might want a lot of bifurcation. We'd much prefer a 1% chance of 501 units to a 100% chance of 409 units, for example. In this case, lots of randomness/bifurcation will increase total expected value (which is correlated with our chances of getting over 500 units, more so than with the expected units of progress). I imagine this mainly works with discontinuities, like the function described above (Utility = 0 for units 0 to 499, and Utility = 1 for units of 500+).
Splitting the timeline as an extinction risk intervention

My impression is that you're arguing that quantum randomness creates very large differences between branches. However, couldn't it still be the case that even more differences would be preferable? I'm not sure how much that first argument would impact the expected value of trying to create even more divergences. 

Forecasting Newsletter: Looking back at 2021.

So the incentives are not pointing in the right direction. Capable forecasters can earn significantly more by predicting societally-useless sports stuff, or simply by arbitraging between the big European sports-houses and crypto markets. Meanwhile, the people who remain forecasting socially useful stuff on Metaculus, like whether Russia will invade the Ukraine or whether there will be any new nuclear explosions in wartime, do so to a large extent out of the goodness of their heart.

I think that the clear solution to this is to either increase the overall wi... (read more)
"Should have been hired" Prizes

This is an interesting idea, thanks for raising it!

I think intuitively, it worries me. As someone involved in hiring in these sorts of areas, I'm fairly nervous about the liabilities that come with hiring (legal, and just upsetting people), and this seems like it could increase them.

I'm imagining:

  • There's a person who thinks they're great, but the hiring manager really doesn't see it. They get rejected.
  • They decide to work on it anyway, saying they'll get the money later.
  • They continue to email the org about their recent results, hoping to get feedback, sort of... (read more)
Long-Term Future Fund: July 2021 grant recommendations

Quick thoughts of possible improvements to the format:

1) Make both the start time and the end time clear
2) Include a link to the person's website/LinkedIn. Right now I just search each person and choose whatever is on top of Google, anyway. Many of the grants really depend on the specific person, so linking to more information about the person would be valuable. (I realize this might be a bit annoying in terms of getting their buy-in/information)

2Jonas Vollmer4mo
Noted, thanks!
Long-Term Future Fund: July 2021 grant recommendations

Just to clarify:

That last report was for May 2021.

So does this report mainly cover grants for June and July only?

Confusingly, the report called "May 2021" was for grants we made through March and early April of 2021, so this report includes most of April, May, June, and July.

I think we're going to standardize now so that reports refer to the months they cover, rather than the month they're released.

elifland's Shortform

This post is relevant: https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental

elifland's Shortform

The health interventions seem very different from the productivity interventions to me.

The health interventions have issues with long time-scales, which productivity interventions don't have as much.

However, productivity interventions have major challenges with generality. When I've looked into studies around productivity interventions, often they're done in highly constrained environments, or environments very different from mine, and I have very little clue what to really make of them. If the results are highly promising, I'm particularly skeptical, so i... (read more)

3Ozzie Gooen4mo
This post is relevant: https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental
3elifland4mo
This all makes sense to me overall. I'm still excited about this idea (slightly less so than before) but I think/agree there should be careful considerations on which interventions make the most sense to test. A few things come to mind here: 1. The point on the amount of evidence Google/Amazon not doing it provides feels related to the discussion around our corporate prediction market analysis [https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting] . Note that I was the author who probably took the evidence that most corporations discontinued their prediction markets as the most weak (see my conclusion [https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting#Eli_Lifland] ), though I still think it's fairly substantial. 2. I also agree with the point in your reply [https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting?commentId=LkzaDDujG7WF5ZnKs#Eli_Lifland] that setting up prediction markets and learning from them has positive externalities, and a similar thing should apply here. 3. I agree that more data collection tools for what already happens and other innovations in that vein seem good as well!
elifland's Shortform

I think the obvious answer is that doing controlled trials in these areas is a whole lot of work/expense for the benefit.

Some things like health effects can take a long time to play out; maybe 10-50 years. And I wouldn't expect the difference to be particularly amazing. (I'd be surprised if the average person could increase their productivity by more than ~20% with any of those)

On "challenge trials"; I imagine the big question is how difficult it would be to convince people to accept a very different lifestyle for a long time. I'm not sure if it's called "challenge trial" in this case. 

1Misha_Yagudin4mo
It wouldn't shock me if an average vegan diet decreased lifetime productivity by more than 20% by malnutrition -> mental health link.
1elifland4mo
I think our main disagreement is around the likely effect sizes; e.g. I think blocking out focused work could easily have an effect size of >50% (but am pretty uncertain which is why I want the trial!). I agree about long-term effects being a concern, particularly depending on one's TAI timelines. Yeah, I'm most excited about challenges that last more like a few months to a year, though this isn't ideal in all domains (e.g. veganism), so maybe this wasn't best as the top example. I have no strong views on terminology.
Concrete Biosecurity Projects (some of which could be big)

That sounds like much of it.  To be clear, it's not that the list is obvious, but more that it seems fairly obvious that a similar list was possible. It seemed pretty clear to me a few years ago that there must be some reasonable lists of non-info-hazard countermeasures that we could work on, for general-purpose bio safety. I didn't have these particular measures in mind, but figured that roughly similar ones would be viable.

Another part of my view is,
"Could we have hired a few people to work full-time coming up with a list about this good, a few year... (read more)

Concrete Biosecurity Projects (some of which could be big)

Really happy to see this, this looks great. 

This is outside the scope of this document, but I'm a bit curious how useful it would have been to have such a list 3-5 years ago, and why it took so long. Previously I heard something like, "biosecurity is filled with info-hazards, so we can't have many people in it yet."

Anyway, it makes a lot of sense to me that we have pretty safe intervention options after all, and I'm happy to see lists being created and acted upon.

5eca4mo
thanks for the kind words! I agree that we didn't have much good stuff for ppl to do 4 yrs ago when i started in bio but don't feel like my model matches yours regarding why. But I'm also wanting to confirm I've understood what you are looking for before I ramble. How much would you agree with this description of what I could imagine filling in from what you said re 'why it took so long': "well I looked at this list of projects, and it didn't seem all that non-obvious to me, and so the default explanation of 'it just took a long time to work out these projects' doesn't seem to answer the question" (TBC, I think this would be a very reasonable read of the piece, and I'm not interpreting your question to be critical tho also obviously fine if it is hahah)

The authors will have a more-informed answer, but my understanding is that part of the answer is "some 'disentanglement' work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios)."

I mention this so that I can bemoan the fact that I think we don't have a similar list of large-scale, clearly-net-positive projects for the purpose of AI x-risk reduction, in part because (I think) the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). Th... (read more)

7Davidmanheim4mo
More than infohazards, we were still building capacity and understanding of the area. But many of these were highlighted in earlier work, including decade of reports from Center for Health Security, etc. (Not to mention my paper with Dave Denkenberger [https://www.researchgate.net/publication/339874624_Review_of_Potential_High-Leverage_and_Inexpensive_Mitigations_for_Reducing_Risk_in_Epidemics_and_Pandemics] .)

I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It's a lot less risky to openly talk about those than about technologies like antivirals or vaccines.

I'm happy to see this shift, as concrete lists like this will likely motivate more people to enter the space. 

4Bridges4mo
I don't think any of the info hazards are mentioned here, but you're right that good lists like this are a long time coming. I haven't heard that biosec folks actively didn't want people in the field though-- would be interested in who said that.
The phrase “hard-core EAs” does more harm than good

I don't really use the word myself (at least, I don't remember using it), but I sometimes do say things like "intense utilitarian" or "intense worker."

I'd vote against "Drank the Kool-Aid EAs." It's a super dark metaphor: that of an altruistic sect that turned into a cult and committed mass suicide. I get that it's meant as a joke, but it feels like a bit much to me.

https://en.wikipedia.org/wiki/Drinking_the_Kool-Aid
 

The phrase originates from events in Jonestown, Guyana, on November 18, 1978, in which over 900 members of the Peoples Temple movem... (read more)
3Harrison Durland5mo
This ^ I immediately went to the comments to make the same point when I read that (and re-read it twice to make sure it wasn’t just satire).
Prediction Markets in The Corporate Setting

On the whole I liked this a lot, and I broadly agree. 

Around "academics being too optimistic": I've seen similar a few times before and am pretty tired of it at this point. I'm happy that interesting ideas are brought forward, but I think the bias is pretty harmful. In fairness though, this is really a community issue; if our community epistemics were better, than the overconfidence of academic takes wouldn't have lead to much overconfidence of community beliefs.

Some thoughts:
1.  I agree that the implementation of "general purpose many-emplo... (read more)

3Paal Fredrik Skjørten Kvarberg4mo
On 4., I very much agree that this section could be more nuanced by mentioning some positive side-effects as well. There might be many managers who fear being undermined by their employees. And surely many employees might feel shameful if they are wrong all the time. However, I think the converse is also true. That managers are insecure, and would love for the company to take decisions on complex hard to determine issues collectively. And that employees would like an arena to express their thoughts on things (where their judgments are heard, and maybe even serves to influence company strategy). I think this is an important consideration that didn't get through very clearly. There are other plausible goods of prediction markets that aren't mentioned in the value prop, but which might be relevant to their expected value.
4NunoSempere5mo
I think I'd sort of encountered the issue theoretically, and maybe some ambiguous cases, but I researched this one at some depth, and it was more shocking. Fair point on 2. (prediction markets being too restrictive) and 3. () 4. I think is a feature of the report being aimed at a particular company, so considerations around e.g., office politics making prediction markets fail are still important. As you kind of point out, overall this isn't really the report I would have written for EA, and I'm glad I got bought out of that. 5. I don't think this is what we meant, e.g., see: I.e., we agree that small experiments (e.g., "Delphi-like automatic prediction markets built on top of dead-simple polls") are great. This could maybe have been expressed more clearly. On the other hand, I didn't really have the impression that there was someone inside Upstart willing to put in the time to do the experiments if we didn't. 6. Sure. One thing we were afraid was cultures sort of having the incentive to pretend they were more candid that they really are. Social desirability bias feels strong. 7. (experimentation having positive externalities.) Yep!
Why don't governments seem to mind that companies are explicitly trying to make AGIs?

I think market sentiment is a bit complicated. Very few investors are talking about AGI, but organizations like OpenAI still seem to think that talking about AGI is good marketing for them (for talent, and I'm sure for money, later on).  

I think most of the Anthropic investment was from people close to effective altruism: Jaan Tallinn, Dustin Moskovitz, and Center for Emerging Risk Research, for example. 
https://www.anthropic.com/news/announcement

On why those people left OpenAI, I'm not at all an expert here. I think it's common for different tea... (read more)

EA/Rationalist Safety Nets: Promising, but Arduous

That all sounds pretty good to me. I like the idea of a wide variety of means of support; both to try out more things (it's hard to tell what would work in advance), and because it's probably a better solution long-term. 

EA/Rationalist Safety Nets: Promising, but Arduous

Will do. No one comes to mind now, but if someone does, I'll let you know.

(Also, others reading this who have ideas should send them to Bob.)

EA/Rationalist Safety Nets: Promising, but Arduous

Good point about focusing on money; this post was originally written differently, then I tried making it more broad, but I think it wound up being more disjointed than I would have liked.

First, I’d also be very curious about interventions other than money.

Second though, I think that “money combined with services” might be the most straightforward strategy for most of the benefits except for friends.

“Pretty strong services” to help set people up with mental and physical health support could exist, along with insurance setups. I think that setting up new ser... (read more)

EA/Rationalist Safety Nets: Promising, but Arduous

I think this is a serious question.

One big question is whether this would be viewed more as a "community membership" thing or as a "directly impactful" intervention. I could imagine the two being pretty different from one another.

I think personally I'm more excited by the second, because it seems more scalable. 

The way I would view the "utilitarian intervention" version would be pretty intense, and much unlike almost all social programs, but it would be effective.
1. "Fairness" is a tricky word. The main thing that matters is who's expected to produce value.&n... (read more)

1Charles He5mo
Random comment: Do you or anyone else have any comments about the use of terminology with negative connotations, like “gatekeeping” or “elite”? Background (unnecessary to read): Basically I’ve been using the word “gatekeeping” a fair bit. This word seems to be an accurate description of principled, prosocial activity to create functional teams or institutions. It includes activities no one finds surprising there is control over, such as grant making. To see this another way, basically, someone somewhere (Party A) has given funding to achieve maximum impact for something (Party B), and we need people (Party C) to cause this to happen in some way. We owe Party A and B a lot, and that usually includes some sort of selection/control over party C. Also, I think that “gatekeeping” seems particularly important in the early stages of founding a cause area or set of initiatives, where such activity seems necessary or has to occur by definition. In these situations, it seems less vulnerable to real or perceived abuse or at least insularity, at the same time it seems useful and virtuous to signpost and explain what gatekeeping is and what the parameters and intentions are. However, gatekeeping is basically a slur in common use [https://www.urbandictionary.com/define.php?term=Gatekeeping]. Now, “elite” has the same problem ("elitism"). It is also an important, genuine and technical thing to consider and signpost, but it can also be associated with real or perceived misuse. Maybe it’s tenable if I use just "gatekeeping". I’m worried if I start passing docs, posts or comments around, filled mention of both “gatekeeping” and “elites" and terms of art from who knows what else (from various disciplines, not just EA), it might offend or at least look insensitive. I guess I can change the words with another. However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons, for example imposing cognitive/jargon cost
1Charles He5mo
Hi Ozzie, This seems excellent and I learned a lot from this comment and your post. I agree with the impactfulness argument you have made and its potential. It seems important in being much larger scale. It might even ease other types of giving into the community somehow (because you might develop a competent, strong institution). It's also impactful, by design. Also, as you suggest, finding very valuable, non-EA people to execute causes seems like a pure win [1]. Now, it seems I have a grant by a major funder of EA longtermism projects. Related to this, I am researching (or really just talking about) a financial aid project to what you described. This isn't approved or even asked for by the grant maker, but there seems to be some possibility it will happen. (But not more than a 50% chance though). Your thoughts would be valuable and I might contact you. I might copy and paste some content from the document into the above comment to get feedback and ideas. [1] But finding and funding such people also seems difficult. My guess that people who do this well (e.g. Peter Thiel of Thiel Fellows) are established in related activities or connected, to an extraordinary degree. My guess is that this activity of finding and choosing people seems structurally similar to grant making, such as GiveWell. I think that successive grantmakers for alternate causes in EA have a mixed track record compared to the original. Maybe this is because the inputs are deceptively hard and somewhat illegible from the outside.
EA/Rationalist Safety Nets: Promising, but Arduous

Yep. Sorry, I didn't mean to make it seem like it was. Changed. 

EA/Rationalist Safety Nets: Promising, but Arduous

I agree that your proposal gets around most (maybe all?) of the issues I mentioned. However, your proposal focuses on earning-to-givers who have already given a fair bit; this seems to be tackling a minority of the problem (maybe 20%?). Maybe this is a good place to begin. I feel like I haven't met many people in this specific camp, but maybe there are more out there.
 

Do you agree with this?

I'm happy to see it on a small scale. That said, the existing discussion/debate doesn't seem like all too much to me. I also feel like there could be some ea... (read more)

1bob5mo
Ah, that's where we went wrong. I assumed you would have mentioned that if you thought so. I agree, and it is quite challenging to determine the size of that minority. If anyone knows anyone who has been in this situation, please send me a message.
EA/Rationalist Safety Nets: Promising, but Arduous

Agreed that higher salaries could help (and are already helping). Another nice benefit is that they can also be useful for the broader community; more senior people will have more money to help out more junior people, here and there.

I imagine if there were an insurance product, it would be subsidized a fair amount. My hope would be that we could have more trust than would exist for a regular insurance agency, but I'm not sure how big of a difference this would make.

13 Very Different Stances on AGI

Yea; this was done with a search for "AGI". There's no great semantic search yet, but I could see that as a thing in the future.
I added a quick comment in this section about it.

Stefan_Schubert's Shortform

+1

That said, I think I might prefer even more some sort of emoji system, where there were emojis to represent each of the 4 dimensions, but also the option to add more emojis.

Ozzie Gooen's Shortform

Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved and work directly with Nuno, who was funded).

ACX Grants were just announced: ~$1.5 million, from a few donors that included Vitalik.

https://astralcodexten.substack.com/p/acx-grants-results

Quick thoughts:

  • In comparison to the LTFF, I think the average grant is more generically exciting, but less effective-altruism-focused. (As expected)
  • Lots of tiny grants (<$10k); $150k is the largest one.
  • These rapid grant programs really seem great and I look forward to the... (read more)
Prioritization Research for Advancing Wisdom and Intelligence

Relevant:
I just came across this LessWrong post about a digital tutor that seemed really impressive. It would count there.

https://www.lesswrong.com/posts/vbWBJGWyWyKyoxLBe/darpa-digital-tutor-four-months-to-total-technical-expertise

Democratising Risk - or how EA deals with critics

I might be able to provide a bit of context:

I think the devil is really in the details here. I think there are some reasonable versions of this. 

The big question is why and how you're criticizing people, and what that reveals about your beliefs (and what those beliefs are).

As an extreme example, imagine if a trusted researcher came out publicly, saying,
"EA is a danger to humanity because it's stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA dow... (read more)

This is all reasonable but none of your comment addresses the part where I'm confused. I'm confused about someone saying something that's either literally the following sentence, or identical in meaning to: 

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding." 

If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.


That part of the example ma... (read more)

13 Very Different Stances on AGI

Interesting, thanks for sharing! 

I imagine many others here (myself included) will be skeptical of a few parts:
1. Narrow AI will be just as good, particularly for similar development costs. (To me it seems dramatically more work-intensive to make enough narrow AIs.)
2. The idea that really powerful narrow AIs won't speed up AGIs and similar.
3. Your timelines (very soon) might be sooner than most others' here, though it's not clear exactly what "possible" means. (I'm sure some here would put some probability mass 5-10yrs out, just a different amount.)
 ... (read more)

1Anthony Repetto5mo
[[Addendum: narrow AI now only needs ten examples from a limited training set [https://yilundu.github.io/ndf/], in order to generalize outside that distribution... so, designing numerous narrow AI will likely be easy & automated, too, and they will proliferate and diversify the same way arthropods have. Even the language-model Codex can write functioning code for an AI system, so AutoML in general makes narrow AI feasible. I expect most AI should be as dumb as possible without failing often. And never let paperclip-machines learn about missiles!]]
3Anthony Repetto5mo
Oh, I am well aware that most folks are skeptical of "narrow AI is good enough" and "5-10yrs to AGI"! :) I am not bothered by sounding wrong to the majority. [For example, when I wrote in Feb. 2020 [https://medium.com/predict/the-coronavirus-market-7e26c9acddfe], (back when the stock market had only begun to dip and Italy hadn't yet locked-down) that the coronavirus would cause a supply-chain disruption at the moment we sought to recover & re-open, which would add a few percent to prices and prolonged lag to the system, everyone else thought we would see a "sharp V recovery in a couple months". I usually sound crazy at first.] ...and I meant "possible" in the sense that doing so would be "within the budget of a large institution". Whether they take that gamble or not is where I focus the "playing it safe might be safe" strategy. If narrow AI is good enough, then we aren't flat-footed for avoiding AGI. Promoting narrow AI applications, as a result, diminishes the allure of implementing AGI. Additionally, I should clarify that I think narrow AI is already starting to "FOOM" a little, in the sense that it is feeding itself gains with less and less of our own creative input. A self-accelerating self-improvement, though narrow AI still has humans-in-the-loop. These self-discovered improvements will accelerate AGI as well. Numerous processes, from chip layout to the material science of fabrication, and even the discovery of superior algorithms to run on quantum computers, all will see multiples that feed back into the whole program of AGI development, a sort of "distributed FOOM". Algorithms for intelligence themselves, however, probably have only a 100x or so improvement left, and those gains are likely to be lumpy. Additionally, narrow AI is likely to make enough of those discoveries soon that the work left-over for AGI is much more difficult, preventing the pot from FOOMing-over completely. [[And, a side-note: we are only now approaching the 6 year anniversary [ht
13 Very Different Stances on AGI

This sounds a lot like #3 and #1 to me. It seems like you might have slightly unique intuitions on the parameters (chance of success, our ability to do things about it), but the rough shape seems similar.

1acylhalide5mo
Makes sense. Neither #1 or #3 explicitly mentions policy and awareness work (to prevent misaligned AI being deployed) as worth doing, though, I think there are some who are optimistic about it.
13 Very Different Stances on AGI

Fair point!  I think I'll instead just encourage people to read the comments. Ideally, more elements will be introduced over time, and I don't want to have to keep on updating the list (and the title). 

4Question Mark5mo
Here's a chart I found of existential risks that includes S-risks.
13 Very Different Stances on AGI

Interesting, I wasn't at all thinking about the orthogonality thesis or moral realism when writing that.  I was thinking a bit about people who:
1) Start out wanting to do lots of tech or AI work.
2) Find out about AGI and AGI risks.
3) Conclude on some worldview where doing lots of tech or AI work is the best thing for AGI success. 

1acylhalide5mo
Got it. Does "AGI successs" mean building AGI or an aligned one?
Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Ah, that’s really good to know… and kind of depressing. Thanks so much.

Can/should we automate most human decisions, pre-AGI?

Good idea. 

I think it's difficult to get people to enter a huge amount of data on a website like that. Maybe you could scrape forums or something to get information.

I imagine that some specific sorts of decisions will be dramatically more tractable to work on than others. 

9fjcl5mo
Thanks! Here just another recent example: https://mobile.twitter.com/fchollet/status/1473656408713441285 [https://mobile.twitter.com/fchollet/status/1473656408713441285]
Making large donation decisions as a person focused on direct work

Some quick thoughts:

1. Kudos for donating that much, even with a direct work position.

2. I've been helping out Patrick a bit, and wound up deciding between Longview and EA Funds. Both seem pretty strong and like they could absorb more money. If you don't want to spend too much time on deciding, these seem pretty safe. (Note that Longview is longtermist.)

3. EA is vetting constrained, and I'd guess you would be better than many at vetting (particularly those who aren't currently funders). I'd be curious what you'd come up with if you were to spend time on thi... (read more)

3catherio5mo
I agree with this!
EA Infrastructure Fund: May–August 2021 grant recommendations

Good points. I think I agree that being able to offer grants in between $1k-$5k seems pretty useful. If they get to be a pain, I imagine there will be ways to lessen the marginal costs. 

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Sorry if my post made it seem that way, but I don't feel like I've been thinking of it that way. In fact, it's sort of worse if it's not a single actor; many different departments could have done something about this, but none of them seemed to take public action.

I'm not sure how to understand your second sentence exactly. It seems pretty different from your first sentence, from what I can tell?

A multi-actor system is constrained in ways that a group of single actors are not. Individual agencies can't do their own thing publicly, and you can't see what they are doing privately.

For the agencies that do pay attention, they can't publicly respond - and the lack of public monitoring and response by government agencies which can slap new regulations on individual companies or individuals is what separates a liberal state from a dictatorship. If US DOD notices something, they really, really aren't allowed to respond publicly, especially in ways that wo... (read more)

Can/should we automate most human decisions, pre-AGI?

I don't know if there's a high-leverage point where a few EAs or even the entire EA community can come in and bring a lot of change.

From my perspective, most decision automation is highly neglected; I don't know why, but things seem to be moving really slowly right now, especially for the big-picture sorts of decisions effective altruists care about. I don't know of any startups trying to help people make career decisions using probability distributions / expected values, for example. (Or most of the other questions I listed in this documen... (read more)
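
For what it's worth, a minimal sketch of the kind of tool that sentence is gesturing at — comparing two career options via sampled impact distributions, where the options, probabilities, and units are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Two hypothetical career options with uncertain lifetime impact (arbitrary units).
stay_in_industry = rng.lognormal(mean=np.log(30), sigma=0.4, size=N)
switch_to_research = np.where(
    rng.random(N) < 0.3,                                  # assumed 30% chance the switch works out
    rng.lognormal(mean=np.log(200), sigma=0.6, size=N),   # ...in which case impact is much larger
    rng.lognormal(mean=np.log(5), sigma=0.5, size=N),     # ...otherwise it's much smaller
)

for name, samples in [("stay in industry", stay_in_industry), ("switch to research", switch_to_research)]:
    p10, p90 = np.percentile(samples, [10, 90])
    print(f"{name:20s} expected: {samples.mean():6.1f}   10th-90th pct: {p10:.1f}-{p90:.1f}")
```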

1acylhalide5mo
I see. Any guesses on the reason(s)? My initial reaction is these seem hard to automate with today's AI, and a human can do a better job than an AI in 10-30 seconds. So that's all the time you'll be saving in return for inferior work. (Although I'm not an AI person, so I'm happy to be proven wrong.) I'd be keen to know if/how decision automation can increase the likelihood we elect better politicians. I see your point about it not being black-and-white, and perhaps there are narrow "easy" decisions that directly impact very important decisions. I just think it would be better to list them out specifically.