This is a special post for quick takes by Ozzie Gooen.

(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI.

Then, when they raised money in 2019, they included a clause saying that returns for investors would be capped at 100x their investment.

"Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

"We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission on the charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

  1. Make AGI
  2. Turn AGI into huge profits
  3. Give 100x returns to investors
  4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
  5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI).

This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be. (The Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors.

But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal? My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.


(Aside on the details of Step 5)
I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

  • Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
  • Arguably, OpenAI doesn’t really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
  • I assume it’s really hard to actually put together any reasonable plan now for Step 5. 

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (wouldn't upset any group too much if they were to understand it) Step 5 would look like. There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count); having better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/

[2] https://news.ycombinator.com/item?id=19360709

[3] https://openai.com/charter/
[4] This was called a "decisive strategic advantage" in Nick Bostrom's book Superintelligence.

[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/


Also, see:
https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html
Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm

https://moores.samaltman.com/

https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/

Personal reflections on self-worth and EA

My sense of self-worth often comes from guessing what people I respect think of me and my work. 

In EA... this is precarious. The most obvious people to listen to are the senior/powerful EAs.

In my experience, many senior/powerful EAs I know:
1. Are very focused on specific domains.
2. Are extremely busy.
3. Have substantial privileges (exceptionally intelligent, stable health, esteemed education, affluent/intellectual backgrounds).
4. Display limited social empathy (the ability to read and respond to the emotions of others).
5. Sometimes might actively try not to sympathize/empathize with many people, because they are judging them for grants and don't want to be biased. (I suspect this is the case for grantmakers.)
6. Are not that interested in acting as a coach/mentor/evaluator to people outside their key areas/organizations.
7. Don't intend or want others to care too much about what they think outside of cause-specific promotion and a few pet ideas they want to advance.

A parallel can be drawn with the world of sports. Top athletes can make poor coaches. Their innate talent and advantages often leave them detached from the experiences of others. I'm reminded of David Foster Wallace's How Tracy Austin Broke My Heart.

If you're a tennis player, tying your self-worth to what Roger Federer thinks of you is not wise. Top athletes are often egotistical, narrow-minded, and ambivalent to others. This sort of makes sense by design - to become a top athlete, you often have to obsess over your own abilities to an unnatural extent for a very long period.

Good managers are often meant to be better as coaches than as direct contributors. In EA, I think those in charge seem more like "top individual contributors and researchers" than "top managers." Many actively dislike management or claim that they're not doing management. (I believe funders typically don't see their work as "management", which might be very reasonable.)

But that said, even a good class of managers wouldn't fully solve the self-worth issue. Tying your self-worth too much to your boss can be dangerous - your boss already has much power and control over you, so adding your self-worth to the mix seems extra precarious.

I think if I were to ask any senior EA I know, "Should I tie my self-worth with your opinion of me?" they would say something like,

"Are you insane? I barely know you or your work. I can't at all afford the time to evaluate your life and work enough to form an opinion that I'd suggest you take really seriously."

They have enough problems - they don't want to additionally worry about others trying to use them as judges of personal value.

But this raises the question, Who, if anyone, should I trust to inform my self-worth?

Navigating intellectual and rationalist literature, I've grown skeptical of many other potential evaluators. Self-judgment carries inherent bias and is easy to Goodhart. Many "personal coaches" and even "executive coaches" raise my epistemic alarm bells. Friends, family, and people who are "more junior" come with different substantial biases.

Some favored options are "friends of a similar professional class who could provide long-standing perspective" and "professional coaches/therapists/advisors."

I'm not satisfied with any obvious options here. I think my next move forward is to acknowledge that my current situation seems subpar and continue reflecting on this topic. I've dug into the literature a bit but haven't yet found answers I find compelling.

Who, if anyone, should I trust to inform my self-worth?

My initial thought is that it is pretty risky/tricky/dangerous to depend on external things for a sense of self-worth? I know that I certainly am very far away from an Epictetus-like extreme, but I try to not depend on the perspectives of other people for my self-worth. (This is aspirational, of course. A breakup or a job loss or a person I like telling me they don't like me will hurt and I'll feel bad for a while.)

A simplistic little thought experiment I've fiddled with: if I went to a new place where I didn't know anyone and just started over, then what? Nobody knows you, and your social circle starts from scratch. That doesn't mean that you don't have worth as a human being (although it might mean that you don't have any worth in the 'economic' sense of other people wanting you, which is very different).

There might also be an intrinsic/extrinsic angle to this. If you evaluate yourself based on accomplishments, outputs, achievements, and so on, that has a very different feeling than the deep contentment of being okay as you are.

In another comment Austin mentions revenue and funding, but that seems to measure something VERY different from a sense of self-worth (although I recognize that there are influential parts of society in which wealth or career success are seen as proxies for worth). In favorable market conditions I have high self-worth?

I would roughly agree with your idea of "trying not to tie my emotional state to my track record." 

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like, how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here also the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".

It's funny, I think you'd definitely be in the list of people I respect and care about their opinion of me. I think it's just imposter syndrome all the way up.

Personally, one thing that seemed to work a bit for me is to find peers whom I highly appreciate and respect, and schedule weekly calls with them to help me prioritize and focus, and give me feedback.

A few possibilities from startup land:

  • derive worth from how helpful your users find your product
  • chase numbers! usage, revenue, funding, impact, etc. Sam Altman has a line like "focus on adding another 0 to your success metric"
  • the intrinsic sense of having built something cool

After transitioning from for-profit entrepreneurship to co-leading a non-profit in the effective altruism space, I struggle to identify clear metrics to optimize for. Funding is a potential metric, but it is unreliable due to fluctuations in donors' interests. The success of individual programs, such as user engagement with free products or services, may not accurately reflect their impact compared to other potential initiatives. Furthermore, creating something impressive doesn't necessarily mean it's useful. 

Lacking a solid impact evaluation model, I find myself defaulting to measuring success by hours worked, despite recognizing the diminishing returns and increased burnout risk this approach entails.


This is brave of you to share. It sounds like there are a few related issues going on. I have a few thoughts that may or may not be helpful:

  1. Firstly, you want to do well and improve in your work, and you want some feedback on that from people who are informed and have good judgment. The obvious candidates in the EA ecosystem are people who actually aren't well suited to give this feedback to you. This is tough. I don't have any advice to give you here. 
  2. However it also sounds like there are some therapeutic issues at play. You mention therapists as a favored option but one you're not satisfied with and I'm wondering why? Personally I suspect that making progress on any therapeutic issues that may be at play may also end up helping with the professional feedback problem. 
  3. I think you've unfairly dismissed the best option as to who you can trust: yourself. That you have biases and flaws is not an argument against trusting yourself because everyone and everything has biases and flaws! Which person or AI are you going to find that doesn't have some inherent bias or ability to Goodhart?

Five reasons why I think it's unhelpful connecting our intrinsic worth to our instrumental worth (or anything aside from being conscious beings):

  1. Undermines care for others (and ourselves): chickens have limited instrumental worth and often do morally questionable things. I still reckon chickens and their suffering are worthy of care. (And same argument for human babies, disabled people and myself)
  2. Constrains effective work: continually assessing our self-worth can be exhausting (leaving less time/attention/energy for actually doing helpful work). For example, it can be difficult to calmly take on constructive feedback (on our work, or our instrumental strengths and weaknesses) when our self-worth is on the line.
  3. Constrains our personal wellbeing and relationships: I've personally found it hard to enjoy life when continuously questioning my self-worth and feeling guilty/shameful when the answer seems negative
  4. Very hard to answer: including because the assessment may need to be continuously updated based on the new evidence from each new second of our lives
  5. Seems pointless to answer (to me): how would accurately measuring our self-worth (against a questionable benchmark) make things better? We could live in a world where all beings are ranked so that more 'worthy' beings can appropriately feel superior, and less 'worthy' beings can appropriately feel 'not enough'. This world doesn't seem great from my perspective.

Despite thinking these things, I often unintentionally get caught up muddling my self-worth with my instrumental worth (I can relate to the post and comments on here!). I've found 'mindful self-compassion' super helpful for doing less of this.

This is an interesting post and seems basically right to me, thanks for sharing.

Thank you, this very much resonates with me

The most obvious moves, to me, eventually, are to either be intensely neutral (as in, trying not to tie my emotional state to my track record), or to iterate on using AI to help here (futuristic and potentially dangerous, but with other nice properties).

How would you use AI here?

A very simple example is, "Feed a log of your activity into an LLM with a good prompt, and have it respond with assessments of how well you're doing vs. your potential at the time, and where/how you can improve." You'd be free to argue points or whatever. 
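
A minimal sketch of what this could look like, assuming the `openai` Python client as the LLM backend; the model name, prompt, and log contents here are placeholders I've made up, not part of the original suggestion:

```python
# Rough sketch (not a recommendation): ask an LLM to review a daily activity log.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment;
# the model name, prompt, and log contents are placeholders.
from openai import OpenAI

client = OpenAI()

activity_log = """
09:00-11:00  Drafted evaluation doc
11:00-11:30  Calls
14:00-17:00  Reviewed grant applications
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a candid personal reviewer. Given a log of someone's day, "
                "assess how well they did relative to their realistic potential at the time, "
                "and suggest one or two concrete improvements."
            ),
        },
        {"role": "user", "content": activity_log},
    ],
)
print(response.choices[0].message.content)
```

You could then push back on the assessment in follow-up messages, as described above.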

Reading this comment makes me think that you are basing your self-worth on your work output. I don't have anything concrete to point to, but I suspect that this might have negative effects on happiness, and that being less outcome dependent will tend to result in a better emotional state.

That's cool. I had the thought of developing a "personal manager" for myself of some form for roughly similar purposes

I really don't like the trend of posts saying that "EA/EAs need to | should do X or Y".

EA is about cost-benefit analysis. The phrases "need" and "should" imply binaries/absolutes and very high confidence.

I'm sure there are thousands of interventions/measures that would be positive-EV for EA to engage with. I don't want to see thousands of posts loudly declaring "EA MUST ENACT MEASURE X" and "EAs SHOULD ALL DO THING Y," in cases where these mostly seem like un-vetted interesting ideas. 

In almost all cases where I see the phrase, I think it would be much better replaced with things like:
"Doing X would be high-EV"
"X could be very good for EA"
"Y: Costs and Benefits" (with information in the post arguing the benefits are worth it)
"Benefits/Upsides of X" (if you think the upsides are particularly underrepresented)

I think it's probably fine to use the word "need" either when it's paired with an outcome (EA needs to do more outreach to become more popular) or when the issue is fairly clearly existential (the US needs to ensure that nuclear risk is low). It's also fine to use should in the right context, but it's not a word to over-use. 

Related (and classic) post in case others aren't aware: EA should taboo "EA should".

Lizka makes a slightly different argument, but reaches a similar conclusion.

Strong disagree. If the proponent of an intervention/cause area believes its advancement is so high-EV that it would be very imprudent for EA resources not to advance it, they should use strong language.

I think EAs are too eager to hedge their language and use weak language regarding promising ideas.

For example, I have no compunction saying that Profit for Good (companies with charities in the vast-majority shareholder position) needs to be advanced by EA, in that I believe not doing so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.

https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

What about social norms, like "EA should encourage people to take care of their mental health even if it means they have less short-term impact"?

Good question.

First, I have a different issue with that phrase, as it's not clear what "EA" is. To me, EA doesn't seem like an agent. You can say, "....CEA should" or "...OP should".

Normally, I prefer one says "I think X should". There are some contexts, specifically small ones (talking to a few people, it's clearly conversational) where saying, "X should do Y" clearly means "I feel like X should do Y, but I'm not sure". And there are some contexts where it means "I'm extremely confident X should do Y".

For example, there's a big difference between saying "X should do Y" to a small group of friends, when discussing uncertain claims, and writing a mass-market book titled "X should do Y". 

I haven't noticed this trend, could you list a couple of articles like this? Or even DM me if you're not comfortable listing them here.

There are a couple of strong "shoulds" in the EA Handbook (I went through it over the last two months as part of an EA Virtual program) and they stood out to me as the most disagreeable part of EA philosophy that was presented.

Some musicians have multiple alter-egos that they use to communicate information from different perspectives. MF Doom released albums under several alter-egos; he even used these aliases to criticize his previous aliases.

Some musicians, like Madonna, just continued to "re-invent" themselves every few years.

Youtube personalities often feature themselves dressed as different personalities to represent different viewpoints. 

It's really difficult to keep a single understood identity, while also conveying different kinds of information.

Narrow identities are important for a lot of reasons. I think the main one is predictability, similar to a company brand. If your identity seems to dramatically change hour to hour, people wouldn't be able to predict your behavior, so fewer could interact or engage with you in ways they'd feel comfortable with.

However, narrow identities can also be suffocating. They restrict what you can say and how people will interpret that. You can simply say more things in more ways if you can change identities. So having multiple identities can be a really useful tool.

Sadly, most academics and intellectuals can only really have one public identity.

---

EA researchers currently act this way.

In EA, it's generally really important to be seen as calibrated and reasonable, so people correspondingly prioritize that in their public (and then private) identities. I've done this. But it comes with a cost.

One obvious (though unorthodox) way around this is to allow researchers to post content under aliases. It could be fine if the identity of the author is known, as long as readers can keep these aliases distinct.

I've been considering how to best do this myself. My regular EA Forum name is just "Ozzie Gooen". Possible aliases would likely be adjustments to this name.

- "Angry Ozzie Gooen" (or "Disagreeable Ozzie Gooen")

- "Tech Bro Ozzie Gooen"

- "Utility-bot 352d3"

These would be used to communicate in very different styles, with me attempting to match what I'd expect readers to expect of those styles.

(Normally this is done to represent viewpoints other than the ones the author actually holds, but sometimes it's to represent viewpoints they do hold but wouldn't normally share.)

Facebook Discussion

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for the areas outside of EA that we divert this talent from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields.

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

A few junior/summer effective altruism related research fellowships are ending, and I’m getting to see some of the research pitches.

Lots of confident-looking pictures of people with fancy and impressive sounding projects.

I want to flag that many of the most senior people I know around longtermism are really confused about stuff. And I’m personally often pretty skeptical of those who don’t seem confused.

So I think a good proposal isn’t something like, “What should the EU do about X-risks?” It’s much more like, “A light summary of what a few people so far think about this, and a few considerations that they haven’t yet flagged, but note that I’m really unsure about all of this.”

Many of these problems seem way harder than we'd like them to be, and much harder than many seem to assume at first. (Perhaps this is due to unreasonable demands for rigor, but investigating that would itself be a research effort.)

I imagine a lot of researchers assume they won’t stand out unless they seem to make bold claims. I think this isn’t true for many EA key orgs, though it might be the case that it’s good for some other programs (University roles, perhaps?).

Not sure how to finish this post here. I think part of me wants to encourage junior researchers to lean on humility, but at the same time, I don’t want to shame those who don’t feel like they can do so for reasons of not-being-homeless (or simply having to leave research). I think the easier thing is to slowly spread common knowledge and encourage a culture where proper calibration is just naturally incentivized.

Facebook Thread

Relevant post by Nuño: https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers

Could/should altruistic activist investors buy lots of Twitter stock, then pressure them to do altruistic things?

---

So, Jack Dorsey just resigned from Twitter.

Some people on Hacker News are pointing out that Twitter has had recent issues with activist investors, and that this move might make those investors happy.

https://pxlnv.com/linklog/twitter-fleets-elliott-management/

From a quick look... Twitter stock really hasn't been doing very well. It's almost back at its price in 2014.

Square, Jack Dorsey's other company (he was CEO of two), has done much better. Market cap of over 2x Twitter ($100B), huge gains in the last 4 years.

I'm imagining that if I were Jack... leaving would have been really tempting. On one hand, I'd have Twitter, which isn't really improving, is facing activist investor attacks, and worse, apparently is responsible for global chaos (which I'd barely know how to stop). And on the other hand, there's this really tame payments company with little controversy.

Being CEO of Twitter seems like one of the most thankless big-tech CEO positions around.

That sucks, because it would be really valuable if some great CEO could improve Twitter, for the sake of humanity.

One small silver lining is that the valuation of Twitter is relatively small. It has a market cap of $38B. In comparison, Facebook/Meta is $945B and Netflix is $294B.

So if altruistic interests really wanted to... I imagine they could become activist investors, but like, in a good way? I would naively expect that even with just 30% of the company you could push them to do positive things. $12B to improve global epistemics in a major way.

The US could have even bought Twitter for 4% of the recent $1T infrastructure bill. (though it's probably better that more altruistic ventures do it).
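
A quick back-of-envelope check on those figures (a toy Python sketch, using only the numbers quoted above):

```python
# Sanity-check the figures quoted above.
twitter_market_cap = 38e9        # ~$38B market cap
activist_stake = 0.30 * twitter_market_cap
infrastructure_bill = 1e12       # the ~$1T infrastructure bill

print(f"30% stake: ~${activist_stake / 1e9:.1f}B")  # ~$11.4B, i.e. roughly the $12B above
print(f"Full buyout as a share of the bill: {twitter_market_cap / infrastructure_bill:.1%}")  # ~3.8%, roughly 4%
```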

If middle-class intellectuals really wanted it enough, theoretically they could crowdsource the cash.

I think intuitively, this seems like clearly a tempting deal.

I'd be curious if this would be a crazy proposition, or if this is just not happening due to coordination failures.

Admittedly, it might seem pretty weird to use charitable/foundation dollars on "buying lots of Twitter" instead of direct aid, but the path to impact is pretty clear.


Facebook Thread

One futarchy/prediction market/coordination idea I have is to find some local governments and see if we could help them out by incorporating some of the relevant techniques.

This could be neat if it could be done as a side project. Right now effective altruists/rationalists don't actually have many great examples of side projects, and historically, "the spare time of particularly enthusiastic members of a jurisdiction" has been a major factor in improving governments.

Berkeley and London seem like natural choices given the communities there. I imagine it could even be better if there were some government somewhere in the world that was just unusually amenable to both innovative techniques, and to external help with them.

Given that EAs/rationalists care so much about global coordination, getting concrete experience improving government systems could be interesting practice.

There's so much theoretical discussion of coordination and government mistakes on LessWrong, but very little discussion of practical experience putting these ideas into action.

(This clearly falls into the Institutional Decision Making camp)

Facebook Thread

On AGI (Artificial General Intelligence):

I have a bunch of friends/colleagues who are either trying to slow AGI down (by stopping arms races) or align it before it's made (and would much prefer it be slowed down).

Then I have several friends who are actively working to *speed up* AGI development. (Normally just regular AI, but often specifically AGI)[1]

Then there are several people who are apparently trying to align AGI but who are also effectively speeding it up; they claim that the trade-off is probably worth it (to highly varying degrees of plausibility, in my rough opinion).

In general, people seem surprisingly chill about this mixture? My impression is that people are highly incentivized to not upset people, and this has led to this strange situation where people are clearly pushing in opposite directions on arguably the most crucial problem today, but it's all really nonchalant.

[1] To be clear, I don't think I have any EA friends in this bucket. But some are clearly EA-adjacent.

More discussion here: https://www.facebook.com/ozzie.gooen/posts/10165732991305363

There seem to be several longtermist academics who plan to spend the next few years (at least) investigating the psychology of getting the public to care about existential risks.
 

This is nice, but I feel like what we really could use are marketers, not academics. Those are the people companies use for this sort of work. It's somewhat unusual that marketing isn't much of a respected academic field, but it's definitely a highly respected organizational one.

There are at least a few people in the community with marketing experience and an expressed desire to help out. The most recent example that comes to mind is this post.

If anyone reading this comment knows people who are interested in the intersection of longtermism and marketing, consider telling them about EA Funds! I can imagine the LTFF or EAIF being very interested in projects like this.

(That said, maybe one of the longtermist foundations should consider hiring a marketing consultant?)

Yep, agreed. Right now I think there are very few people doing active marketing work in longtermism (outside of a few orgs that have in-house people for it), but this seems very valuable to improve upon.

If you're happy to share, who are the longtermist academics you are thinking of? (Their work could be somewhat related to my work)

No prominent ones come to mind. There are some very junior folks I've recently seen discussing this, but I feel uncomfortable calling them out.

When discussing forecasting systems, sometimes I get asked,

“If we were to have much more powerful forecasting systems, what, specifically, would we use them for?”

The obvious answer is,

“We’d first use them to help us figure out what to use them for”

Or,

“Powerful forecasting systems would be used, at first, to figure out what to use powerful forecasting systems on”

For example,

  1. We make a list of 10,000 potential government forecasting projects.
  2. For each, we will have a later evaluation for “how valuable/successful was this project?”.
  3. We then open forecasting questions for each potential project. Like, “If we were to run forecasting project #8374, how successful would it be?”
  4. We take the top results and enact them.

Stated differently,

  1.  Forecasting is part of general-purpose collective reasoning.
  2. Prioritization of forecasting requires collective reasoning.
  3. So, forecasting can be used to prioritize forecasting.
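
A minimal sketch of that loop in code (the project names and forecast numbers below are invented for illustration; in practice they'd come from an actual forecasting platform):

```python
# Toy sketch: use forecasts *about* forecasting projects to decide which ones to run.
# All names and numbers here are invented for illustration.

forecasted_success = {
    "project_8374: municipal budget forecasting": 0.72,
    "project_0012: pandemic preparedness dashboard": 0.55,
    "project_4420: infrastructure cost overruns": 0.81,
    # ... up to 10,000 candidate projects, each with a forecasted success/value score
}

def top_projects(forecasts: dict[str, float], k: int) -> list[str]:
    """Return the k projects with the highest forecasted value."""
    return sorted(forecasts, key=forecasts.get, reverse=True)[:k]

# Enact the top candidates; later, resolve "how valuable/successful was this project?"
# and use those resolutions to score the original forecasts.
for name in top_projects(forecasted_success, k=2):
    print("Enact:", name)
```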

I think a lot of people find this meta and counterintuitive at first, but it seems pretty obvious to me.

All that said, I can’t be sure things will play out like this. In practice, the “best thing to use forecasting on” might be obvious enough such that we don’t need to do costly prioritization work first. For example, the community isn’t currently doing much of this meta stuff around Metaculus. I think this is a bit mistaken, but not incredibly so.

Facebook Thread

I’m sort of hoping that 15 years from now, a whole lot of common debates quickly get reduced to debates about prediction setups.

“So, I think that this plan will create a boom for the United States manufacturing sector.”

“But the prediction markets say it will actually lead to a net decrease. How do you square that?”

“Oh, well, I think that those specific questions don’t have enough predictions to be considered highly accurate.”

“Really? They have a robustness score of 2.5. Do you think there’s a mistake in the general robustness algorithm?”

—-

Perhaps 10 years later, people won’t make any grand statements that disagree with prediction setups.

(Note that this would require dramatically improved prediction setups! On that note, we could use more smart people working on this!)

Facebook Thread

I made a quick Manifold Market for estimating my counterfactual impact from 2023-2030. 

On one hand, this seems kind of uncomfortable; on the other, I'd really like to feel more comfortable with precise and public estimates of this sort of thing.

Feel free to bet!

Still need to make progress on the best resolution criteria. 

 

If someone thinks LTFF is net negative, but your work is net positive, should they answer in the negative ranges?

Yes. That said, this of course complicates things. 

Note that while we'll have some clarity in 2030, we'd presumably have less clarity than at the end of history (and even then things could be murky, I dunno)

For sure. This would just be the mean estimate, I assume. 

Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved, work directly with Nuno, who was funded)

ACX Grants were just announced: ~$1.5 million, from a few donors that included Vitalik.

https://astralcodexten.substack.com/p/acx-grants-results

Quick thoughts:

  • In comparison to the LTFF, I think the average grant is more generically exciting, but less effective altruist focused. (As expected)
  • Lots of tiny grants (<$10k); $150k is the largest one.
  • These rapid grant programs really seem great and I look forward to them being scaled up.
  • That said, the next big bottleneck (which is already a bottleneck) is funding for established groups. These rapid grants get things off the ground, but many will need long-standing support and scale.
  • Scott seems to have done a pretty strong job researching these groups, and also has had access to a good network of advisors. I guess it's no surprise; he seems really good at "doing a lot of reading and writing", and he has an established peer group now.
  • I'm really curious how/if these projects will be monitored. At some point, I think more personnel would be valuable.
  • This grant program is kind of a way to "scale up" Astral Codex Ten. Like, instead of hiring people directly, he can fund them this way.
  • I'm curious if he can scale up 10x or 1000x; we could really use more strong/trusted grantmakers. It's especially promising if he gets non-EA money. :)

On specific grants:

  • A few forecasters got grants, including $10k for Nuño Sempere Lopez Hidalgo for work on Metaforecast. $5k for Nathan Young to write forecasting questions.
  • $17.5k for 1DaySooner/Rethink Priorities to do surveys to advance human challenge trials.
  • $40k seed money to Spencer Greenberg to "produce rapid replications of high-impact social science papers". Seems neat, I'm curious how far $40k alone could go though.
  • A bunch of biosafety grants. I like this topic, seems tractable.
  • $40k for land value tax work.
  • $20k for a "Chaotic Evil" prediction market. This will be interesting to watch, hopefully won't cause net harm.
  • $50k for the Good Science Project, to "improve science funding in the US". I think science funding globally is really broken, so this warms my heart.
  •  Lots of other neat things, I suggest just reading directly.

You could use prediction setups to resolve specific cruxes on why prediction setups outputted certain values.

My guess is that this could be neat, but also pretty tricky. There are lots of "debate/argument" platforms out there, and they seem to have worked out a lot worse than people were hoping. But I'd love to be proven wrong.
 

P.S. I'd be keen on working on this, how do I get involved?

If "this" means the specific thing you're referring to, I don't think there's really a project for that yet, you'd have to do it yourself. If you're referring more to forecasting projects more generally, there are different forecasting jobs and stuff popping up. Metaculus has been doing some hiring. You could also do academic research in the space. Another option is getting an EA Funds grant and pursuing a specific project (though I realize this is tricky!)

The following things could both be true:

1) Humanity has a >80% chance of completely perishing in the next ~300 years.

2) The expected value of the future is incredibly, ridiculously, high!

The trick is that the expected value of a positive outcome could be just insanely great. Like, dramatically, incredibly, totally, better than basically anyone discusses or talks about.

Expanding to a great deal of the universe, dramatically improving our abilities to convert matter+energy to net well-being, researching strategies to expand out of the universe.

A 20%, or even a 0.002%, chance at a 10^20 outcome, is still really good.
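
To make that concrete, a toy expected-value calculation using the numbers above (the outcome is in arbitrary units of value):

```python
# Expected value of a tiny chance at an enormous outcome, using the toy numbers above.
p_success = 0.00002      # a 0.002% chance
outcome_value = 1e20     # a "10^20" outcome, in arbitrary units
print(f"Expected value: {p_success * outcome_value:.0e}")  # 2e+15, still enormous
```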

One key question is the expectation of long-term negative[1] vs. long-term positive outcomes. I think most people are pretty sure that in expectation things are positive, but this is less clear.

So, remember:

Just because the picture of X-risks might look grim in terms of percentages, you can still be really optimistic about the future. In fact, many of the people most concerned with X-risks are those *most* optimistic about the future.

I wrote about this a while ago, here:

https://www.lesswrong.com/.../critique-my-model-the-ev-of...

[1] Humanity lasts, but creates vast worlds of suffering. "S-risks"


https://www.facebook.com/ozzie.gooen/posts/10165734005520363

Opinions on charging for professional time?

(Particularly in the nonprofit/EA sector)

I've been getting more requests recently to have calls/conversations to give advice, review documents, or be part of extended sessions on things. Most of these have been from EAs.

I find a lot of this work fairly draining. There can be surprisingly high fixed costs to having a meeting. It often takes some preparation, some arrangement (and occasional re-arrangement), and a fair bit of mix-up and change throughout the day.

My main work requires a lot of focus, so the context shifts make other tasks particularly costly.

Most professional coaches and similar charge at least $100-200 per hour for meetings. I used to find this high, but I think I'm understanding the cost more now. A 1-hour meeting at a planned time probably costs 2-3x as much time as a 1-hour task that can be done "whenever", for example, and even the latter is significant.

Another big challenge is that I have no idea how to prioritize some of these requests. I'm sure I'm providing vastly different amounts of value in different cases, and I often can't tell.

The regular market solution is to charge for time. But in EA/nonprofits, it's often expected that a lot of this is done for free. My guess is that this is a big mistake. One issue is that people are "friends", but they are also exactly professional colleagues. It's a tricky line.

One minor downside of charging is that it can be annoying administratively. Sometimes it's tricky to get permission to make payments, so a $100 expense takes $400 of effort.

Note that I do expect that me helping the right people, in the right situations, can be very valuable and definitely worth my time. But I think on the margin, I really should scale back my work here, and I'm not sure exactly how to draw the line.

[All this isn't to say that you shouldn't still reach out! I think that often, the ones who are the most reluctant to ask for help/advice, represent the cases of the highest potential value. (The people who quickly/boldly ask for help are often overconfident). Please do feel free to ask, though it's appreciated if you give me an easy way out, and it's especially appreciated if you offer a donation in exchange, especially if you're working in an organization that can afford it.]

https://www.facebook.com/ozzie.gooen/posts/10165732727415363
