All of Charles He's Comments + Replies

Sort forum posts by: Occlumency (Old & Upvoted)

That’s a really good point. There are many consequent issues beyond the initial update, including the iterative issue of multiple induced “rounds of updating” mentioned in your comment.

After some thought, I think I am confident the issue you mentioned is small.

  • First, note that there is an end point to this process, i.e. a “fixed point” at which the rounds stop.
  • Based on some guesses, the second and subsequent rounds of promotion get much, much smaller in the number of people affected (as opposed to a process that explodes). This is because the karma and vote power s
... (read more)
2Linch2h
Charles is right. The backend engineering for this won't be trivial, but it isn't hard either. The algorithmic difficulty seems roughly on the level of what I'd expect a typical software engineering interview at a Silicon Valley tech company to look like, maybe a little easier, though of course the practical implementation might be difficult if you want to account for all of the edge cases in the actual code and database. The computational cost is likely trivial in comparison: it's mathematically equivalent to every newly promoted user un-upvoting and then re-upvoting everything again. On net you're looking at an at-most 2x increase in the load on the upvote system, which I expect to be a tiny increase in total computational costs, assuming that the codebase has an okay design.
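The "un-upvote and re-upvote" equivalence can be sketched in a few lines. A minimal Python sketch, assuming a toy schema (a `votes` list of `(voter, post, direction)` rows and a `post_score` map; the names are illustrative, not the Forum's actual code):

```python
def apply_promotion(voter_id, old_power, new_power, votes, post_score):
    """Re-weight all of one user's past votes after their vote power changes.

    Mathematically equivalent to the user un-upvoting everything and
    re-upvoting with the new power: each vote shifts the post's score
    by direction * (new_power - old_power).
    """
    delta = new_power - old_power
    for v_id, post_id, direction in votes:  # direction: +1 upvote, -1 downvote
        if v_id == voter_id:
            post_score[post_id] += direction * delta
    return post_score

# Toy data: alice gets promoted from vote power 1 to 2.
votes = [("alice", "p1", 1), ("bob", "p1", 1), ("alice", "p2", -1)]
scores = {"p1": 2, "p2": 0}
apply_promotion("alice", 1, 2, votes, scores)
print(scores)  # alice's votes now count double: p1 gains 1, p2 loses 1
```

This is one linear scan over the promoted user's votes per promotion, which matches the at-most-2x load estimate.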
Organizational alignment

This is insightful!

Personally, I would consider appending “for onlookers”, in this particular instance, as the OP is probably extremely versed in the issues and has a strategy that considers these tradeoffs.

2Khorton4h
Yes for sure, it was meant to be a "yes and" to the post, not a criticism of Caroline!
Announcing the Future Fund

Woah.

If you click the name of the UCSC person, this person has two comments, months apart. So presumably their question wasn’t answered.

The first comment has a -5 downvote.

It’s hard to rationalize this, and it’s a bad look. Maybe people didn’t like UC or “normal” non EA people or institutions applying, but it seems unlikely this would be the worst pool of applicants.

Announcing the Future Fund

(It’s sort of bizarre this was downvoted).

A number of institutions needed entity information. It seemed like this was a blocker for applications.

It’s not really the fault of FTX, but not immediately having this information and having a lot of people ask might have contributed to the sentiment that led to the recent posts clarifying FTX’s work.

Rational predictions often update predictably*

The two quotes seem to be explicitly and specifically about the general process of “updating” (although, with some stretching, the first quote below could also be rationalized by saying the forecasters are obtuse and incredibly bad; but if you think people are allowed to have different beliefs, then the indicated trend should occur).

To be a slightly better Bayesian is to spend your entire life watching others slowly update in excruciatingly predictable directions that you jumped ahead of 6 years earlier so that your remaining life could be a random epistemic

... (read more)
Sort forum posts by: Occlumency (Old & Upvoted)

Someone I know has worked with databases of varying sizes, sometimes in a low-level, mechanical sense. From my understanding, to update all of a person’s votes, the database operation is pretty simple: scan the voting table for that ID, do a little arithmetic for each upvote, and update another table or two.

You would only need to do the above operation for each “newly promoted” user, which is maybe a few dozen users a day at worst.

Many personal projects involve heavier operations. I’m not sure, but a Google search might be 100x more complicated.

7Arepo5h
But in the process you might also promote other users - so you'd have to check for each recipient of strong upvotes if that was so, and then repeat the process for each promoted user, and so on.
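The cascade described here can be handled with a standard worklist loop that runs until the "fixed point": re-check each user whose karma changed, and stop when no stored vote power is stale. A hedged sketch, where the threshold, power function, and data layout are assumptions for illustration, not the Forum's real rules:

```python
from collections import deque

def vote_power(karma):
    # Assumed toy rule: users at or above 1000 karma get double-weight votes.
    return 2 if karma >= 1000 else 1

def settle_promotions(karma, stored_power, votes):
    """Re-weight the votes of users whose stored power is stale, repeating
    until no further user crosses the threshold (the fixed point)."""
    queue = deque(stored_power)             # initially re-check everyone
    while queue:
        voter = queue.popleft()
        correct = vote_power(karma[voter])
        if correct == stored_power[voter]:
            continue
        delta = correct - stored_power[voter]
        stored_power[voter] = correct
        for v, author, direction in votes:  # (voter, post author, +1/-1)
            if v == voter:
                karma[author] += direction * delta
                queue.append(author)        # author's power may now be stale
    return karma, stored_power

# Toy cascade: alice's promotion pushes bob over the threshold too.
karma = {"alice": 1000, "bob": 999}
stored_power = {"alice": 1, "bob": 1}
votes = [("alice", "bob", 1)]               # alice upvoted a post by bob
settle_promotions(karma, stored_power, votes)
print(karma, stored_power)
```

Each later round only touches users whose karma actually changed, which is why the second and subsequent rounds should affect far fewer people rather than exploding.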
What share of British adults are vegetarian, vegan, or flexitarian?

Someone I know has spoken to a dozen senior researchers and others in EA animal welfare.

The resulting understanding is that the actual number of vegans or vegetarians is consistent with the numbers in the above comment (1-2% in the USA, UK and Canada).

Another important fact seems to be that these numbers have not changed despite decades of dietary change information and campaigning at the individual, consumer level.

This seems to be relevant when making plans in animal welfare about which interventions to pursue.

Norms and features for the Forum

I think all time costs stated are time costs to the author of the post.

From a product and ML implementation perspective and for the NLP component of the problem, I think in this case, it might be easy to build an 80% good solution.

It’s less that the system will find and understand all arguments, and more that the author might be asked questions, and it’s relatively easy to see if the answers cover the same space as the post content.

My guess is that manipulation won’t make sense or even enter into people’s minds (with other design choices not related to the NLP), so a useful system that, say, provides guardrails is much easier to implement.

3Robi Rahman1d
Ah, you're right, I misinterpreted it since the epistemic status suggestion said time per post and that one didn't.
Rational predictions often update predictably*

I’m not sure, but my guess at the OP’s argument is that:

Let’s say you are an unbiased forecaster. You get information as time passes. When you start with a 60% prediction that event X will happen, on average, the evidence you will receive will cause you to correctly revise your prediction towards 100%.

Scott Alexander noted curiosity about this behaviour; Eliezer Yudkowsky has confidently asserted it is an indicator of sub-par Bayesian updating.
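Both properties can coexist, and a quick simulation illustrates the point. In this hedged sketch (the signal model and parameters are invented for illustration), unconditionally the expected final credence equals the prior (a martingale, so updates are not predictable overall), but conditional on the event happening, the credence drifts predictably toward 1:

```python
import random

def run_forecast(prior=0.6, accuracy=0.6, steps=20):
    """One forecaster: a hidden outcome, noisy signals, Bayes updates."""
    outcome = random.random() < prior        # event happens with prob = prior
    p = prior                                # current credence in the event
    for _ in range(steps):
        signal = outcome if random.random() < accuracy else not outcome
        like_yes = accuracy if signal else 1 - accuracy        # P(signal | event)
        like_no = 1 - accuracy if signal else accuracy         # P(signal | no event)
        p = p * like_yes / (p * like_yes + (1 - p) * like_no)  # Bayes' rule
    return outcome, p

random.seed(0)
results = [run_forecast() for _ in range(5000)]
avg_all = sum(p for _, p in results) / len(results)
happened = [p for outcome, p in results if outcome]
avg_happened = sum(happened) / len(happened)
print(round(avg_all, 2))       # close to the 0.6 prior: no predictable drift overall
print(round(avg_happened, 2))  # well above 0.6: predictable drift, given the event
```

So a forecaster who starts at 60% and ends up being right will, on average, have revised upward along the way, without that implying any failure of Bayesian updating.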

<Eyes emoji>

Norms and features for the Forum

(This is a marginal comment that doesn’t need a reply, it is running into gadfly territory).

Obviously, writing a good summary is costly, and the proposed philosophical/axiom-like writing is harder still.

One guess at what you’re getting at is that (somehow) making the rewarding of such summaries a strong norm would improve the forum: it would filter out bad thinking, and the very act of such writing would improve discourse and even thought, a la the point of philosophy in the first place.

I guess one issue is that I am skeptical this could be i... (read more)

2Gavin1d
It's very easy to write code that relaxes these constraints for new users, which should serve the friction reduction goal, if that's a goal we should have. I have no illusions about the easiness of norm setting, hence code first. This post is a nudge in the direction I want; this is all I wish to do at the mo. Good summaries are very hard, but a bad summary is better than no summary. These small changes do not need to solve the whole problem to be worthwhile.
Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

I really like some of the references to vaporwave (“nostalgia and longing for a past that never existed”).

My first guess is that it will be deceptively hard for EA to find and get an artist/auteur/vision and execution.

I think it’s deceptively hard to be good at art, sort of like how it’s hard to be good at meta.

My concern is that one of the failure modes will occur:

  • due to issues with talent strangling of an “intellectual poverty trap” and talent bouncing off, supply is low and the actual best visions for how aesthetics could be used never even gets p
... (read more)
8Yitz1d
I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.
Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

I didn’t read your post, but I looked at the pictures, and based on that, my guess is that your ideas and perspectives are great.

I think one of the comments missing on “Where’s Today’s Beethoven” was how much of an epic asshole the actual Beethoven was. He was such a pain to deal with and malignantly belligerent down to the smallest things. I suspect this might touch on an answer to what that post was asking.

6Charles He1d
I really like some of the references to vaporwave (“nostalgia and longing for a past that never existed”). My first guess is that it will be deceptively hard for EA to find and get an artist/auteur/vision and execution. I think it’s deceptively hard to be good at art, sort of like how it’s hard to be good at meta. My concern is that one of the failure modes will occur:
  • due to issues with talent strangling of an “intellectual poverty trap” and talent bouncing off, supply is low and the actual best visions for how aesthetics could be used never even get presented to EA.
  • compromise occurs and what is achieved is piecemeal and mediocre and people can’t see this
  • the actual vision gets presented but no one can really see or understand it, because they rightfully fear the person is a crank
With great uncertainty, my guess is that one solution is to give (multiple) exploratory commissions to potential artists/designers/architects of this strategy (who will be of very high ability) AND bake in high-quality, high-insight senior EA supervision in this process. Then present the solution (with the senior EA fronting/socializing the writing and presenting to EA, not the artist) and then have everyone grit their teeth and do it.
Norms and features for the Forum

As mentioned in the post, it seems possible that getting a lot of people to do a lot of work (and, by the way, fighting/undoing a lot of unseen/unspoken optimizations that exist) could be impractical.

I think solving this requires “fluency” in instigation and this is one of two limiting factors. Many solutions don’t make it past this step.

The other limiting factor is that the seating of norms and their consequent effects seems really hard.

For example, some of the proposed features would produce artifacts that would bounce off talented newcomers.

I know someone who worked ... (read more)

8Gavin1d
I might have been unclear: my shorter UI suggestions (epistemic status and summary) could be required by the server. Behaviour change solved.
Sort forum posts by: Occlumency (Old & Upvoted)

Your idea is still viable and useful!

There’s also valuable discussion and ideas that came up. IMO you deserve at least 80% of the credit for these, as they arose from your post.

Sort forum posts by: Occlumency (Old & Upvoted)

Hmm, maybe we are talking about different things, but I think the /allPosts page already breaks down posts by year.

So that seems to mitigate a lot of the problem I think you are writing about (less so if within-year inflation is high)?

I also think your post is really thoughtful, deep and helpful.

7Emrik1d
Oh. It does mitigate most of the problem as far as I can tell. Good point Oo
EA and the current funding situation

I think it’s important that this actually involves staking social capital, because otherwise I would find such revision of very negative behaviour based on what is clearly external friendship (as well as the mass upvoting) more problematic than anything else that has occurred.

Imagine if everyone did this for their friends/enemies on the forum.

Sort forum posts by: Occlumency (Old & Upvoted)

A quick guess is that a good way to implement this (once a definition for “old” posts is given) is to track instances of people upvoting an old post (or just karma accumulation of old posts).

Then some score based on this (which can itself decay) can be blended into the regular “hotness” mix, so that people can see some oldie goldies.

This might be better than “naively” scaling up posts by compensating for traffic, because:

  • in this idea, older posts will tend to be promoted by relevance (e.g some day EA can solve “really hard to find an EA job” and it doesn

... (read more)
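The decaying "oldie goldies" blend sketched above could be implemented roughly like this (the half-life, weight, and in-memory tracker are all illustrative assumptions, not a real Forum mechanism):

```python
HALF_LIFE_DAYS = 7.0  # assumed half-life for the resurgence bonus

def decayed(score, age_days):
    """Exponentially decay a score with the assumed half-life."""
    return score * 0.5 ** (age_days / HALF_LIFE_DAYS)

class OldPostTracker:
    """Accumulate a decaying 'resurgence' score each time an old post is
    upvoted, then blend it into the regular hotness ranking."""

    def __init__(self):
        self.scores = {}  # post_id -> (resurgence score, day last updated)

    def record_upvote(self, post_id, day):
        score, last = self.scores.get(post_id, (0.0, day))
        self.scores[post_id] = (decayed(score, day - last) + 1.0, day)

    def blended(self, post_id, base_hotness, day, weight=0.3):
        score, last = self.scores.get(post_id, (0.0, day))
        return base_hotness + weight * decayed(score, day - last)

tracker = OldPostTracker()
tracker.record_upvote("old-classic", day=0)
tracker.record_upvote("old-classic", day=0)
print(tracker.blended("old-classic", 1.0, day=0))  # 1.6: base 1.0 + 0.3 * 2
print(tracker.blended("old-classic", 1.0, day=7))  # 1.3: bonus halved after 7 days
```

Because the bonus decays on its own, a resurgent old post surfaces by relevance for a while and then falls back out of the mix without any manual cleanup.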
7Emrik2d
Oh, this is wonderfwl. But to be clear, Occlumency wouldn't be the front page. It would be one of several ways to sort posts when you go to /all posts [https://forum.effectivealtruism.org/allPosts]. Oldie goldies is a great idea for the frontpage, though!
Results from the First Decade Review

Uh, I spent 45 seconds looking at this, but it looks like the final determinative score was created by doubling the >1000-karma weighted-vote score and adding it to the <1000-karma weighted-vote score.

The above thought might be noise and not what you’re talking about (but this is because the voting formula is admittedly convoluted and not super clearly documented, it reads like quadratic voting passed through a few different hands without a clear owner).

7Ben Pace3d
Took me a while to find where you got your 2x+y from, I see it's visible if you highlight the cells in the sheet. Here's a sheet with the score as sorted by the top 1k people, which is what I was interested in seeing: https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing [https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing]
Results from the First Decade Review

Can you write a bit more about what you mean? What voting results? Why would it be obvious that you could back this out?

I don’t remember the details, but I remember thinking the quadratic voting formula seemed sort of “underdetermined” and left room for “post-processing”. I read this as the “designer” not being confident and leaving room to get well-behaved results (as opposed to schemes of outright manipulation).

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

This specific story doesn’t seem to describe the greatest model of EA donors or political influence (it doesn’t seem like EA donors are that pliable or comfortable with politics, and the idea probably boils down to lobbying with extra steps or something).

But the thought seems true?

It seems worth imagining that the minor media cycle around the recent candidate and other spending could create useful interest. For example, it could get sober attention and policy wonks talking to EAs.

EA and the current funding situation

I think the person involved is either having a specific negative personal incident, or revealing latent personality traits that suggest the situation is much less promising and below a reasonable bar for skilled intervention in a conversation.

With a willingness to be wrong and ignore norms, I think I could elaborate or make informative comments (maybe relevant to trust, scaling and dilution, which seem to be major topics right now?). But it feels distasteful and inhumane to do this to one individual who is not an EA.

(I think EAs can and should endure much more, directly and publicly, and this seems like it would address would-be problems with trust and scaling).

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

This post seems to give very low consideration to models of good management or leadership where good values, culture and people flow outward from a strong center to the movement.

Even if you were entirely pessimistic about newer people, like myself, there’s a large pool of longtime EAs inside or known by these established EA orgs. These people have been close to or deployed large amounts of money for many years.

It seems plausible mundane and normal leadership and hiring by existing institutions could scale up orgs many times with modest dilution in values and alignment, and very little outright “grift”.

EA and the current funding situation

The “misrepresentation” about a search for funding was related to money for a personal project or org to develop the intervention.

The second paragraph was for funding for the intervention itself.

They are really different things. Like the difference between an org researching food aid, versus buying billions of dollars of actual food.

I doubt the person believes he can literally stop hurricanes without government funding.

Unfortunately, I think you are muddying the waters in your intervention. With my read of the relevant person, this might not serve them well.

EA and the current funding situation

I don’t feel good about this situation, but I think your judgement is really different than most reads of what happened:

  • It’s clear to me that there’s someone who isn’t communicating or creating beliefs in a way that would be workable. Chris Leong’s comments seem objectively correct (if not likely to be useful).
  • (While committing this sin with this comment itself) It’s clearly better to walk away and leave them alone than risk stirring up another round of issues.
4Chris Leong3d
My comment very well may not be useful. I think there's value in experimenting with different ways of engaging with people. I think it is possible to have these kind of conversations but I don't think that I've quite managed to figure out how to do that yet.
EA will likely get more attention soon

I’m assuming and hoping Julia Wise or the respective team here has strong and adequate staffing.

I’ve got this worrying mental picture of Wise carrying both the community health and international public relations team as a one woman show, like with a headset, three keyboards and seven monitors typing furiously.

Honestly, I also low key want there to be strong people working for Wise, so we can refer to the resulting apparatus with awesome names:

  • Department of The Wise
  • The Wise Empire
  • The Era of Wise EA
  • Wisely, EA succeeds

It's definitely a bigger job than I can do on my own! As I said, staff at several organizations plus a communications advising firm are working on this.

We're also keeping an eye out for possible hires who are familiar with both media/communications work and EA. If that sounds like you, feel free to let me know (julia.wise@centreforeffectivealtruism.org) and I can let you know if we have a job posting.

Bad Omens in Current Community Building

In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and groups that focus around a project or problem. These are much more manageable commitments, and can exemplify the EA lens of looking at a project without it being a personal identity.

The absolute strongest answer to most critiques or problems that has been mentioned recently is—strong object level work.

If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause... (read more)

EA and the current funding situation

You’ve responded with hostility and intense frustration to Linch and Khorton, who are goofy, but well meaning people. That’s really bad and you should stop writing like this. (EDIT: also Jeff Kaufman).

(Note that I suspect there something unseemly about my personal conduct in replying to you. To myself, in my head, I think I am doing it because it provides useful information to onlookers, because this would be mansplaining in other circumstances. I need to think about this.)

The brutal truth is that “specialist” access is sort of like gold. I and most peopl... (read more)

-2Anthony Repetto4d
It's also telling that, though I pointed-out how you sought to use "repeated posting" as a proxy for my "powerlessness and vulnerability...lack of effectiveness", you made no mention of it, afterwards. Judging someone on such shallow evidence is the opposite of skeptical inquiry; it doesn't bode well for your own effectiveness. Am I being hostile when I say that to you, while you are NOT hostile, when you say it to me, first?
-2Anthony Repetto4d
When I am repeatedly misrepresented, and no one who does so responds with an apology, I am supposed to adhere to your standards of dialogue? Why are my standards not respected, first? If specialist access is gold, then what do I need to pay them? I'll figure funding separately - who, and how much? Exploratory work is great - yet, as Jeff was saying in this exact thread's original post - EA needs to be willing to take the leap on risky new ideas. That was, also, the part of his post that I quoted, in my original response. Do you see how they are related to what we are talking about? Perhaps EA should take a risk, and connect me to a specialist, and if EA thinks that specialist should be paid, I'll work that out, next.
EA and the current funding situation

(Here is a brutal answer but maybe helpful at this point.)

I haven’t read every comment of yours but my sense is that you are frustrated that no one has engaged with your idea.

One issue with this sentiment is that there is little or nothing in EA that is like “a fully general engine” for taking someone’s paragraphs of thoughts and executing on them. (This isn’t strictly true, but it’s complicated/political to explain.)

EA provides a lot of resources, but this takes some leg work and demonstration to get going. This price of entry is a good thing. There is a lot of horsepower... (read more)

4Anthony Repetto4d
I am frustrated that I am repeatedly misrepresented, which is what I said in my responses. I am not frustrated by a lack of "people doing leg work for me". I am specifically asking if anyone has connections toward the relevant specialists, so that I can talk to those specialists. I'm not sure why that would be "something I should do on my own" - I'm literally reaching out to gather specialists, which is the first leg work, obviously. Re-inventing the wheel to impress an audience by "going it alone" is actually counter-productive. I don't need a "fully general engine" - you are misrepresenting my request, as others have. I am asking if anyone knows someone with the relevant background. I am NOT asking for funding, nor a general protocol that addresses every post. Those are strawmen. No one has apologized for these strawmen; they just ghost the conversation. And, if you are using the fact that I stood-up to repeated mis-representations as "telegraph a sense of powerlessness and sometimes vulnerability", and as a result, I should not be taken seriously, then you are squashing the only means of recourse available to me. When my request is repeatedly mis-represented, and I respond to each of them, I am necessarily "repeatedly posting" - I'm not sure why that noisy proxy for "lack of effectiveness" is a better signal for you than actually reading what I wrote.
Why Helping the Flynn Campaign is especially useful right now

For anyone else reading this, including full-on partisan and political policy people: I think EA and everyone would welcome detailed, policy-style discussion on pandemic preparation.

You can do this even if it is (highly) unfavorable to the candidate. That is the nature of EA.

One major opportunity with this press and money is that someone could use attention to create a virtuous cycle of actual policy discussion (as opposed to too much discussion about owls or gotchas).

A real convincing thread here about improving policy in pandemics that satisfies the EA would ... (read more)

Why Helping the Flynn Campaign is especially useful right now

Nothing you wrote was bad. In fact it was fantastic.

I think you could use your real name and that seems very low cost.

The one issue on substance, which I wish you could have delved into more, is engaging with the long, high-effort comments that were made about pandemic prevention, which isn’t the same as covid response. Especially not just saying that an incremental package from another candidate was comparable.

There is such a world of difference between pandemic prevention and another covid response package—that difference reflects how you could influen... (read more)

8Charles He5d
For anyone else reading this, including full on partisan and political policy people—I think EA and everyone would welcome detailed, policy like discussion on pandemic preparation. You can do this even if it (highly) unfavorable to the candidate. That is the nature of EA. One major opportunity with this press and money is that someone could use attention to create a virtuous cycle of actual policy discussion (as opposed to too much discussion about owls or gotchas). A real convincing thread here about improving policy in pandemics that satisfies the EA would very possibly unlock principled political funding that protects Americans. If you really had the knowledge, many people would navigate you through the silly EA terminology and habits.
EA and the current funding situation

(Disclaimer: I don’t know if my support or comment has value, no one likes me and I have poor hygiene.)

People I know interacted with this person in real life in a professional context. From these interactions, what the commenter is saying seems accurate.

In this particular instance, it seems like a valuable and talented leader could have been funded to do good, well aligned EA work that they were deeply passionate about.

9Ivy_Mazzola5d
Thanks for your kind words. Most people have been surprised, which has been affirming (much needed, because rejections are the opposite). I got some in-person feedback suggesting EAIF saw risks to doing Austin CB too soon or with the wrong person (ouch). I'm sure lots of people submit actually-risky projects who simply can't see them as risky (or themselves as risky agents), so take my confusion with a grain of salt. The fund managers are people I genuinely respect. I'm just concerned that it was bureaucrat's curse, which is also modus operandi for non-uni CB all around. EA has some bottlenecks that early- or mid-career professionals are better suited to fill than students. So I don't want non-uni groups to be unhelpfully neglected.
Why Helping the Flynn Campaign is especially useful right now

Thinking behind my comment above or why you should care (something something minimal trust investigations)—I’m not saying I’m Correct, but this is how I sort of think so if this is terrible someone should stop me.

(I’m on mobile so formatting is weak.)

The comment pattern satisfies noticeable patterns for me that suggest a lot of practice or intent.

— Something noticeable is the how they repeat certain critiques in a way that is superficial. They do this in such a way I doubt their writing could be the full story behind this person’s views, or another plausib... (read more)

9_pk5d
Ouch, was I really that bad? I’m gonna retract the parent comment and didn’t mean to raise questions about my motivations. (I think you’re suggesting I might be a McLeod-Skinner secret agent? I’m flattered, I think). For what it’s worth, I have no connection to her campaign, have never met her, and am actually not even a donor in her current race (I donated to her first campaign a few years ago). I was simply trying to provide an alternative, since I think you all are mis-spending kind of a lot of money. For the amount being spent in OR-6, you could have had a significant influence on a bunch of those.
Why Helping the Flynn Campaign is especially useful right now

I know some Oregonians too and I think they find the use of money in all politics deplorable and would be fine or pretty happy with the candidate mentioned in the original post.

Do you mind telling us who you are and what your relationship with this OR-5 candidate is?

I’m not sure every person here understands how many forum accounts/time/erudite writing a full $2,900 political donation could produce.

2Charles He6d
Thinking behind my comment above, or why you should care (something something minimal trust investigations): I’m not saying I’m Correct, but this is how I sort of think, so if this is terrible someone should stop me. (I’m on mobile so formatting is weak.)
The comment pattern satisfies noticeable patterns for me that suggest a lot of practice or intent.
— Something noticeable is how they repeat certain critiques in a way that is superficial. They do this in such a way that I doubt their writing could be the full story behind this person’s views; another plausible explanation is that their view is shallow and they found this content to fill out the comment. Either of these is less consistent with how I expect most concerned people to pop onto the forum to talk about OR-6. It suggests cultivation.
— Broadly, opinions and views repeat patterns and ideas that come from elsewhere.
— Their messaging across comments is pretty tight and goes through a progression I find deliberate. It opens with (I was born in Oregon and can see OR-6, have a favourable view of you guys) but then moves into critiques of funding later on, and then moves into what I would consider outright rhetoric and a leading point (is this the best way to spend your money?), which then seats the position for an ask.
— Note that the later content gives a funny characterization of EA (the CEA reference is not the biggest issue); this isn’t deceptive, but it is consistent with learning enough about EA to make this comment (e.g. no real prior interest).
I don’t think this is evil or anything. The person is just trying to support someone they care about. The level of sophistication here is at the level of any experienced campaigner. My friends use a similar level of sophistication for buying a used car, for example.
Other points:
— EAs focus on messaging and branding and have historic sensitivities. I think it would benefit EAs to know how sensitivities are being broadcast and their effect.
— Many political races h
EA and the current funding situation

I'm not sure, but in situations where this sort of dynamic or resource gradient happens, this isn't resolved by the high gradient stopping (people don't stop funding or founding institutions), because the original money is driven by underlying forces that are really strong. My guess is that a lot of this would be counterproductive.

Typically in those situations, I think the best path is moderation and focusing on development and culture in other cause areas.

EA and the current funding situation

Next-level Next-level

Now distinct from the above comment, there’s a whole other reference class of spending where:

  1. People can get an amount of cash that is a large fraction of all spending in an existing EA cause area in one raise.
  2. The internal environment is largely "deep tech" or not related to customers or operations

So I'm thinking about valuations in the 2010- tech sector for trendy companies. 

I'm not sure, but my model of organizations that can raise 8 figures per person in a series B, for spending that is pretty much purely CapEx (as opposed to ca... (read more)

2Charles He6d
I'm not sure, but in situations where this sort of dynamic or resource gradient happens, this isn't resolved by the high gradient stopping (people don't stop funding or founding institutions), because the original money is driven by underlying forces that is really strong. My guess is that a lot of this would be counter productive. Typically in those situations, I think the best path is moderation and focusing on development and culture in other cause areas.
EA and the current funding situation

To onlookers: There's often a low amount of resolution and expertise in some comments and concerns on LW and the EA Forum, and this creates "bycatch" and reduces clarity. With uncertainty, I'll lay out one story that seems to match the concerns in the parent comment.
 

Strong Spending

I'm not entirely sure this is correct, but for large EA spending, I usually think of the following:

  • 30%-70% growth in head count in established institutions, sustained for multiple years
  • Near six figure salaries for junior talent, and well over six figure salaries for ver
... (read more)
2Charles He6d
Next-level

Now distinct from the above comment, there's a whole other reference class of spending where:

1. People can get an amount of cash that is a large fraction of all spending in an existing EA cause area in one raise.
2. The internal environment is largely "deep tech" or not related to customers or operations.

So I'm thinking about valuations in the 2010- tech sector for trendy companies. I'm not sure, but my model of organizations that can raise 8 figures per person in a series B, for spending that is pretty much purely CapEx (as opposed to capital to support operations or lower-margin activity, e.g. inventory, logistics), has internal activity that is really, really different than the "high" spending in the above comment.

There are issues here that are hard to appreciate. Facebook's raises were really hot and oversubscribed. But building the company was a drama fest for the founders, and there was also a nuclear-reactor-hot business with viral growth. So that's epic fires to put out every week, customers and partners, actual scaling issues of hockey-stick growth (not this meta business-advice discussion on the forum). It's a mess. So the CEO and even junior people have to deal.

But once you're just raising that amount in deep-tech mode, my guess is that how people think, feel, and behave inside a company with valuations in the 8-9 figures per person, and the attractiveness, incentives, and beliefs in that environment, are really different than even the hottest startups, even above those where junior people exit with 7 figures of income.

To be concrete, the issues for the rest of EA might be that:

  • Even strong EA CEOs won't be able to hire much EA talent like software developers (but they should be worried about hiring pretty much anyone, really). If they hire, they won't be able to keep them at comfortable, above-EA salaries, without worrying about attrition.
  • Every person who can convincingly cl
EA and the current funding situation

I was going to write an elaborate rebuttal of the parent comment. 

In that rebuttal, I was going to say there's a striking lack of confidence. The concerns seem like a pretty broad argument against building any business or non-profit organization with a virtuous culture. There are many counterexamples against this argument, and most have the additional burden of balancing that growth while tackling existential issues like funding.

It's also curious that corruption and unwieldy growth has to set in exactly now, versus, say, with the $8B in 2019.

 

I don'

... (read more)
EA and the current funding situation

I don't know how you can be confident that an imagined fake charity that disrupts medical services or the food supply would ever be large enough to equal the scale of harms caused by some of the most powerful global corporations.

But we're talking about the relative harm of a bad new charity compared to a harmful business. 

I think you agree it doesn't make sense to compare the effect of our new charity versus, literally all of capitalism or a major global corporation. 

 

But equally that billionaire could found an institution focused on turning a profi

... (read more)
EA and the current funding situation

Well, I've actually sort of slipped into another argument about scale and relative harm, and got you to talk about that. 

But that doesn't respond to your original point, that businesses can do huge harm and EA needs to account for that. So that's unfair to you.

Trying to answer your point, and using your view about explicitly weighing and balancing harms, there's another point about "counterfactual harm" that responds to a lot of your concerns. 

In the case of a crypto currency company:

 

If you make a new crypto company, and become success... (read more)

EA and the current funding situation

The point is that reasoning that non-profits have more potential to cause harm than for-profits seems to ignore that many for-profit enterprises operate at much larger scale than any non-profits and do tremendous amounts of harm

You're absolutely right. For-profits absolutely do harm. In general, "capitalism has really huge harms"; almost every EA or reader here would agree (note that I'm not necessarily an EA, nor do I represent EA thought).

The scale is the point here; you're also exactly right. For many activities, it takes many, many millions to create a situation... (read more)

-5guyi6d
2Charles He6d
Well, I've actually sort of slipped into another argument about scale and relative harm, and got you to talk about that. But that doesn't respond to your original point, that businesses can do huge harm and EA needs to account for that. So that's unfair to you.

Trying to answer your point, and using your view about explicitly weighing and balancing harms, there's another point about "counterfactual harm" that responds to a lot of your concerns. In the case of a cryptocurrency company:

If you make a new crypto company, and become successful by operating a new exchange, even if you become the world's biggest exchange, it's unclear how much that actually caused any more mining (e.g. by increasing Bitcoin's price). There are dozens of exchanges already, besides the one you created. So it's not true that you can assign or attribute 20% or 50% of emissions to the money, just from association. In reality, I think it's reasonable that the effect is small, so even if the top #1 trading platform wasn't founded, almost the same amount of mining would occur. (If you track cryptocurrency prices, it seems plausible that no one cares that much about the quality of exchanges.) So the money that would have gone to your platform and been donated to charity would buy yachts for someone else instead.

(By the way, as part of your cryptocurrency company, if you make and promote a new cryptocurrency that doesn't mine, and "stakes" instead, then your company might accelerate the transition to "staking", which doesn't produce greenhouse gasses like mining. Your contribution to greenhouse gasses is negative despite being a crypto company. But I share the sentiment that you can totally roll your eyes at this idea, so let's just leave this point here.)

You mentioned other concerns about other companies. I think it's too difficult for me to respond, for reasons that aren't related to the merit of the concern.
EA and the current funding situation

I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. 

There's several extremely bad outcomes of bad charities:

  • One example is in the footnotes, where people actually died[1].
  • Another famous example is a pumping system for developing countries that consumed donor money and actively made it more difficult to get water.

It's not clear anything would have stopped these people besides their own virtue or self-awareness, or some kind of... (read more)

1guyi7d
My point wasn't that charities are incapable of doing harm. There are many examples of charities doing harm, as you point out. The point is that reasoning that non-profits have more potential to cause harm than for-profits seems to ignore that many for-profit enterprises operate at much larger scale than any non-profits and do tremendous amounts of harm. Yes, most successful businesses are primarily focused on making profit rather than doing good or harm. But this doesn't mean they aren't willing to do harm in the pursuit of profit! If someone dedicates an amount of money to a business that then grows large enough to do lots of harm, even if as a side-effect, it's quite conceivable they could accomplish more total harm than someone simply dedicating that same money to a directly harmful (but not profitable) venture.
EA Forum feature suggestion thread

See this comment.

 

This pattern of broken link, where the intended link is appended to another, distinct URL, has appeared in many comments or posts. 

This defect seems common enough to justify investigating the root cause (or even a very crude automatic fix), especially since the pattern in the defect is so simple.
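As a rough illustration only (this is not the forum's actual code, and the function name and example URLs are hypothetical), a crude detector for this defect could just look for a second URL scheme glued onto the end of a first URL:

```python
def split_concatenated_url(url: str):
    """Detect the defect described above: an intended link appended
    directly onto another, distinct URL. Returns (first_url, appended_url)
    if the pattern is found, else None."""
    # Look for a second scheme marker anywhere after the first character.
    hits = [i for scheme in ("http://", "https://")
            for i in [url.find(scheme, 1)] if i != -1]
    if not hits:
        return None
    idx = min(hits)  # split at the earliest second scheme marker
    return url[:idx], url[idx:]

# Hypothetical example of the defect pattern:
broken = "https://example.org/posts/abc123https://example.org/posts/xyz789"
print(split_concatenated_url(broken))
```

A real fix would need to handle edge cases (URLs legitimately containing other URLs as query parameters, for instance), but a sketch like this suggests the detection itself is cheap.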

Linch's Shortform

Disclaimer: My original goal was to open up a thread relevant to many EAs and understand the world better. I think your content is interesting, but some aspects of this discussion that I am continuing are (way) more personal than I intended, and maybe the discussion comes at your personal cost, so feel free not to answer.

 

I will trade 25% lifetime consumption [in exchange for] 

a) [being] implicitly read to others as Caucasian... 

b) [having a better] relationship with my family that's close to that of a median EA.

This seems really important to you.... (read more)

8Linch7d
I mean, to be clear, I don't know if I'm right! But for the relatively offhand estimate, I'm mostly thinking about career options outside of institutional EA, e.g. politics or media. I do basically think things like looks and accent matter enough to give you something like a 2-3x factor change in probability x magnitude of success from race alone, probably more (I haven't looked at the stats closely, but I suspect you empirically see roughly this level of underrepresentation of Asian men in politics and media, when naively, based on other characteristics like educational level or wealth, you'd expect higher representation). Likewise for the -.3x, I was thinking about losing the option value of doing stuff in China, rather than things within institutional EA. I do agree that it's plausible that it's easier to advance as an ethnic minority within EA, which cuts against my general point about impact. The main consideration against is that pro-Asian discrimination will look and feel explicit (e.g. "we don't want 10 white people on the front page of our AI company", "we need someone who speaks Chinese for our AI governance/international cooperation research to go better"), whereas anti-Asian discrimination will be pretty subtle and most people won't even read it as such (e.g. judging how much people "vibe" with you in conversations/at parties to select cofounders over e.g. online output or past work performance, relying on academic credentials that are implicitly racist over direct cognitive or personality tests for hiring). But I do think it's more likely that the net effect within EA is biased in my favor rather than against, with high uncertainty.
Some clarifications on the Future Fund's approach to grantmaking

It seems many of the downsides of giving feedback would also apply to this.

I think lower resolution feedback introduces new issues too. For example, people might become aware of the schema and over-index on getting a "1. Reject" versus getting a "2. Revise and resubmit".

 

A major consideration is that I think some models of very strong projects and founders say that these people wouldn't be harmed by rejections.

Further considerations related to this (that are a little sensitive) is that there are other ways of getting feedback, and that extremel... (read more)

EA and the current funding situation

I think your comment and sentiment is great. My response wasn't directly related.

I guess I'm more concerned about "bycatch" or overindexing; for example, activity and discussions that are wobbly about getting into management and scaling, in a sort of "Great Leap Forward" style.

 

Honestly, the root issue here is that I have some distrust related to the causes and processes behind this post and the NB post, all of which seem to be related to discussion and concerns that might have originated in or closely involve the EA Forum. I don't think these have the best rela... (read more)

EA and the current funding situation

As a caution, onlookers should know that there tends to be a large supply of would-be management advice or scaling advice whose quality is often mixed. This is because:

  • It is attractive to supply because this advice is literally executive or senior managerial work, so appears high status/impact/compensation.
  • It is attractive to supply because it moves into organizations where often the hard operational work and important niches have been developed successfully. In reality, it is often this object level activity that is hard and in "short supply".
    • Even in succ
... (read more)
3Rob Mitchell7d
It's useful to separate out consultancy/advice-giving versus the actual doing. I would say though that a successful management/operations setup should be able to at least ameliorate the feedback issue you mention (e.g. by identifying leading and/or more quickly changing metrics that are aligned and gaining value from these).
Some clarifications on the Future Fund's approach to grantmaking

Yes, this is fair. The current vote score seems a little harsh though.

Anyways, I just got off a call with a collaborator who was also very excited about my comment—something about “billionaire” and “great doom”.

Yes, strong funding for x-risk is important, but in my opinion, I think there could be greater focus on high quality work more broadly.

oh wow, when I made the comment we were at -1 and +2 respectively, I agree this was a bigger reaction than I was expecting lol

Some clarifications on the Future Fund's approach to grantmaking

This seems slightly cryptic. Have you considered following the style and norms of other comments on the forum?

although, to be frank, it does make me a bit confused where some of the consternation about specific, unspecified grants has come from...

If your comment is about public sentiment around FTX grant decisions, there doesn't seem to be public knowledge of grants actually made. So it doesn't seem like there could be informed public opinion of the actual results. 

(If you are signaling/referencing some private discussion, this point doesn’t apply.)

Weak downvote because "Have you considered following the style and norms of other comments on the forum?" is needlessly rude

An update in favor of trying to make tens of billions of dollars

No. You're off topic. 

As mentioned, there's a risk someone starts pattern matching this to some "dinner-party" style talks and starts jousting about left/right/opportunity/libertarian/woke/privilege, what have you.

 

What we're talking about here is the reference class of creating 9-10 figures of wealth. 

I think if you look at the actual class of very high net worth tech people, there's evidence for the view in my parent comment.

1Bluefalcon8d
Then let's see it. I'm not pattern-matching to anything. You said a thing that is simply untrue about advantages you believe a person coming from a lower upper class background would have. I am directly challenging your purported method of action based on my own experience of how easy it is to acquire those same advantages. Maybe they have some other advantages you haven't identified. But if so, let's see it.