Shortform Content [Beta]

seanrson's Shortform

Local vs. global optimization in career choice

Like many young people in the EA community, I often find myself paralyzed by career planning and am quick to second-guess my current path, developing an unhealthy obsession with keeping doors open in case I realize that I really should have done this other thing.

Many posts have been written recently about the pitfalls of planning your career as if you were some generic template to be molded by 80,000 Hours [reference Holden's aptitudes post, etc.]. I'm still trying to process these ideas and think that the disti... (read more)

Buck's Shortform

Redwood Research is looking for people to help us find flaws in our injury-detecting model. We'll pay $20/hour for this, for up to 2 hours; after that, if you’ve found interesting stuff, we’ll pay you for more of this work. We might stop doing this (or stop recruiting additional people) in a few days.

If you’re interested, please email adam@rdwrs.com so he can add you to a Slack channel with other people who are working on this. This might be a fun task for people who like being creative, being tricky, and figuring out how language models understand languag... (read more)

Emrik's Shortform

Correct me if I'm wrong, but I think in Christianity, there's a lot of respect and positive affect for the "ordinary believer". Christians who identify as "ordinary Christians" feel good about themselves for that fact. You don't have to be among the brightest stars of the community in order to feel like you belong.

I think in EA, we're extremely kind, but we somehow have less of this. Like, unless you have two PhDs by the age of 25 and you're able to hold your own in a conversation about AI-alignment theory with the top researchers in the world... you sadly ... (read more)

Aaron Gertler (3d, 4 karma): My experience as a non-PhD who dropped out of EA things for two years before returning is that I felt welcome and accepted when I started showing up in EA spaces again. And now that I've been at CEA for three years, I still spend a lot of my time talking to and helping out people who are just getting started and don't have any great credentials or accomplishments; I hope that I'm not putting pressure on them when I do this.

That said, every person's experience is unique, and some people have certainly felt this kind of pressure, whether self-imposed as a result of perceived community norms or thrust upon them by people who were rude or dismissive at some point. And that's clearly awful — people shouldn't be made to feel this way in general, and it's especially galling to hear about it sometimes happening within EA. My impression is that few of these rude or dismissive people are themselves highly invested in the community, but my impression may be skewed by the relationships I've built with various highly invested people in the job I now have.

Lots of people with pretty normal backgrounds have clearly had enormous impact (too many examples to list!). And within the EA spaces I frequent, there's a lot of interest and excitement about people sharing their stories of joining the movement, even if those people don't have any special credentials. The most prominent example of this might be Giving What We Can [https://forum.effectivealtruism.org/posts/mFf5Th9BvexWqRyPE/other-comments-that-make-my-day-what-people-have-said-when].

I don't understand the "menial labor" point; the most common jobs for people in the broader EA community are very white-collar (programmers, lawyers, teachers...). What did you mean by that?

Personally, the way I view "ordinary folk dignity" in EA is through something I call "the airplane test". If I sat next to someone on an airplane and saw them reading Doing Good Better, and they seemed excited about EA when I talked to them, I'd be ve

To respond to your question about what I meant by "menial labour": I was being poetic. I just mean that I feel like EA places a lot of focus on the very highest-status jobs, and I've heard friends despairing over having to "settle" for anything less. I sense that this type of writing might not be the norm for EA shortform, but I wasn't sure.

Emrik (3d, 8 karma): No no, I'm not trying to point to a problem of EAs trying to make others feel unwelcome or dumb. I think EA is extremely kind, and almost universally tries hard to make people feel welcome. I'm just pointing to the existence of an unusually strong intellectual pressure, perhaps combined with lots of focus on world-saving heroes and talk about "what should talented people do?" I think ambition is good, but I think we can find ways of encouraging ambition while also mitigating at least some of the debilitating intelligence-dysphoria many in our community suffer from.

I'm writing this in reaction to talking to three of my friends who suffer under the intellectual pressure they feel. (Note that the following are all about the intellectual pressure they get from EA, and not just in general due to academic life.)

Friend 1: "EA makes me feel real dumb XD i think i feel out of place by being less intelligent"

Friend 2: "I’m not worried that I’m not smart, but I am worried that I am not smart enough to meet a certain threshold that is required for me to do the things I want to do. ... I think I have very low odds of achieving things I deeply want to achieve. I think that is at least partially responsible for me being as extremely uncomfortable about my intelligence as I am, and not being able to snap out of it."

Me: "Do you ever refrain from trying to contribute intellectually because you worry about taking up more attention than it's worth?"
Friend 3: "hmm, not really for that reason. because I'm afraid my contribution will be wrong or make me look stupid. wrong in a way that reflects negatively on me-- stupid errors, revealing intellectual or character weakness."

Some of this is a natural and unavoidable result of the large focus EA places on intellectual labour, but I think it's worse than it needs to be. I think some effort to instil some "ordinary EA dignity" into our culture wouldn't hurt. I might have a skewed sample, however.
evelynciara's Shortform

I think an EA career fair would be a good idea. It could have EA orgs as well as non-EA orgs that are relevant to EAs (for gaining career capital or earning to give).

EA Global normally has an EA career fair, or something similar.

sbowman's Shortform

Naïve question: What's the deal with the cheapest CO2 offset prices?

It seems, though, that the current price of credible offsets is much lower than the social cost of carbon, and possibly so low that just buying offsets starts to look competitive with GiveWell top charities.

I'm not an expert on this. (I run an offsetting program for a small organization, but that takes about 4h/year. Otherwise I don't think about this much.) I'm also not anywhere near advocating that we should sink tons of money into offsets. But this observation strikes me as unintuitive ... (read more)
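To make the comparison concrete, here's a back-of-the-envelope sketch of the calculation being gestured at. Every number in it (the offset price, the social cost of carbon, the GiveWell-style cost per DALY, and the dollar value placed on a DALY) is an illustrative assumption rather than a sourced estimate, so treat it as a template, not a result:

```python
# Back-of-the-envelope: cheap offsets vs. a GiveWell-style benchmark.
# Every number here is an illustrative assumption, not a real estimate.

offset_price = 5.0             # USD per tonne CO2 for a cheap-but-credible offset (assumed)
social_cost_of_carbon = 50.0   # USD of damage per tonne CO2 (assumed SCC)

# Taking the SCC at face value, each $1 of offsets averts this much damage:
damage_averted_per_dollar = social_cost_of_carbon / offset_price
print(f"Offsets: ${damage_averted_per_dollar:.0f} of damage averted per $1")

# A GiveWell-style benchmark for comparison (assumed): $100 per DALY,
# with a DALY valued at $5,000 to put both options in a common unit.
cost_per_daly = 100.0
value_per_daly = 5_000.0
benefit_per_dollar = value_per_daly / cost_per_daly
print(f"GiveWell benchmark: ${benefit_per_dollar:.0f} of benefit per $1")

# The conclusion flips easily: it is very sensitive to the assumed SCC,
# to offset additionality, and to the dollar value placed on a DALY.
```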

One project that popped up last year involved converting the operations of a platinum mining company in Bihar from burning coal to burning another fossil fuel in a slightly-lower-emissions way. That's easy to verify, and there was a clear argument for why it wouldn't make economic sense for them to transition without the offset money.

I am also confused about the general question, but I found this intervention interesting to think about. It seems like the legitimacy of this comes down to the elasticity of demand for coal in India (basically, if someo... (read more)
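One hedged way to make the elasticity point concrete: if some fraction of the coal the mine stops burning simply gets burned by someone else instead (a "leakage rate" that depends on how elastic coal demand and supply are), the net effect shrinks accordingly. A toy sketch with made-up numbers:

```python
# Toy leakage adjustment for an avoided-coal offset project.
# All parameters are made-up assumptions for illustration.

gross_reduction_t = 10_000.0  # tonnes CO2/yr the project claims to avoid (assumed)
leakage_rate = 0.4            # fraction of the avoided coal burned elsewhere instead;
                              # higher when coal demand is more elastic (assumed)
payment_usd = 50_000.0        # total offset payment to the project (assumed)

net_reduction_t = gross_reduction_t * (1 - leakage_rate)
print(f"Net reduction: {net_reduction_t:,.0f} t CO2/yr")
print(f"Effective cost: ${payment_usd / net_reduction_t:.2f} per net tonne")
# With these numbers: 6,000 t/yr net, ~$8.33 per net tonne.
```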

MichaelA's Shortform

Collection of collections of resources relevant to (research) management, mentorship, training, etc.

(See the linked doc for the most up-to-date version of this.)

The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.

... (read more)
MichaelA's Shortform

Collection of AI governance reading lists, syllabi, etc. 

This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is, copied out anyway:


What is this doc, and why did I make it?

AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)

Puggy's Shortform

I think it is a cool idea for people to take a giving pledge on the same day. For example, you and your friend both decide to pledge 10% to charity on the same day. It would be even more fun if you did it with strangers. Call it “giving twins” or “giving siblings”.

Imagine that you met a couple of strangers and they pledged with you. Imagine that after pledging you all just decided to be friends, or at least a type of support group for one another. Like “Hey, you and I took the Further Pledge together on New Year’s Day last year. When I’m in your city let’s g... (read more)

jwilson016 (8d, 1 karma): Does "giving sibling" mean giving something to each other?

That could be the case, but I think the emphasis is more on the idea that you have the same “birthdate” to be considered a giving sibling.

Like, on February 15 you and a friend take the Giving Pledge together, and that date becomes the day you became giving siblings. Then you celebrate that day every year, or form a bond around this shared experience.

jwilson016's Shortform

Hi, I am working on a nonprofit to help animals in another country by creating a sanctuary for them. I already know how to set up a corporation, convert it to a nonprofit, and operate it with a board of directors. For this project, I will be opening a US nonprofit and using funds to help animals in other countries.

I am looking for guides on how to establish a nonprofit organization in another country, and I want to know whether running a US nonprofit that does its work in another country differs from running one that operates only in the US.

Fergus McCormack's Shortform

I wrote a very rough draft of an idea I had. It was just a stream of consciousness and I didn't really edit it. I'm not sure what the standards are like on the EA Forum: I would like to invest time in developing this further, as well as other posts I could possibly write, but as I mentioned at the bottom of this post, I'm at a critical juncture in my career and need to invest my time and energy elsewhere.

If I can produce forum posts with potentially some interesting ideas, but that are of a relatively low standard, is it better for me to post these without... (read more)

In answer to your question, I think that it's generally better to create a top-level post than a Shortform post, as long as you're comfortable doing so. Shortform serves a useful purpose, but top-level posts have a better chance of getting useful engagement.

I think this would make sense as a top-level post, and would encourage you to try sharing it in that context!

My thoughts on the checklist compilation idea: It's hard to make a given intellectual resource very popular, but if you can pull off this one, I think it could be really useful. Many EA orgs have... (read more)

Linch's Shortform

Red teaming papers as an EA training exercise?

I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially a) writings by other respected EAs, or b) other work that we otherwise think of as especially important.

I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who finish high school, and certainly by the t... (read more)


This is another example of a Shortform that could be an excellent top-level post (especially as it's on-theme with the motivated reasoning post that was just published). I'd love to see this spend a week on the front page and perhaps convince some readers to try doing some red-teaming for themselves. Would you consider creating a post?

reallyeli (2mo, 3 karma): This idea sounds really cool. Brainstorming: a variant could be several people red-teaming the same paper and not conferring until the end.
Khorton (3mo, 5 karma): It's actually a bit of numbers 1-3; I'm imagining decreased engagement generally, especially sharing ideas transparently.
Buck's Shortform

When I was 19, I moved to San Francisco to do a coding bootcamp. I got a bunch better at Ruby programming and also learned a bunch of web technologies (SQL, Rails, JavaScript, etc.).

It was a great experience for me, for a bunch of reasons.

  • I got a bunch better at programming and web development.
    • It was a great learning environment for me. We spent basically all day pair programming, which makes it really easy to stay motivated and engaged. And we had homework and readings in the evenings and weekends. I was living in the office at the time, with a bunch o
... (read more)

See my comment here, which applies to this Shortform as well; I think it would be a strong top-level post, and I'd be interested to see how other users felt about tech bootcamps they attended.

Jack R (25d, 1 karma): This seems like really good advice, thanks for writing this! Also, I'm compiling a list of CS/ML bootcamps here [https://docs.google.com/spreadsheets/d/1pBBo28bCNVlKvmrzbSkkl2pQKDf_els-98i-S0Gdu6A/edit?usp=sharing] (anyone should feel free to add items).
Buck's Shortform

Doing lots of good vs getting really rich

Here in the EA community, we’re trying to do lots of good. Recently I’ve been thinking about the similarities and differences between a community focused on doing lots of good and a community focused on getting really rich.

I think this is interesting for a few reasons:

  • I found it clarifying to articulate the main differences between how we should behave and how the wealth-seeking community should behave.
  • I think that EAs make mistakes that you can notice by thinking about how the wealth-seeking community would beh
... (read more)

I'm commenting on a few Shortforms I think should be top-level posts so that more people see them, they can be tagged, etc. This is one of the clearest cases I've seen; I think the comparison is really interesting, and a lot of people who are promising EA candidates will have "become really rich" as a viable option, such that they'd benefit especially from thinking through this comparison themselves.

Anyway, would you consider making this a top-level post? I don't think the text would need to be edited at all — it could be as-is, plus a link to the Shortform comments.

Charles He (17d, 14 karma): You seem to be wise and thoughtful, but I don't understand the premise of this question or this belief.

But the reasoning [that existing orgs are often poor at rewarding/supporting/fostering new (extraordinary) leadership] seems to apply: For example, GiveWell was a scrappy, somewhat polemical startup, and the work done there ultimately succeeded and created Open Phil and, to a large degree, the present EA movement. I don't think any of this would have happened if Holden Karnofsky and Elie Hassenfeld had to, say, go into Charity Navigator (or a dozen other low-wattage meta-charities that we will never hear of) and try to turn it around from the inside.

While being somewhat vague, my models of orgs and information from EA orgs do not suggest that they are any better at this (for mostly benign, natural reasons, e.g. "focus"). It seems that the main value of entrepreneurship is the creation of new orgs to have impact, both from the founder and from the many other staff/participants in the org. Typically (and maybe ideally) new orgs are in wholly new territory (underserved cause areas, untried interventions), and inherently there are fewer people who can evaluate them.

It seems that the now-canonized posts After one year of applying for EA jobs: it is really, really hard [https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really] and Denise Melchin's My mistakes on the path to impact [https://forum.effectivealtruism.org/posts/QFa92ZKtGp7sckRTR/my-mistakes-on-the-path-to-impact] suggest this has exactly happened, extensively even. I think it is very likely that both of these people are not just useful, but are/could be highly impactful in EA, and do not "deserve" the experiences they described.

[I think the main counterpoint would be that only the top X% of people are eligible for EA work or something like that, and X% is quite small. I would be willing to understand this idea, but it doesn't seem plausible/acceptable to me. Note that currently, there is a concerted effort to fost
Ben_West (17d, 3 karma): Thanks! "EA organizations are bad" is a reasonable answer. (In contrast, "for-profit organizations are bad" doesn't seem like a reasonable answer for why for-profit entrepreneurship exists, as adverse selection isn't something better organizations can reasonably get around. It seems important to distinguish these, because it tells us how much effort EA organizations should put into supporting entrepreneur-type positions.)
Ben Garfinkel's Shortform

A thought on how we describe existential risks from misaligned AI:

Sometimes discussions focus on a fairly specific version of AI risk, which involves humanity being quickly wiped out. Increasingly, though, the emphasis seems to be on the more abstract idea of “humanity losing control of its future.” I think it might be worthwhile to unpack this latter idea a bit more.

There’s already a fairly strong sense in which humanity has never controlled its own future. For example, looking back ten thousand years, no one decided that sedentary agriculture would i... (read more)


Would you consider making this into a top-level post? The discussion here is really interesting and could use more attention, and a top-level post helps to deliver that (this also means the post can be tagged for greater searchability).

I think the top-level post could be exactly the text here, plus a link to the Shortform version so people can see those comments. Though I'd also be interested to see the updated version of the original post which takes comments into account (if you felt like doing that).

Ben Garfinkel (3mo, 2 karma): Mostly the former! I think the point may have implications for how much we should prioritize alignment research, relative to other kinds of work, but this depends on what the previous version of someone's world model was. For example, if someone has assumed that solving the 'alignment problem' is close to sufficient to ensure that humanity has "control" of its future, then absorbing this point (if it's correct) might cause them to update downward on the expected impact of technical alignment research. Research focused on coordination-related issues (e.g. cooperative AI stuff) might increase in value, at least in relative terms.
Max_Daniel (3mo, 9 karma): I agree with most of what you say here. [ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment [https://forum.effectivealtruism.org/posts/kLYD95SK8tQFRmw4T/ben-garfinkel-s-shortform?commentId=dG9Xr8D44Sb7zBHPh].]

I think the following is a plausible & stronger concern, which could be read as a stronger version of your crisp concern #3: "Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to seize this opportunity, there also won't be any other."

Of course, AI being our first and only opportunity to take control of the future is a strictly stronger claim than AI being one such opportunity. And so it must be less likely. But my impression is that the stronger claim is sufficiently more important that it could be justified to basically 'wager' most AI risk work on it being true.
Linch's Shortform

Recently I was asked for tips on how to be less captured by motivated reasoning and related biases, a goal/quest I've slowly made progress on for the last 6+ years. I don't think I'm very good at this, but I do think I'm likely above average, and it's also something I aspire to be better at. So here is a non-exhaustive and somewhat overlapping list of things that I think are helpful:

... (read more)

I've previously shared this post on CEA's social media and (I think) in an edition of the Forum Digest. I think it's really good, and I'd love to see it be a top-level post so that more people end up seeing it, it can be tagged, etc. 

Would you be interested in creating a full post for it? (I don't think you'd have to make any changes — this still deserves to be read widely as-is.)

Puggy's Shortform

Here’s the problem:

Some charities are not just multiple times better than others, some are thousands of times better than others. But as far as I can tell, we haven’t got a good way of signaling to others what this means.

Think about when Ed Sheeran sells an album. It’s “certified platinum,” then “double platinum,” peaking at “certified diamond.” When people hear this, it makes them sit back and say “wow, Ed Sheeran is on a different level.”

When a football player is about to announce his college, he says “I’m going D1”. You become a “grandmaster” at chess. Ah... (read more)

Harrison D (9d, 2 karma): "But it’s got to confer status to the charity and people like Jay Z can gain more status by donating to it" - I think this touches on a good point which I'd like to see fleshed out more. On some level I'm still a bit skeptical, in part because I think it's more difficult to make these kinds of designations/measurements for charities, whereas things like album statuses are very objective (i.e., a specific number of purchases/downloads) and in some cases easier to measure. Additionally, for some of those cases there is a well-established and influential organization making the determination (e.g., football leagues, FIDE for chess). I definitely think something could be done for traditional charities (e.g., global health and poverty alleviation), but it would very likely be difficult for many other charities, and it still would probably not be as widely recognized as most of the things you mentioned.
Puggy (8d, 1 karma): Great points. Thank you for them. Perhaps we could use a DALY/QALY measure. A charity could reach the highest status if, after randomized controlled studies, it was determined that $10 donated could give one QALY to a human (I’m making up numbers). Any charity that reached this hard-to-achieve threshold would be given the super-charity designation.

To make it official, imagine that there’s a committee or governing body formed between Charity Navigator and GiveWell. Five board members from each organization would come together and select the charities, then announce the award once a year; the status would only be official for a certain amount of time, or it could be removed if a charity dipped below the threshold. What do you think?
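If it helps to see the proposed rule spelled out, here's a minimal sketch. The $10/QALY threshold is the explicitly made-up number from the comment above, and the example figures are equally hypothetical:

```python
# Sketch of the proposed "super-charity" designation rule.
# The $10/QALY threshold is the made-up number from the comment above.

THRESHOLD_USD_PER_QALY = 10.0

def usd_per_qaly(donations_usd: float, qalys_gained: float) -> float:
    """Cost-effectiveness as estimated from RCTs or similar evaluations."""
    return donations_usd / qalys_gained

def is_super_charity(donations_usd: float, qalys_gained: float) -> bool:
    """True if the charity clears the (hypothetical) designation threshold."""
    return usd_per_qaly(donations_usd, qalys_gained) <= THRESHOLD_USD_PER_QALY

# Hypothetical example: $1M in donations producing 120,000 QALYs (~$8.30/QALY)
print(is_super_charity(1_000_000, 120_000))  # True
```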

I certainly would be interested in seeing such a system go into place—I think it would probably be beneficial—the main issue is just whether something like that is likely to happen. For example, it might be quite difficult to establish agreement between Charity Navigator and GiveWell when it comes to the benefits of certain charities. Additionally, there may be a bit of survivorship bias when it comes to organizations that have worked, like FIDE, although I still think the main issue is 1) the analysis/measurement of effectiveness is difficult (requiring lots o... (read more)

Ozzie Gooen's Shortform

A few junior/summer effective altruism related research fellowships are ending, and I’m getting to see some of the research pitches.

Lots of confident-looking pictures of people with fancy and impressive-sounding projects.

I want to flag that many of the most senior people I know around longtermism are really confused about stuff. And I’m personally often pretty skeptical of those who don’t seem confused.

So I think a good proposal isn’t something like, “What should the EU do about X-risks?” It’s much more like, “A light summary of what a few people so far th... (read more)

Relevant post by Nuño: https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers

Miranda_Zhang's Shortform

I know that carbon offsets (and effective climate giving) are a fairly common topic of discussion, but I've yet to see any thoughts on the newly-launched Climate Vault. It seems like a novel take on offsetting: your funds go to purchasing cap-and-trade permits which will then be sold to fund carbon dioxide removal (CDR).

I like it because a) it uses (and potentially improves upon) a flawed government program in a beneficial way, and b) it lets me fund both the limitation of carbon emissions and their removal, unlike other offsets, which only do the latter.

However, ... (read more)

jackva (15d, 4 karma): "How do you convert a permit into CO2 removal using CDR technologies without selling them back into the compliance market – in effect negating the offset? We will sell the permits back into the market, but only when we’re ready to use the proceeds to fund carbon removal projects equivalent to the number of permits we’re selling, or more."

Once credible CDR is so cheap (now > USD 100/t, most approaches over USD 600, cf. Stripe Climate) that this works (current carbon prices are around USD 20), the value of additional CDR tech support is pretty low, because the learning curve has already been brought down. Am I missing something?

It seems like a good way to buy allowances, which is, when the cap is fixed (also addressed in the FAQ, though not 100% convincingly), better than buying most offsets, but it seems unlikely to work in the way intended.
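A minimal sketch of the arithmetic behind this worry, using only the rough figures quoted in the comment (permits around USD 20, CDR between roughly USD 100 and USD 600 per tonne):

```python
# Arithmetic behind the worry above, using the rough figures from the comment.

permit_price = 20.0         # USD per permit (~1 tonne CO2) when sold back
cdr_costs = (100.0, 600.0)  # USD/tonne: cheapest credible CDR, and most approaches

for cdr_cost in cdr_costs:
    tonnes_removed = permit_price / cdr_cost
    print(f"At ${cdr_cost:.0f}/t CDR, selling 1 permit funds {tonnes_removed:.2f} t of removal")

# -> 0.20 t and 0.03 t per permit. Removing as many tonnes as permits sold
#    only pencils out once CDR costs fall to roughly the permit price, by
#    which point the learning curve has largely been bought down already.
```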

Hmm okay! Thanks so much for this. So I suppose the main uncertainties for me are

  • whether I trust that the cap will remain fixed
  • whether the cap-and-trade system is more effective than the offsets I was considering

Really appreciate you helping clarify this for me!

elifland's Shortform

I wrote a draft outline on bottlenecks to more impactful crowd forecasting that I decided to share in its current form rather than clean up into a post.

Link

Summary:

  1. I have some intuition that crowd forecasting could be a useful tool for important decisions like cause prioritization but feel uncertain
  2. I’m not aware of many example success stories of crowd forecasts impacting important decisions, so I define a simple framework for how crowd forecasts could be impactful:
    1. Organizations and individuals (stakeholders) making important decisions are willing to use c
... (read more)
Aaron Gertler (17d, 2 karma): I liked this document quite a bit, and I think it would be a reasonable Forum post even without further cleanup — you could basically copy over this Shortform, minus the bit about not cleaning it up. This lets the post be tagged, be visible to more people, etc. (Though I understand if you'd rather leave it in a less-trafficked area.)

Appreciate the compliment. I am interested in making it a Forum post, but I might want to do some more editing/cleanup or writing over the next few weeks/months (it got more interest than I was expecting, so it seems more likely to be worth it now). I might also post it as is; will think about it more soon.

MichaelA (19d, 3 karma): Fwiw, I expect to very often see forecasts as an input into important decisions, but I also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is:

1. try to gather up a bunch of disparate crappy inputs with different weaknesses
2. try to figure out how much weight to give each
3. see how much that converges on a single coherent picture, and if so, what picture

(See also consilience [https://en.wikipedia.org/wiki/Consilience].)

(I really appreciated your draft outline and left a bunch of comments there. Just jumping in here with one small point.)
jkmh's Shortform

I sometimes get frustrated when I hear someone trying to "read between the lines" of everything another person says or does. I get even more frustrated if I'm the one involved in this type of situation. It seems that non-rhetorical exploratory questions (e.g. "what problem is solved by person X doing action Y?") are often taken as rhetorical and accusatory (e.g. "person X is not solving a problem by doing action Y.")

I suppose a lot of it comes down to presentation and communication skills. If you communicate very well, people won't try as hard to read betw... (read more)
