Shortform Content [Beta]

Aaron Gertler's Shortform

New EA music

José Gonzalez (GWWC member, EA Global performer, winner of a Swedish Grammy award) just released a new song inspired by EA and (maybe?) The Precipice.

Lyrics include:

Speak up
Stand down
Pick your battles
Look around
Reflect
Update
Pause your intuitions and deal with it

It's not as direct as the songs in the Rationalist Solstice, but it's more explicitly EA-vibey than anything I can remember from his (apparently) Peter Singer-inspired 2007 album, In Our Nature.

RyanCarey's Shortform

A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, and published work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Colum... (read more)

My impression is that a lot of her quick success was because her antitrust work tapped into progressive anti-Big Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.

Larks (13h, 18 karma): What's especially interesting is that the one article that kick-started her career was, by truth-orientated standards, quite poor. For example, she suggested that Amazon was able to charge unprofitably low prices by selling equity/debt to raise more cash - but you only have to look at Amazon's accounts to see that they have been almost entirely self-financing for a long time. This is because Amazon has actually been cashflow positive, in contrast to the impression you would get from Khan's piece. (More detail on this and other problems here: https://truthonthemarket.com/2019/05/07/is-amazon-guilty-of-predatory-pricing/)

Depressingly, this suggests to me that a good strategy for gaining political power is to pick a growing, popular movement, become an extreme advocate of it, and trust that people will simply ignore the logical problems with the position.
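The check Larks describes (reading self-financing off the cash flow statement) is simple enough to script. A minimal sketch, using placeholder figures rather than Amazon's actual numbers:

```python
# Sketch of the sanity check above: a self-financing company should show
# positive free cash flow (operating cash flow minus capital expenditure),
# rather than relying on new equity/debt issuance for cash.
# NOTE: these figures are illustrative placeholders, not Amazon's real numbers.

def free_cash_flow(operating_cash_flow, capital_expenditure):
    """Free cash flow = cash from operations minus cash spent on capex."""
    return operating_cash_flow - capital_expenditure

# Hypothetical figures in $ millions: year -> (operating cash flow, capex)
years = {
    2016: (16_000, 7_000),
    2017: (18_000, 10_000),
    2018: (30_000, 12_000),
}

for year, (ocf, capex) in sorted(years.items()):
    fcf = free_cash_flow(ocf, capex)
    status = "self-financing" if fcf > 0 else "needs external financing"
    print(f"{year}: FCF = {fcf:,} -> {status}")
```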
JJ Hepburn's Shortform

Uncommon career advice

This is some of my loose, unstructured, and possibly less common advice related to careers.

• When applying for jobs, looking at resources for hiring managers can be much more helpful than resources for applicants.

• Apply too often rather than not often enough. I sometimes hear that people choose not to apply for something because they assume it's unlikely they'll be accepted and that their time would be better used upskilling in their field. I think people in these situations should apply more often, to get more experience with the app... (read more)

Thank you for this great post!

The first piece of advice about looking at resources for hiring managers is quite insightful! I will try to incorporate it in my job search.

I can totally relate to the remaining points. Entering the job market after a really long break has made me even less likely to apply than others. I find that pushing myself to apply for a job is a learning and confidence-building experience in its own right. Each application episode helps me overcome my fears. Rejection comes not as a rejection but as more feedback that I can use to imp... (read more)

vaidehi_agarwalla (7d, 3 karma): I'd also add (although I'm not sure this is uncommon): get accountability buddy(s) who are also applying to jobs to motivate you / with whom you can share draft applications on short notice, etc. Even though you may end up spending ~1.5-2x the time you would have on job applications as a whole, you may on net end up doing / applying to more positions.
Aaron Gertler (7d, 3 karma): I thought this was a great Shortform post!

  • The book for hiring managers I've seen referenced most often is Who (https://www.amazon.com/Who-Method-Hiring-Geoff-Smart-ebook/dp/B001EL6RWY). If you're not sure what "resources" to look at, that's probably a good starting point.
  • "Apply too often rather than not often enough": I often tell people this, because:
    • Some people tend to underestimate their qualifications or suitability. You might be one of them!
    • (What JJ said about getting practice)
    • Even if you don't get the job, you might get a referral to other jobs if you do well during the process (I was hired this way, and I've helped at least one other person get hired this way).
    • EA-aligned orgs are generally quite open to feedback; if you find a specific process confusing or overly time-consuming, you can tell the org this. I think they'll be much more likely than most orgs to make changes in response (improving the experience for other applicants).
  • I'm not sure about the "application drafting" approach, but I recommend something similar: if a job interests you, look at an org's website (or LinkedIn) to find people who have that job or similar jobs. Look at what they did earlier in their careers. Consider sending a brief, polite email with a question or two, or asking for a quick call. Sometimes, people will just give you great advice for free.
  • And even if no one responds, you've still gotten a much better sense for how these career paths operate in the real world (which isn't always as restrictive as the stories we tell ourselves about getting a job).
Linch's Shortform

I've started trying my best to consistently address people on the EA Forum by username whenever I remember to do so, even when the username clearly reflects their real name (eg Habryka). I'm not sure this is the right move, but overall I think this creates slightly better cultural norms, since it pushes us (slightly) towards pseudonymous commenting/"Old Internet" norms, which I think is slightly better for pushing us towards truth-seeking and judging arguments by their quality, rather than being too conscious of status-y/social monkey effects.

... (read more)

Linch's Shortform

I think it might be interesting/valuable for someone to create "list of numbers every EA should know", in a similar vein to Latency Numbers Every Programmer Should Know and Key Numbers for Cell Biologists.

One obvious reason against this is that maybe EA is too broad, and the numbers we actually care about are too domain-specific to particular queries/interests, but I nonetheless still think it's worth investigating.

Aaron Gertler (4d, 4 karma): I love this idea! Lots of fun ways to make infographics out of this, too. Want to start out by turning this into a Forum question where people can suggest numbers they think are important? (If you don't, I plan to steal your idea for my own karmic benefit.)

Thanks for the karmically beneficial tip! 

I've now posted this question in its own right.

 

Habryka (7d, 6 karma): I think this is a great idea.
Nathan_Barnard's Shortform

If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.

What do you mean by correct?

When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement: "there may be no utility function that accurately describes the true value of things" ?

Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?

evelynciara's Shortform

Practical/economic reasons why companies might not want to build AGI systems

(Originally posted on the EA Corner Discord server.)

First, most companies that are using ML or data science are not using SOTA neural network models with a billion parameters, at least not directly; they're using simple models, because no competent data scientist would use a sophisticated model where a simpler one would do. Only a small number of tech companies have the resources or motivation to build large, sophisticated models (here I'm assuming, like OpenAI does, that model siz... (read more)

Ramiro's Shortform

IMF climate change challenge: "How might we integrate climate change into economic analysis to promote green policies? To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change." https://lnkd.in/dCbZX-B

evelynciara's Shortform

Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.
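The analysis step here is just counting mentions and filtering out existing applicants. A minimal sketch (all names and data made up for illustration):

```python
# Sketch of the analysis step: count how often each charity is recommended
# across applications, then surface frequently mentioned orgs that have
# never applied themselves. All names and data below are made up.
from collections import Counter

# applicant -> charities it recommended on its application
applications = {
    "Charity A": ["Charity B", "Charity C", "Charity D"],
    "Charity B": ["Charity C", "Charity E"],
    "Charity D": ["Charity C", "Charity E", "Charity F"],
}

applicants = set(applications)
mention_counts = Counter(rec for recs in applications.values() for rec in recs)

# Frequently mentioned orgs that haven't applied, ranked by mention count:
outreach_candidates = [
    (org, n) for org, n in mention_counts.most_common() if org not in applicants
]
print(outreach_candidates)  # [('Charity C', 3), ('Charity E', 2), ('Charity F', 1)]
```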

Buck's Shortform

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in
... (read more)

I wonder if there's something in between these two points:

  • they could check the most important 1-3 claims the author makes
  • they could include links and the kind of evidence for all the claims made, so readers can quickly check for themselves
Jordan Pieters (7d, 1 karma): Perhaps it would be worthwhile to focus on books like those in this list of "most commonly planned to read books that have not been read by anyone yet": https://forum.effectivealtruism.org/posts/KNZLGbGevnjStgzHt/i-scraped-all-public-effective-altruists-goodreads-reading#Most_commonly_planned_to_read_books_that_have_not_been_read_by_anyone_yet
MichaelA (9d, 2 karma): Yeah, I entirely agree, and your comment makes me realise that, although I make my process fairly obvious in my posts, I should probably in future add almost the exact sentences "I haven't fact-checked anything, looked for other perspectives, etc.", just to make that extra explicit. (I didn't interpret your comment as directed at my posts specifically - I'm just reporting a useful takeaway for me personally.)
MichaelA's Shortform

Collection of sources relevant to impact certificates/impact purchases/similar

Certificates of impact - Paul Christiano, 2014

The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)

The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020

Making Impact Purchases Viable - casebash, 2020

Plan for Impact Certificate MVP - lifelonglearner, 2020

Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019

Altruistic equity allocation - Paul Christiano, 2019

Social impact bond - Wikipe... (read more)

schethik (6mo, 3 karma): The Health Impact Fund (cited above by MichaelA) is an implementation of a broader idea outlined by Dr. Aidan Hollis here: An Efficient Reward System for Pharmaceutical Innovation (https://www.who.int/intellectualproperty/news/en/Submission-Hollis.pdf). Hollis' paper, as I understand it, proposes reforming the patent system such that innovations would be rewarded by government payouts (based on impact metrics, e.g. QALYs) rather than monopoly profit/rent. The Health Impact Fund, an NGO, is meant to work alongside patents (for now) and is intended to prove that the broader concept outlined in the paper can work.

A friend and I are working on further broadening this proposal outlined by Dr. Hollis. Essentially, I believe this type of innovation incentive could be applied to other areas with easily measurable impact (e.g. energy, clean protein and agricultural innovations via a "carbon emissions saved" metric). We'd love to collaborate with anyone else interested (feel free to message me).

Hey schethik, did you make progress with this?

Nathan_Barnard's Shortform

A 6-line argument for AGI risk

(1) Sufficient intelligence has capabilities that are ultimately limited by physics and computability.

(2) An AGI could be sufficiently intelligent that it's limited by physics and computability, but humans can't be.

(3) An AGI will come into existence.

(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons, and the AGI's goals will be met.

(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI.

(6) It is more morally valuable for human goals to be met than an AGI's goals.

gavintaylor's Shortform

At the start of Chapter 6 in The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others a
... (read more)

According to Fleck's thesis, Matsés has nine past tense conjugations, each of which express the source of information (direct experience, inference, or conjecture) as well as how far in the past it was (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit which mean something like "perhaps" and another particle, ba, that means something like "I doubt that..." Unfortunately for us, this doesn't seem more expressive than ... (read more)

Linch's Shortform

Recently I was asked for tips on how to be less captured by motivated reasoning and related biases, a goal/quest I've slowly made progress on for the last 6+ years. I don't think I'm very good at this, but I do think I'm likely above average, and it's also something I aspire to be better at. So here is a non-exhaustive and somewhat overlapping list of things that I think are helpful:

... (read more)
Linch's Shortform

Should there be a new EA book, written by somebody both trusted by the community and (less importantly) potentially externally respected/camera-friendly?

Kinda a shower thought, based on the thinking that maybe Doing Good Better is a bit old by now for the intended use case of conveying EA ideas to newcomers.

I think the 80,000 Hours and EA handbooks were maybe trying to do this, but for whatever reason didn't get a lot of traction?

I suspect that the issue is something like not having a sufficiently strong "voice"/editorial line, and what you want for a ... (read more)

Jamie_Harris (11d, 4 karma): Does The Precipice count? And I think Will MacAskill is writing a new book. But I have the vague sense that public-facing books may be good for academics' careers anyway. Evidence for this intuition:

(1) Where EA academics have written them, they seem to be more highly cited than a lot of their other publications, so the impact isn't just "the public" (see Google Scholar pages for Will MacAskill, Toby Ord, Nick Bostrom, Jacy Reese Anthis - and let me know if there are others who have written public-facing books! Peter Singer would count but has no Google Scholar page).

(2) This article about the impact of Wikipedia. It's not about public-facing books, but it fits my general sense that "widely viewed summary content by/about academics can influence other academics": https://conference.druid.dk/acc_papers/2862e909vshtezgl6d67z0609i5bk6.pdf

Plus all the usual stuff about high-fidelity idea transmission being good. So yes, more EA books would be good?

I think The Precipice is good, both directly and as a way to communicate a subsection of EA thought, but EA thought is not predicated on a high probability of existential risk, and the nuance might be lost on readers if The Precipice becomes the default "intro to EA" book.

JonathanSalter's Shortform

Hi!

We'll be holding rounds of "introduction to EA" talks in Sweden this summer. Does anyone know if there are scripts and/or slides (in English, or alternatively in Swedish, if available) that have already been developed that we could go off of? And is there someone here with experience with giving introductory talks that would be willing to give some tips and pointers? Would be super appreciated!

JonathanSalter (11d, 2 karma): Brilliant, thanks so much Michael!
KevinO (12d, 1 karma): And you could try looking around in https://docs.google.com/spreadsheets/d/1ATRWGcN3GLouaWJIa6Za3xbLe5nuk0CQHhwhsBLTDvA/edit#gid=0

Thank you Kevin, I wasn't aware of that document, that's really helpful!

Michael Huang's Shortform

Humanitarian Assistance for Wild Animals

New article about wild animal suffering, interventions, genome editing and gene drives:

Johannsen, Kyle (2021). Humanitarian Assistance for Wild Animals. The Philosophers' Magazine 93:33-37. Available on PhilArchive: https://philarchive.org/archive/JOHHAF-5

Ben_Snodin's Shortform

Causal vs evidential decision theory

I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. Decision theories are a pretty well-worn topic in EA circles and I'm definitely not adding new insights here. These are just some fairly naive thoughts-out-loud about how CDT and EDT handle various scenarios. If you've already thought a lot about decision theory you probably won't learn anything from this.

T... (read more)

Linch (12d, 6 karma): Are you familiar with MIRI's work on this? One recent iteration is Functional Decision Theory (https://philpapers.org/rec/LEVCDI), though it is unclear to me if they made more recent progress since then. It took me a long time to come around to it, but I currently buy that FDT is superior to CDT in the twin prisoner's dilemma case, while not falling to evidential blackmail (the way EDT does), as well as being notably superior overall in the stylized situation of "how should an agent relate to a world where other smarter agents can potentially read the agent's source code".

Thanks, that's interesting. I've heard of it but haven't looked into it.
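For readers new to the CDT/EDT split discussed in this thread, here's a toy Newcomb's-problem calculation (my numbers, purely illustrative) showing where the two theories come apart:

```python
# Toy Newcomb's problem: a predictor fills an opaque box with $1,000,000 iff
# it predicts you will take only that box; a transparent box always holds
# $1,000. The accuracy and payoffs below are made-up illustrative numbers.
ACCURACY = 0.99          # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000

# EDT conditions on the action: one-boxing is evidence the opaque box is full.
edt_one_box = ACCURACY * BIG
edt_two_box = (1 - ACCURACY) * BIG + SMALL

# CDT treats the box's contents as causally fixed (filled with probability p);
# two-boxing adds $1,000 regardless of p, so it dominates.
p = 0.5                  # arbitrary; CDT's ranking is the same for any p
cdt_one_box = p * BIG
cdt_two_box = p * BIG + SMALL

print(f"EDT: one-box ${edt_one_box:,.0f} vs two-box ${edt_two_box:,.0f}")  # EDT one-boxes
print(f"CDT: one-box ${cdt_one_box:,.0f} vs two-box ${cdt_two_box:,.0f}")  # CDT two-boxes
```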

Max_Daniel's Shortform

[PAI vs. GPAI]

So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.

Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "How are you different from (G)PAI?".

Can anyone help?

At first glance it seems that:

  • PAI brings together a very large number of below-state actors of different types: e.g., nonprofits, academics, for-profit AI labs, ...
  • GPAI members are countries
  • PAI's work is based on 4 high-level goals that each are described in about two sentences [?]
  • GPAI's work is based on the OECD Recommendation on Artifi
... (read more)

I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so. Whereas GPAI is a diplomatic apparatus, for Trudeau and Macron to influence the conversation surrounding AI.

Linch's Shortform

Minor UI note: I missed the EAIF AMA multiple times (even after people told me it existed) because my eyes automatically glaze over pinned tweets. I may be unusual in this regard, but thought it worth flagging anyway.
