Shortform Content [Beta]

Jack R's Shortform

I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has caused me to find some pretty interesting dynamics that it seems like naive consequentialists/utilitarians aren't aware of.

One concern about this is that you might be able to find arguments for any conclusion that you seek out arguments for; the counter to this is that your intuition doesn't give random answers, and is actually fairly reliably correct, hence explicit arguments that explain your... (read more)

Ramiro's Shortform

Essay Prize of the Portuguese Philosophy Society: philosophical papers on artificial intelligence. I'm not sure this will interest top researchers in AI philosophy, but someone might see it as low-hanging fruit: this year's "PRÉMIO DE ENSAIO DA SOCIEDADE PORTUGUESA DE FILOSOFIA" is about the challenges AI poses for "the philosophical understanding of the human".

"Que desafios pode a inteligência artificial colocar à compreensão filosófica do humano?” Link: https://www.spfil.pt/regulamento_premio_ensaio_spf deadline: feb 2023 prize: €3,000

Kaleem's Shortform

I'm working on building a community-building-centric EA outreach office in Harvard Square, and we still don't have a great name for the office (e.g. Constellation, Lightcone, Trajan House).

Please suggest some names that you think would be great (maybe with some explanation), and you might get to name a long-lasting piece of EA community infrastructure!

The ones that come to my mind are Momentum, Gravity Well, Embedding, and Pulsar.

But you might also want to contact Naming What We Can for further suggestions (maybe you could even get "Constellation" or "Lightcone", and those offices take another name!)

JamesOz's Shortform

Social Change Lab is trying something new, and compiling interesting social movement-related research and news into a monthly (or so) digest. Check out the first edition here and sign up to receive future editions here. Feedback very much welcome!

Patrick Wilson's Shortform

Hello and help! 
I'm preparing a proposal for funding from the John Templeton Foundation for a three-year public engagement project around longevity and healthy ageing. I'm not a charity but an individual doing this not-for-profit work on top of my day job.

Do any of you lovely EAers have any experience with the John Templeton Foundation, either with applications or with grant/project management?

The deadline is 12 Aug, so please message me if you can advise or potentially collaborate.

ricoh_aficio's Shortform

Use this tool to find the vegan protein bar that's best for you:
https://docs.google.com/spreadsheets/d/1WYsVzQI79So6S5dLqba0lVAhJ03zXYMvLenmPdtPhbg/edit?usp=sharing

Sophia's Shortform

The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. future DeepMind team leaders could come out of MIT, Harvard, Stanford, etc.

Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)

utilistrutil's Shortform

EAG SF Was Too Boujee.

I have no idea what the finances for the event looked like, but I'll assume the best case that CEA at least broke even.

The conference seemed extravagant to me. We don't need so much security or staff walking around to collect our empty cups. How much money was spent to secure an endless flow of wine? There were piles of sweaters left over at the end; attendees could have opted in with their sizes ahead of time to calibrate the order.

Particularly in light of recent concerns about greater funding, it would behoove us to consider the harms of an opu... (read more)

Hi — I’m Eli from the EA Global team. Thanks for your thoughts on this — appreciate your concerns here. I’ll try to chip in with some context that may be helpful. To address your main underlying point, my take is that EA Globals have incredibly high returns on investment — EA orgs and members of the community report incredibly large amounts of value from our events. For example:

  • An attendee from an EA-aligned org said they would probably trade $5 million in donations for the contacts they made at EAGxBoston.
  • Another EA-aligned org reporting that they’ve gott
... (read more)
Annabella Wheatley:
I really agree. I think there are large benefits to things being "comfy", e.g. having good food and snacks, nice areas to sit and socialise, etc. However, it makes me feel super icky attending fancy EAGs (I also don't know how standard this is for conferences). Unlimited beverages has got to be unnecessary (and expensive).
Yonatan Cale's Shortform

4 EAG(x) events - and I still get a lot of value

Someone asked me: "You already know the EA community, no? How come you still get value from EAG?"

Well, I live in Israel. Contacting people from the international EA community is really hard: I need to discover they exist, email them, hope they reply, and at best set up a 30-minute call or so. This is such high friction.

At EAG, I can run my project plans by... everyone, easily. I even had productive Uber rides.

That's the value of EAG for me.

niplav's Shortform

epistemic status: Borderline schizopost, not sure I'll be able to elaborate much better on this, but posting anyway, since people always write that one should post on the forum. Feel free to argue against. But: Don't let this be the only thing you read that I've written.

Effective Altruism is a Pareto Frontier of Truth and Power

In order to be effective in the world one needs to coordinate (exchange evidence, enact plans in groups, find shared descriptions of the world) and interact with hostile entities (people who lie, people who want to steal your reso... (read more)

There is also the thing where having more truth leads to more power, for instance by realizing that in some particular case the EMH is false.

Rowan Oakes's Shortform

I can't find information about investing in renewable energy (beyond nuclear) and Internet infrastructure in the EA forum or community. Could someone please direct me towards some threads and/or organizations to support? Thank you.

Annabella Wheatley's Shortform

Do we all need to do intense cause prio thinking? 

Some off the cuff thoughts:

Currently I'm working on cause prioritisation: finding my key uncertainties and trying to figure out what the most important problem is and how I can help solve it. Every time I feel I'm getting somewhere in my thinking, I come up with 10 new things to consider. Although I enjoy this as an exercise, it does take up a lot of time, and it's hard to know how "worth it" doing this is. I'm now wondering where a good stopping point is / what proportion of time is useful to spend on think... (read more)

calebp's Shortform

(crosspost of a comment on imposter syndrome that I sometimes refer to)

I have recently found it helpful to think about how important and difficult the problems I care about are, and to recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very, very high, and people who can help solve them are probably incredibly useful.

So one strategy is to just try to send lots of information into the world that might help the community work out whether I can be useful (by doing my job, taking actions in the world, writing p... (read more)

lumenwrites's Shortform

I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn't kill us all. What are my best options, where can I make the most difference per dollar?

I'm new to this forum, I don't understand this field well enough to know which AGI donation will be the most effective, and I'm hoping you guys can help me out.

sreers's Shortform

Hi all! I'm planning to think through cause / path prioritization to inform my career plans and have laid out a high-level plan for this process. If anyone has a chance to take a look at it and leave any feedback that comes to mind, I'd really appreciate it! Thanks so much!

WilliamKiely's Shortform

Will MacAskill, 80,000 Hours Podcast May 2022:

Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.

I'm flagging this as something that I'm personally unsure about and tentatively disagree with.

It's unclear how much more MacAskill means by "much". My interpretation was that he probably meant something like 2-10x more likely.

My tentative view is that catastrophes that kill 99% of people are probably <2x as likely as catastrophes that kill 100% of people.

Full excerpt for those curious:

Will MacAskill:
... (read more)

I just asked Will about this at EAG and he clarified that (1) he's talking about non-AI risk, (2) by "much" more he means something like 8x as likely, and (3) most of the non-AI risk is biorisk, and his estimate of biorisk is lower than Toby's; Will said he puts bio x-risk at something like 0.5% by 2100.

Noah_il_Matto's Shortform

SOP for EAG Conferences

1 - clarify your goals

2 - clarify the types of people you'd like to have 1-1s with to meet these goals

3 - pick workshops you want to go to

4 - in the Swapcard app, delete the 1-1 time slots that fall during workshops

5 - search the Swapcard attendee list for keywords relevant to your 1-1s

6 - schedule 1-1s in a location where it will be easy to find people (i.e. not the main networking area); ask the organizers in advance if you're unsure what this will be

Notes

- don't worry about talks, since they're recorded

- actually use the 1-1 time slot feature on Swapcard (by removing... (read more)

Samin's Shortform

How do effectiveness estimates change if everyone saved dies in 10 years?

“Saving lives near the precipice”

Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?

[I’m highly uncertain about this, and I haven’t done much thinking or research]

For many orgs and interventions, the impact estimates would possibly be very different from the default ones made by, e.g., GiveWell. I'd guess the order of the most effective non-longtermist charities might change a lot as a result.

It would be interesting to ... (read more)

2tom:
I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets [https://docs.google.com/spreadsheets/d/1Kq6iHSQFr3eRz1p9KclHuJTQiaJYkpViOyneSd9KCJc/edit#gid=1362437801]. So, for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year. [Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
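A minimal sketch of the conversion described above, assuming a constant annual hazard of doom (the 20%-in-20-years figure is the example from the comment; the function name and everything else is illustrative, not part of GiveWell's actual spreadsheet):

```python
# Hedged sketch: turn a cumulative p(doom) over some horizon into an
# approximately equivalent extra annual discount rate, assuming the
# per-year hazard of doom is constant over the horizon.

def extra_annual_discount(p_doom: float, horizon_years: float) -> float:
    """Annual discount-rate increase implied by a cumulative p(doom)."""
    annual_survival = (1.0 - p_doom) ** (1.0 / horizon_years)
    return 1.0 - annual_survival

if __name__ == "__main__":
    rate = extra_annual_discount(p_doom=0.20, horizon_years=20)
    print(f"Extra discount per year: {rate:.2%}")  # ~1.11%, i.e. roughly 1% per year
```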
Samin:
I think discounting QALYs/DALYs by the probability of doom makes sense if you want a better estimate of QALYs/DALYs, but it doesn't help with estimating the relative effectiveness of charities, and so doesn't help allocate funding better. (It would be nice to input a distribution over the world ending in the next n years and get the discounted values. But it's the relative cost of ways to save a life that matters; we can't save everyone, so we want to save the most lives and reduce suffering the most, and that means we need to understand what our actions lead to so we can compare our options. Knowing how many people you're saving is instrumental to saving the most people from the dragon [https://www.lesswrong.com/posts/dpMZHpA59xFFjCqBp/the-value-of-a-life]. If it costs at least $15,000 to save a life, you don't stop saving lives because that's too much; a human life is worth far more. If we succeed, you can imagine spending stars on saving a single life. And if we don't, we'd still like to reduce suffering as much as possible and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
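To make the point about rankings concrete, here is a toy sketch (all numbers and intervention descriptions are hypothetical, not from any GiveWell analysis) of how truncating benefits at a doom year can flip the ordering of two interventions with different benefit horizons:

```python
# Toy sketch (hypothetical numbers): conditioning on the world ending in
# n years can reorder interventions whose benefits accrue over different
# time horizons.

def qalys_per_dollar(annual_qalys: float, benefit_years: float,
                     cost: float, years_until_doom: float) -> float:
    """QALYs per dollar when benefits stop accruing at the doom year."""
    realised_years = min(benefit_years, years_until_doom)
    return annual_qalys * realised_years / cost

# Intervention A: averts a young child's death (a long stream of future QALYs).
# Intervention B: relieves severe suffering now (a short, front-loaded benefit).
for doom_year in (60, 10):
    a = qalys_per_dollar(annual_qalys=1.0, benefit_years=60, cost=5_000,
                         years_until_doom=doom_year)
    b = qalys_per_dollar(annual_qalys=2.0, benefit_years=5, cost=1_000,
                         years_until_doom=doom_year)
    print(f"doom in {doom_year}y: A = {a:.4f}, B = {b:.4f} QALYs per dollar")

# With doom in 60 years, A looks better (0.0120 vs 0.0100); with doom in
# 10 years, B looks better (0.0100 vs 0.0020), so the ranking flips.
```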