I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has led me to notice some pretty interesting dynamics that naive consequentialists/utilitarians seem unaware of. One concern is that you might be able to find arguments for any conclusion you go looking for; the counter to this is that your intuition doesn't give random answers, and is actually fairly reliably correct, hence explicit arguments that explain your... (read more)
Essay Prize of the Portuguese Philosophy Society: philosophical papers on artificial intelligence
I'm not sure this will interest top researchers in AI philosophy, but someone might see this as low-hanging fruit:
this year's "PRÉMIO DE ENSAIO DA SOCIEDADE PORTUGUESA DE FILOSOFIA" is about the challenges AI poses for "the philosophical understanding of the human".
"Que desafios pode a inteligência artificial colocar à compreensão filosófica do humano?" ("What challenges can artificial intelligence pose to the philosophical understanding of the human?")
Deadline: February 2023.
I'm working on building a community-building-centric EA outreach office in Harvard Square, and we still don't have a great name for it (cf. Constellation, Lightcone, Trajan House).
Please suggest some names you think would be great (ideally with some explanation), and you might get to name a long-lasting piece of EA community infrastructure!
The ones that come to my mind are Momentum, Gravity Well, Embedding, and Pulsar.
But you might want to contact Naming What We Can for further suggestions (maybe you could even get "Constellation" or "Lightcone", and they get another name!)
Social Change Lab is trying something new, and compiling interesting social movement-related research and news into a monthly (or so) digest. Check out the first edition here and sign up to receive future editions here. Feedback very much welcome!
Hello and help! I'm preparing a proposal for funding from the John Templeton Foundation for a three-year public engagement project on longevity and healthy ageing. I'm not a charity, just an individual doing this not-for-profit work on top of my day job.
Do any of you lovely EAers have experience with the John Templeton Foundation, either in applying or in grant/project management?
The deadline is 12 Aug, so please message me if you can advise or potentially collaborate.
Use this tool to find the vegan protein bar that's best for you: https://docs.google.com/spreadsheets/d/1WYsVzQI79So6S5dLqba0lVAhJ03zXYMvLenmPdtPhbg/edit?usp=sharing
The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences: future DeepMind team leaders could come out of MIT, Harvard, Stanford, etc.
Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)
I have no idea what the finances for the event looked like, but I'll assume the best case that CEA at least broke even.
The conference seemed extravagant to me. We don't need so much security, or so many staff walking around to collect our empty cups. How much money was spent to secure an endless flow of wine? There were piles of sweaters left over at the end; attendees could have opted in with their sizes ahead of time to calibrate the order.
Particularly in light of recent concerns about greater funding, it would behoove us to consider the harms of an opu... (read more)
Hi — I’m Eli from the EA Global team. Thanks for your thoughts on this — I appreciate your concerns here. I’ll try to chip in with some context that may be helpful. To address your main underlying point, my take is that EA Globals have incredibly high returns on investment — EA orgs and members of the community report enormous amounts of value from our events. For example:
Someone asked me: "You already know the EA community, no? How come you still get value from EAG?"
Well - I live in Israel. Contacting people from the international EA community is really hard. I need to discover they exist, email them, hope they reply, and at best - set up a 30 minute call or so. This is such high friction.
At EAG, I can run my project plans by... everyone, easily. I even had productive Uber rides.
That's the value of EAG for me.
epistemic status: Borderline schizopost, not sure I'll be able to elaborate much better on this, but posting anyway, since people always write that one should post on the forum. Feel free to argue against. But: Don't let this be the only thing you read that I've written.
In order to be effective in the world one needs to coordinate (exchange evidence, enact plans in groups, find shared descriptions of the world) and interact with hostile entities (people who lie, people who want to steal your reso... (read more)
There is also the thing where having more truth leads to more power, for instance by realizing that in some particular case the EMH is false.
This seems like a major success in influencing US policy.
I can't find information about investing in renewable energy (beyond nuclear) and internet infrastructure on the EA Forum or in the community. Could someone please point me towards some threads and/or organizations to support? Thank you.
Do we all need to do intense cause prio thinking?
Some off the cuff thoughts:
Currently I’m working on cause prio: finding my key uncertainties and trying to figure out what the most important problem is and how I can help solve it. Every time I feel I’m getting somewhere in my thinking, I come up with 10 new things to consider. Although I enjoy this as an exercise, it takes up a lot of time, and it’s hard to know how “worth it” doing this is. I‘m now wondering where a good stopping point is / what proportion of time is useful to spend on think... (read more)
(crosspost of a comment on imposter syndrome that I sometimes refer to) I have recently found it helpful to think about how important and difficult the problems I care about are, and to recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very, very high, and people who can help solve them are probably incredibly useful. So one strategy is to just try to send lots of information into the world that might help the community work out whether I can be useful (by doing my job, taking actions in the world, writing p... (read more)
I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn't kill us all. What are my best options, where can I make the most difference per dollar?
I'm new to this forum, I don't understand this field well enough to know which AGI donation will be the most effective, and I'm hoping you guys can help me out.
I'm planning to think through cause / path prioritization to inform my career plans and have laid out a high-level plan for this process. If anyone has a chance to take a look at it and leave any feedback that comes to mind, I'd really appreciate it!
Thanks so much!
Will MacAskill, 80,000 Hours Podcast May 2022:
Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
I'm flagging this as something that I'm personally unsure about and tentatively disagree with.
It's unclear how much more MacAskill means by "much". My interpretation was that he probably meant something like 2-10x more likely.
My tentative view is that catastrophes that kill 99% of people are probably <2x as likely as catastrophes that kill 100% of people.
Full excerpt for those curious:
I just asked Will about this at EAG, and he clarified that (1) he's talking about non-AI risk, (2) by "much" more he means something like 8x as likely, and (3) most of the non-AI risk is biorisk, and his estimate of biorisk is lower than Toby's; Will said he puts bio x-risk at something like 0.5% by 2100.
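A rough back-of-the-envelope sketch of what these clarified figures would imply, assuming (my assumption, not Will's) that the ~8x ratio applies directly to the 0.5% bio x-risk estimate:

```python
# Back-of-the-envelope on the clarified figures.
# Assumption (mine, not Will's): the ~8x "much more likely" ratio
# applies directly to the 0.5% extinction-level biorisk figure.
p_kill_100 = 0.005          # stated bio x-risk (kills 100%) by 2100
ratio = 8                   # "much more likely" clarified as ~8x
p_kill_99 = p_kill_100 * ratio
print(f"{p_kill_99:.1%}")   # ~4.0% chance of a 99%-fatality catastrophe by 2100
```

On these numbers, a 99%-fatality catastrophe would sit around 4% by 2100 — still a rough illustration, not anything Will stated directly.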
SOP for EAG Conferences
1. Clarify your goals.
2. Clarify the types of people you'd like to have 1-1s with to meet those goals.
3. Pick the workshops you want to go to.
4. In the Swapcard app, delete the 1-1 time slots that fall during workshops.
5. Search the Swapcard attendee list for keywords relevant to your 1-1s.
6. Schedule 1-1s in a location where it will be easy to find people (i.e. not the main networking area); ask the organizers in advance if you're unsure where this will be.
- Don't worry about talks, since they're recorded.
- Actually use the 1-1 time slot feature on Swapcard (by removing... (read more)
“Saving lives near the precipice”
Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?
[I’m highly uncertain about this, and I haven’t done much thinking or research]
For many orgs and interventions, the impact estimates would likely differ a lot from the default ones made by, e.g., GiveWell. I’d guess the ordering of the most effective non-longtermist charities might change a lot as a result.
It would be interesting to ... (read more)
Wrote a post: https://forum.effectivealtruism.org/posts/hz2Q8GgZ28YKLazGb/saving-lives-near-the-precipice-we-re-doing-it-wrong