José Gonzalez (GWWC member, EA Global performer, winner of a Swedish Grammy award) just released a new song inspired by EA and (maybe?) The Precipice.
Speak up
Stand down
Pick your battles
Look around
Reflect
Update
Pause your intuitions and deal with it
It's not as direct as the songs in the Rationalist Solstice, but it's more explicitly EA-vibey than anything I can remember from his (apparently) Peter Singer-inspired 2007 album, In Our Nature.
A case of precocious policy influence, and my pitch for more research on how to get a top policy job.
Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, and published work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Colum...
My impression is that a lot of her quick success was because her antitrust work tapped into progressive anti-Big Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.
Uncommon career advice
These are some of my loose, unstructured, and possibly less common pieces of advice related to careers.
• When applying for jobs, looking at resources for hiring managers can be much more helpful than looking at resources for applicants.
• Apply too often rather than not often enough. I sometimes hear that people choose not to apply for something because they assume it is unlikely they will be accepted and that their time would be better used upskilling in their field. I think people in these situations should apply more often, to get more experience with the app...
Thank you for this great post!
The first piece of advice about looking at resources for hiring managers is quite insightful! I will try to incorporate it in my job search.
I can totally relate to the remaining points. Entering the job market after a really long break has made me even less likely to apply than others. I find pushing myself to apply for a job to be a learning and confidence-building experience in its own right. Each application episode helps me overcome my fears. Rejection comes not as a rejection but as feedback that I can use to imp...
I've started trying my best to consistently address people on the EA Forum by username whenever I remember to do so, even when the username clearly reflects their real name (e.g. Habryka). I'm not sure this is the right move, but overall I think it creates slightly better cultural norms, since it pushes us (slightly) towards pseudonymous commenting/"Old Internet" norms, which I think is slightly better for truth-seeking and for judging arguments by their quality rather than being too conscious of status-y/social monkey effects...
I think it might be interesting/valuable for someone to create "list of numbers every EA should know", in a similar vein to Latency Numbers Every Programmer Should Know and Key Numbers for Cell Biologists.
One obvious reason against this is that maybe EA is too broad and the numbers we actually care about are too domain specific to specific queries/interests, but nonetheless I still think it's worth investigating.
Thanks for the karmically beneficial tip! I've now posted this question in its own right.
If preference utilitarianism is correct there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
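A standard illustration of this point (my addition, not part of the original comment) is the lexicographic preference order on pairs: it is a perfectly coherent ranking, yet it is discontinuous and provably has no utility representation.

```latex
% Lexicographic preferences on $\mathbb{R}^2$: the first coordinate dominates.
\[
(x_1, y_1) \succ (x_2, y_2)
\iff
x_1 > x_2 \;\text{ or }\; \bigl(x_1 = x_2 \text{ and } y_1 > y_2\bigr)
\]
% Sketch of why no utility function exists: if $u : \mathbb{R}^2 \to \mathbb{R}$
% represented $\succ$, then for each $x$ the interval $(u(x,0),\, u(x,1))$
% would be nonempty, and the intervals for distinct $x$ would be disjoint.
% Picking one rational from each interval would inject the uncountable reals
% into the countable rationals -- a contradiction.
```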
What do you mean by correct?
When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement: "there may be no utility function that accurately describes the true value of things" ?
Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?
Practical/economic reasons why companies might not want to build AGI systems
(Originally posted on the EA Corner Discord server.)
First, most companies that are using ML or data science are not using SOTA neural network models with a billion parameters, at least not directly; they're using simple models, because no competent data scientist would use a sophisticated model where a simpler one would do. Only a small number of tech companies have the resources or motivation to build large, sophisticated models (here I'm assuming, like OpenAI does, that model siz...
IMF climate change challenge
"How might we integrate climate change into economic analysis to promote green policies?
To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change."
Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.
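As a toy sketch of the tallying step (the org names and data shapes here are invented for illustration, not part of the proposal):

```python
# Hypothetical sketch: tally the charities recommended by applicants and
# surface orgs that were mentioned often but never applied themselves.
from collections import Counter

applications = {               # applicant -> charities they recommend
    "Org A": ["Org B", "Org C"],
    "Org B": ["Org C", "Org D"],
    "Org E": ["Org C", "Org D"],
}

# Count every recommendation across all applications.
mentions = Counter(c for recs in applications.values() for c in recs)

# Keep only orgs that haven't applied, most-mentioned first.
applicants = set(applications)
outreach = [(org, n) for org, n in mentions.most_common()
            if org not in applicants]
print(outreach)
```

In practice a grantmaker would want fuzzy matching on org names and a threshold before reaching out, but the core is just a frequency count minus the existing applicant pool.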
Here's a crazy idea. I haven't run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
I wonder if there's something in between these two points:
Certificates of impact - Paul Christiano, 2014
The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)
The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020
Making Impact Purchases Viable - casebash, 2020
Plan for Impact Certificate MVP - lifelonglearner, 2020
Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019
Altruistic equity allocation - Paul Christiano, 2019
Social impact bond - Wikipe...
Hey schethik, did you make progress with this?
A six-line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability
(2) An AGI could be sufficiently intelligent that it's limited only by physics and computability, but humans can't be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons, and the AGI's goals will be met
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI's goals
At the start of Chapter 6 of The Precipice, Ord writes:
To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others a...
According to Fleck's thesis, Matsés has nine past tense conjugations, each of which express the source of information (direct experience, inference, or conjecture) as well as how far in the past it was (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit which mean something like "perhaps" and another particle, ba, that means something like "I doubt that..." Unfortunately for us, this doesn't seem more expressive than ...
Recently I was asked for tips on how to be less captured by motivated reasoning and related biases, a goal/quest I've slowly made progress on for the last 6+ years. I don't think I'm very good at this, but I do think I'm likely above average, and it's also something I aspire to be better at. So here is a non-exhaustive and somewhat overlapping list of things that I think are helpful:
Should there be a new EA book, written by somebody both trusted by the community and (less importantly) potentially externally respected/camera-friendly? Kinda a shower thought based on the thinking that maybe Doing Good Better is a bit old by now for its intended use-case of conveying EA ideas to newcomers. I think the 80,000 Hours and EA handbooks were maybe trying to do this, but for whatever reason didn't get a lot of traction? I suspect that the issue is something like not having a sufficiently strong "voice"/editorial line, and what you want for a ...
I think The Precipice is good, both directly and as a way to communicate a subsection of EA thought, but EA thought is not predicated on a high probability of existential risk, and the nuance might be lost on readers if The Precipice becomes the default "intro to EA" book.
Hi! We'll be holding rounds of "introduction to EA" talks in Sweden this summer. Does anyone know if there are scripts and/or slides (in English, or alternatively in Swedish, if available) that have already been developed that we could go off of? And is there someone here with experience giving introductory talks who would be willing to give some tips and pointers? Would be super appreciated!
Thank you Kevin, I wasn't aware of that document, that's really helpful!
New article about wild animal suffering, interventions, genome editing and gene drives:
Johannsen, Kyle (2021). Humanitarian Assistance for Wild Animals. The Philosophers' Magazine 93:33-37. Available on PhilArchive: https://philarchive.org/archive/JOHHAF-5
Causal vs evidential decision theory
I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. Decision theories are a pretty well-worn topic in EA circles and I'm definitely not adding new insights here. These are just some fairly naive thoughts-out-loud about how CDT and EDT handle various scenarios. If you've already thought a lot about decision theory you probably won't learn anything from this.
T...
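To make the CDT/EDT contrast concrete, here is a toy expected-value calculation for Newcomb's problem (the predictor accuracy and payoffs are my assumed numbers, not taken from the post):

```python
# Toy Newcomb's problem: a predictor fills the opaque box with BIG iff it
# predicted you would one-box; the transparent box always holds SMALL.
ACC = 0.99            # assumed predictor accuracy
SMALL = 1_000         # transparent box payoff
BIG = 1_000_000       # opaque box payoff

# EDT conditions on the evidence your choice provides about the prediction:
# one-boxers were very likely predicted to one-box, and vice versa.
edt_one_box = ACC * BIG
edt_two_box = (1 - ACC) * BIG + SMALL

# CDT treats the prediction as causally fixed: for any fixed probability
# p_full that the opaque box is already full, two-boxing gains exactly SMALL.
def cdt_gap(p_full):
    one_box = p_full * BIG
    two_box = p_full * BIG + SMALL
    return two_box - one_box

print(edt_one_box, edt_two_box, cdt_gap(0.5))
```

With these numbers EDT strongly favours one-boxing, while CDT recommends two-boxing regardless of how likely the box is to be full, which is exactly the kind of divergence the post discusses.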
Thanks, that's interesting. I've heard of it, but I haven't looked into it.
[PAI vs. GPAI]
So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.
Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "How are you different from (G)PAI?".
Can anyone help?
At first glance it seems that:
I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so, whereas GPAI is a diplomatic apparatus for Trudeau and Macron to influence the conversation surrounding AI.
Minor UI note: I missed the EAIF AMA multiple times (even after people told me it existed) because my eyes automatically glaze over pinned tweets. I may be unusual in this regard, but thought it worth flagging anyway.