All posts


Today and yesterday

Frontpage Posts

Personal Blogposts

Quick takes

Are you an EU citizen? If so, please sign this citizens' initiative to phase out factory farms (this is an approved EU citizens' initiative, so if it gets enough signatures the EU has to respond): stopcrueltystopslaughter.com. It also calls for reducing the number of animal farms over time and introducing more incentives for the production of plant proteins. (If initiatives like these interest you, I occasionally share more of them on my blog.) EDIT: If the site doesn't work, try again in a couple of hours or days. The collection has just started and the site may be overloaded. The deadline is a year away, so there's no need to worry about running out of time.
I've been thinking a bunch about a fundamental difference between the EA community and the LessWrong community. LessWrong is optimized for the enjoyment of its members. At any LessWrong event I go to, in any city, the focus is on "what will we find fun to do?" This is great. Notice that the community isn't optimized for "making the world more rational." It is a community that selects for people interested in rationality, and then, once you get these kinds of people in the same room, it tries to optimize for FUN for them.

EA as a community is NOT optimized for the enjoyment of its members. It is optimized for making the world a better place. This is a feature, not a bug, and surely it should be net positive, since its goal is by definition net positive. When planning an EAG or an EA event, you measure it on impact: say, professional connections made, or how many new high-quality AI alignment researchers you might have created on the margin. You don't measure it on how much people enjoyed themselves (or you do, but only for instrumental reasons, to get more people to come so you can continue to have impact).

As a community organizer in both spaces, I notice that I more easily leave EA events I organized feeling burnt out and less fulfilled than after similar LW/ACX events. I think the fundamental difference above explains why. Dunno if I am pointing at anything that resonates with anyone, but I don't see this discussed much among community organizers, and it seems important to highlight. Basically, in LW/ACX spaces - specifically as an organizer - I more easily feel like a fellow traveller up for a good time. In EA spaces - specifically as an organizer - I more easily feel like an unpaid recruiter.
For EA folks in tech, I'm still giving mock interviews. I'm bumping this into quick takes because my post is several years old, and I don't advertise it well.
People often justify a fast takeoff of AI by pointing to how fast AI could improve beyond some point. But The Great Data Integration Schlep is an excellent LW post about the absolute sludge of trying to do data work inside corporate bureaucracy. The key point is that even when companies would seemingly benefit from having far more insight into their work, a whole slew of incentive problems and managerial foibles prevent this from being realized. She applies this to be skeptical of a fast AI takeoff. This is also the story of computers, and the story of electricity: a transformative new technology was created, but it took decades for its potential to be realized, because so much existing infrastructure had to be upended to maximize its impact. In general, even if AI is technologically unprecedented, the social infrastructure through which AI will be deployed is much more precedented, and we should treat those barriers as genuinely slowing down AI's impacts.
I am 90% sure that most AI Safety talent isn't thinking hard enough about neglectedness. The industry is so nascent that you could look at 10 analogous industries, see what processes or institutions are valuable there but missing here, and build an organisation around the highest-impact one. The highest-impact job ≠ the highest-impact opportunity for you!

Past week

Frontpage Posts

Quick takes

Hey! I'm Kevin, an aspiring software engineer from Ghana. I've recently completed the 80k eight-week career course and I'm excited to start earning to give. I'm looking for a mentor to help me land my first job and plan my career to maximise my expected donations! I'm in GMT+2. If you'd like to meet and figure out whether we're a good fit, please drop me an email at thekevin [dot] afachao [at] gmail [dot] com or message me on LinkedIn.
Kelsey Piper's article on SB 1047 makes a claim (and I've seen similar statements elsewhere too) that, after I spent some time today reading through the bill, seems to be wrong? Liability for developers doesn't seem to depend on whether "critical harm" is actually done. Instead, if the developer fails to take reasonable care to prevent critical harm (or commits some other violation), then even if there is no critical harm done, violations that cause death, bodily harm, etc. can lead to fines of 10% or 30% of the cost of the compute used to train the model. Here's the relevant section from the bill: Has there been discussion about this somewhere else already? Is the Vox article wrong, or am I misunderstanding the bill?
I'm starting to put together plans for this year's Giving Season events (roughly, start of November to end of December). If you remember the events last year, it'd be cool to know:
1. What was memorable to you from that period?
2. Was anything in particular valuable for you or for your donation decisions?
3. Is there anything you would expect to see this year?
4. What would you hope to see this year?
Thanks!
Big AIS news imo: “The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.” https://www.commerce.gov/news/press-releases/2024/09/us-secretary-commerce-raimondo-and-us-secretary-state-blinken-announce H/T @shakeel
Organizing good EAGx meetups

EAGx conferences often feature meetups for subgroups with a shared interest or identity, such as "animal rights", "academia" or "women". They are very easy to set up - yet some of the best events. Four forms I've seen are a) speed-friending, b) brainstorming topics and discussing them in groups, c) red-teaming projects, and d) just a big pile of people talking.

If you want to maximize the amount of information transferred, form a) seems optimal, purely because 50% of people are talking at any point in time in a personalized fashion. If you want to add some choice, you can start by letting people group themselves or order themselves along some spectrum. Presenting this as "human cluster-analysis" might also make it into a nerdy icebreaker. It works great with 7-minute rounds, at the end of which you're only nudged, rather than required, to shift partners.

I loved form c) for AI safety projects at EAGx Berlin. Format: a few people introduce their projects to everyone, then grab a table and present them in more detail to smaller groups. This form might in general be used to let interesting people hold small, low-effort, interactive lectures and to use interested people as focus groups.

Form b) seems to be most common for interest-based meetups. It usually includes 1) group brainstorming of topics, 2) voting on the topics, 3) splitting up, and 4) presentations. This makes for a good low-effort event that's somewhere between a lecture and a 1-on-1 in terms of required energy. However, I see 4 common problems with this format: Firstly, steps 1) and 2) take a lot of time and create unnaturally clustered topics (as brainstorming creates topics "token-by-token", rather than holistically). Secondly, in ad hoc groups with >5 members, it's hard to coordinate who speaks next, and conversations can turn into sequences of separate inputs, i.e. members build less on each other's points. Thirdly, spontaneous conversations are hard to compress into useful takeaways that

Past 14 days

Frontpage Posts


Quick takes

Happy Ozone Day! The Montreal Protocol, a universally ratified treaty phasing out the use of ozone-destroying CFCs, was signed 37 years ago today. It remains one of the greatest examples of international cooperation to date.
It seems like some of the biggest proponents of SB 1047 are Hollywood actors and writers (e.g. Mark Ruffalo) - you might remember them from last year's strike. I think the AI safety movement has a big opportunity to partner with organised labour the way the animal welfare side of EA partnered with vegans. These are massive organisations with a lot of weight and mainstream power if we can find ways to work with them; it's a big shortcut to building serious groundswell, rather than going it alone. See also Yanni's work with voice actors in Australia - more of this!
Is anyone in the AI governance/comms space working on what public outreach should look like if lots of jobs start getting automated in < 3 years? I point to travel agents a lot, not to pick on them, but because they're salient and there are lots of them. I think there is a reasonable chance that within 3 years the industry loses 50% of its workers (3 million globally). People are going to start freaking out about this. Which means we're in "December 2019" all over again, and we all remember how bad government comms were during COVID. Now is the time to start working on the messaging!
EDIT: Someone on LessWrong linked a great report by Epoch that tries to answer exactly this. With the release of OpenAI o1, I want to ask a question I've been wondering about for a few months. Like the Chinchilla paper, which estimated the compute-optimal ratio of training data to model size, are there any similar estimates for the optimal split of compute between inference and training? The chart shown in the o1 release somewhat gets at what I want to know, but doesn't answer it completely. How much additional inference compute would a 1e25-FLOP o1-like model need to perform as well as a one-shotted 1e26-FLOP model? Additionally, for some number of queries x, what is the optimal ratio of compute to spend on training versus inference, and how does that change for different values of x? Are there any public attempts at estimating this? If so, where can I read about it?
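To be concrete about what "optimal" means here, a minimal sketch borrowing the Chinchilla framing (the parametric loss form and the C ≈ 6ND training-FLOPs approximation come from that paper; the inference term at the end is my own illustrative extension, not something the paper estimates):

```latex
% Chinchilla framing: choose model size N (parameters) and training tokens D
% to minimize a fitted loss under a fixed training-compute budget:
%   L(N, D) = E + A / N^alpha + B / D^beta,   C_train ~ 6 N D  (training FLOPs)
\min_{N,\,D}\; L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\quad \text{s.t.} \quad C_{\text{train}} \approx 6\,N\,D
% The question above asks for the analogous trade-off when the budget also
% covers inference: with x queries at roughly 2N FLOPs per generated token,
%   C_total ~ 6 N D + 2 N * (tokens per query) * x,
% so the best training/inference split would presumably depend on x.
```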
Phib · 13d
Worth having some sort of running and contributable-to tab for open questions? Can also encourage people to flag open questions they see in posts.

Past 31 days

Frontpage Posts


Quick takes

abrahamrowe · 1mo
Reflections on a decade of trying to have an impact

Next month (September 2024) is my 10th anniversary of formally engaging with EA. This date marks 10 years since I first reached out to the Foundational Research Institute about volunteering, at least as far as I can tell from my emails. Prior to that, I had probably read a fair amount of Peter Singer, Brian Tomasik, and David Pearce, who might all have been considered connected to EA, but I hadn't actually tried engaging with the community. I'd been engaged with the effective animal advocacy community for several years prior, and I think I'd volunteered for The Humane League some and had seen some of The Humane League Labs' content online. I'm not sure if The Humane League counted as being "EA" at the time (this was a year before OpenPhil made its first animal welfare grants).

This post is me roughly trying to guess at my impact since then, and to reflect on how I've changed as a person, both on my own and in response to EA. It's got a lot of broad reflections about how my feelings about EA have changed. It isn't particularly rigorously or transparently reasoned - it's more of a reflection exercise for myself than anything else. I'm mainly trying to look at what I've worked on with a really critical eye, and I make a lot of claims here that I don't provide evidence for.

I'm sharing this because the major update I've had from doing it is that, while I've generally done many of the "working-in-EA" things that are often presented as high impact, I feel the impact of my donations much more tangibly. Right now, if I think about what's made me feel best about being in EA, it's actions more in the earning-to-give direction than the direct-work direction.

My high-level view of my impact over this period is something like:
* $30,000 in counterfactually good donations.
* Overall unclear results for animals, though hopefully will have a big impact for future (primarily wild and invertebrate)
I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community. I get a general vibe in EA (and probably the world at large) that being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin). I see many EAs erroneously try to go into research and stick with it despite having very clear strengths on the operational side, insisting that they shouldn't do operations work unless they clearly fail at research first. I've personally felt this: I started my career very oriented towards research, was honestly only average or even below average at it, and then switched into management, which I think has been much higher impact (and has likely counterfactually generated at least a dozen researchers).
NickLaing · 18d
Has anyone talked with or lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes: "On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams." It seems a disaster that the Gates Foundation is funding and promoting the rapid scale-up of factory farming in Africa, and reversing this seems potentially tractable to me. Could individuals, Gates insiders, or the big animal rights orgs take this up?
Linch · 1mo
The Economist has an article about how China's top politicians view catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?" Overall this makes me more optimistic that international treaties with teeth on GCRs from AI are possible, potentially before we have warning shots from large-scale harms.
Buck · 1mo
Alex Wellerstein notes the age distribution of Manhattan Project employees. Sometimes people criticize EA for having too many young people; I think that this age distribution is interesting context for that. [Thanks to Nate Thomas for sending me this graph.]

Since July 1st

Frontpage Posts

Quick takes

Ozzie Gooen · 2mo
I've heard multiple reports of people being denied jobs in AI policy because of their history in EA. I've also seen a lot of animosity against EA from top organizations I think are important - like A16Z, Founders Fund (Thiel), OpenAI, etc. I'd expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.

This is very frustrating to me. First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaboration and cooperation - EAs not being welcome in many orgs makes this very difficult. Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, you're much less likely to do work here. And a lighter third - it's just really not fun to have a significant stigma associated with you. It means that many of the people I respect the most, who I think are doing some of the most valuable work out there, will just have a much tougher time in life.

Who's at fault here? I think the first big issue is that resistance gets created against all interesting and powerful groups. There are similar stigmas against people across the political spectrum, for example, at least among certain crowds. A big part of "talking about morality and important issues, while having something non-obvious to say" is being hated by a bunch of people. In this vein, arguably we should be aiming for a world where EA winds up with an even larger stigma.

But a lot clearly has to do with the decisions made by what seems like a few EAs. FTX hurt the most. I think the OpenAI board situation resulted in a lot of EA-paranoia, arguably with very little upside. More recently, I think certain EA actions in AI policy are getting a lot of flak. There was a brief window, pre-FTX-fail, where there was a very positive EA media push. I've seen almost nothing since. I think that "EA marketing" has been highly
One of the weaker parts of the Situational Awareness essay is Leopold's discussion of international AI governance. He calls the notion of an international treaty on AI "fanciful", claiming that:
* It would be easy to "break out" of treaty restrictions
* There would be strong incentives to do so
* So the equilibrium is unstable
That's basically it - international cooperation gets about 140 words of analysis in the 160-page document. I think this is seriously underargued, and right now it seems harmful to propagate a meme like "international AI cooperation is fanciful". This is just a quick take, but I think it's the case that:
* It might not be easy to break out of treaty restrictions. Of course it will be hard to monitor and enforce a treaty. But there's potential to make it possible through hardware mechanisms, cloud governance, inspections, and other mechanisms we haven't even thought of yet. Lots of people are paying attention to this challenge and working on it.
* There might not be strong incentives to do so. Decision-makers may take the risks seriously and calculate that the downsides of an all-out race exceed the potential benefits of winning. Credible benefit-sharing and shared decision-making institutions may convince states they're better off cooperating than trying to win a race.
* International cooperation might not be all-or-nothing. Even if we can't (or shouldn't!) institute something like a global pause, cooperation on narrower issues to mitigate threats from AI misuse and loss of control could be possible. Even in the midst of the Cold War, the US and USSR managed to agree on issues like arms control, non-proliferation, and technologies like anti-ballistic missiles.
(I critiqued a critique of Aschenbrenner's take on international AI governance here, so I wanted to clarify that I actually do think his model is probably wrong here.)
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he's been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune. Are there concerted efforts in the EA community to get these people on board? Like, is there a Google doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn't worth pursuing or is likely to fail? Have people tried? I'm a bit of an outsider here, so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take! https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
Joris P · 1mo
Many semesters are about to kick off in the next ~month, meaning the busiest and most important time of the year is coming up for many EA university group organizers.   I'm very grateful for the work of university group organizers around the world. University groups have been a place where so many people learned about EA ideas and met others who are equally motivated to do good in an impartial and scope-sensitive way. Many of the people who got involved with EA through university groups are now making progress on fighting very difficult problems in the world. Thank you to everyone who was and is making that possible by helping run a university group!   If you know a university group organizer, please consider sending them a message to wish them all the best with promoting EA ideas this month and beyond!
JWS 🔸 · 1mo
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, you can find one here.>

This makes me feel bad, and I'm going to try to articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so, and I will reflect and respond in greater detail then.
* At a gut level, this feels like an influential member of the EA community deciding to 'defect' and leave when the going gets tough. It's like deciding to 'walk away from Omelas' when you had a role in the leadership of the city and benefitted from that position. In contrast, I think the right call is to stay and fight for EA ideas in the 'Third Wave' of EA.
* Furthermore, if you do think that EA is about ideas, then I don't think disassociating from the name of EA without changing your other actions is going to convince anyone about what you're doing by 'getting distance' from EA. Ben is a GWWC pledger, an 80k founder, and is focusing his career on (existential?) threats from advanced AI. To do this and then deny being an EA feels disingenuous to me for ~most plausible definitions of EA.
* Similar considerations make me very pessimistic that the 'just take the good parts and people from EA, rebrand the name, disavow the old name, continue operating as per usual' strategy will work at all.
* I also think that actions/statements like this make it more likely for the whole package of EA ideas/community/brand/movement to slip into a negative spiral which ends up wasting its potential, and given my points above such a collapse would also seriously harm any attempt to get a 'totally not EA yeah we're definitely not those guys' m
