The banner will only be visible on desktop. If you can't see it, try expanding your window.
If you’d like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.
Anything that qualifies as good ...
I assume the argument is that neurotic people suffer more when they don't get resources, so resources should go to more neurotic people first?
I think that's correct in an abstract sense but wrong in practice for at least two reasons:
EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here!
Ticket discounts are available and we have limited travel support.
Utrecht is very easy to get to. You can fly/Eurostar to Amsterdam and then every 15 mins there's a direct train to Utrecht, which only takes 35 mins (and costs €10.20).
Applicants from elsewhere are encouraged to apply but the bar for getting in is much higher.
Our team at Epoch recently updated the org's website.
I'd be curious to receive feedback if anyone has any!
What do you like about the design? What do you dislike?
How can we make it more useful for you?
Does requiring ex-ante Pareto superiority incentivise information suppression?
Assume I emit x kg of carbon dioxide. Later on, I donate to offset 2x kg of carbon dioxide emissions. The combination of these two actions seems to make everyone better off in expectation. It’s ex-ante Pareto superior. Even though we know that my act of emitting carbon and offsetting it will cause the deaths of different individuals due to different extreme weather events compared to not emitting at all, climate scientists report that higher carbon emissions will make the s...
Interesting!
Fleurbaey and Voorhoeve wrote a related paper: https://doi.org/10.1093/acprof:oso/9780199931392.003.0009
FWIW, GPT said the greenhouse effect is not stronger locally to the emissions. So, I would guess that if you can offset and emit the same kind of greenhouse gas molecules roughly simultaneously, it would be very unlikely we'd be able to predict which regions are made worse off by this than neither emitting nor offsetting.
I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of four people that worked on trying to understand language model features in context, leading to t...
With another EAG nearby, I thought now would be a good time to push out this draft-y note. I'm sure I'm missing a mountain of nuance, but I stand by the main messages:
I think there are two things EAs could be doing more of, on the margin. They are cheap, easy, and have the potential to unlock value in unsuspecting ways.
I say this 15 times a week. It's the most no-brainer thing I can think of, with a ridiculously low barrier to entry; it's usually net-positive for one while often only drawing on unproductive hours of t...
One habit I often recommend to make that second piece of advice stick even more: introduce people to other people as soon as you think of it (i.e. pause the conversation and send them an email address or list of names, or open a thread between the two people).
I often pause 1:1s to find links or send someone a message, because I'm prone to forgetting follow-up actions unless I do them immediately (or write them down).
I've recently made an update to our Announcement on the future of Wytham Abbey, saying that since this announcement, we have decided that we will use some of the proceeds on Effective Ventures' general costs.
Mobius (the Bay Area-based family foundation where I work) is exploring new ways to remove animals from the food system. We're looking for a part-time Program Manager to help get more talented people who are knowledgeable about farmed animal welfare and/or alternative proteins into US government roles. This entrepreneurial generalist would pilot a 3-6 month program to support promising students and early graduates with applying to and securing entry-level Congressional roles. We think success here could significantly improve thoughtful policymaking on farme...
Not sure how to post these two thoughts so I might as well combine them.
In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
However, I also think many lessons from SBF's personal statements e.g. his interview on 80k are still as valid as ever. Just off the top of my head:
I'm surprised by the disagree votes. Is this because people think I'm saying, 'in the case of whether it is ever OK to take a harmful job in order to do more good, one ought not to say what one truly believes'?
To clarify, that's not what I'm trying to say. I'm saying we should have nuanced thoughts about whether it is ever OK to take a harmful job in order to do more good, and we should make sure we're expressing those thoughts in a nuanced fashion (similar to the 80k article I linked). If you disagree with this I'd be very interested in hearing your reasoning!
In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I'm still quite confused about why many people seem to disagree with the view I expressed, and I'm interested in engaging more to get a better understanding of their perspective.
At the least, I thought I'd write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.
The core thesis that I was trying to defend is the following view:
My view: It...
Perceived counter-argument:
My proposed counter-argument, loosely based on the structure of yours:
I am concerned about the H5N1 situation in dairy cows and have written an overview document to which I occasionally add new learnings (new to me or new to the world). I also set up a WhatsApp community that anyone is welcome to join for discussion & sharing news.
In brief:
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.
I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves ...
Something I'm confused about: what is the threshold that needs meeting for the majority of people in the EA community to say something like "it would be better if EAs didn't work at OpenAI"?
Imagining the following hypothetical scenarios over 2024/25, I can't confidently predict whether they'd individually cause that response within EA.
Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging I do have a limited amount of time I want to spend on the Forum and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]
If you do want to get in touch about anything, please reach out and I'll try my very best...
(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK?
I want to try and gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.
This would be next door to CEEALAR (the building is potentially coming on the market), but mos...
What is the best practice for dealing with biased sources? For example, if I'm writing an article critical of EA and cite a claim made by Émile Torres, would it be misleading not to mention that they have an axe to grind?