I think I'm sympathetic to Oxford's decision.
By the end, the line between genuine scientific inquiry and activist 'research' got quite blurry at FHI. I don't think papers such as 'Proposal for a New UK National Institute for Biological Security' belong in an academic institution, even if I agree with the conclusion.
One thing that stood out to me reading the comments on Reddit was how much of the poor reception could have been avoided with slightly clearer communication.
For people such as MacAskill, who are deeply familiar with effective altruism, the question: "Why would SBF pretend to be an Effective Altruist if he was just looking to do fraud?" is quite the conundrum. Of all the types of altruism, why specifically pick EA as the vehicle to smuggle your reputation? EA was already unlikeable and elitist before the scandal. Why not donate to puppies and Ha...
I think I am misunderstanding the original question then?
I mean if you ask: "what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students"
then the reach is not the 10 million people watching the show, it's the people you get a chance to speak to.
Wasn't the Future Fund quite explicitly about longtermist projects?
I mean if you worked for an animal foundation and were in a call about GiveDirectly, I can understand that somebody might say: "Look, we are an animal fund; global poverty is outside our scope".
Obviously saying "I don't care about poverty" or something sufficiently close that your counterpart remembers it as that, is not ideal, especially not when you're speaking to an ex-minister of the United Kingdom.
But before we get mad at those who ran the Future Fund, please consider there's much cont...
I'm working on an article about gene drives to eradicate malaria, and am looking for biology experts who can help me understand certain areas I'm finding confusing and fact check claims I feel unsure about.
If you are a masters or grad student in biology and would be interested in helping, I would be incredibly grateful.
An example of a question I've been trying to answer today:
How likely is successful crossbreeding between subspecies of Anopheles gambiae (such as Anopheles gambiae s.s. and Anopheles arabiensis), and how likely is successful crossbreed...
a devastating argument, years of work wasted. Why oh why did I insist that the book's front cover had to be a snowman?
I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.
If Open Phil actually were to start publishing their internal analyses behind each grant, I would bet you at good odds that the following scenario plays out on the EA Forum:
I think you are placing far too little faith in the power of the truth. None of the events you list above are bad. It's implied that they are bad because they will cause someone to unfairly judge Open Phil poorly. But why presume that more information will lead to worse judgment? It may lead to better judgment.
As an example, GiveWell publishes detailed cost-effectiveness spreadsheets and analyses, which definitely make me take their judgment way more seriously than I would otherwise. They also provide fertile ground for criticism (a popular recent magazine...
As a critic of many institutions and organizations in EA, I agree with the above dynamic and would like people to be less nitpicky about this kind of thing (and I tried to live up to that virtue by publishing my own quite rough grant evaluations in my old Long Term Future Fund writeups).
There's a lot of room between publishing more than ~1 paragraph and "publishing their internal analyses." I didn't read Vasco as suggesting publication of the full analyses.
Assertion 4 -- "The costs for Open Phil to reduce the error rate of analyses would not be worth the benefits" -- seems to be doing a lot of work in your model here, but it rests on assumptions about the nature and magnitude of the errors that would be detected. If a number of errors were material (in the sense that correcting them would have changed the grant/no grant decision,...
Thanks for the thoughtful reply, Mathias!
I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.
I think this applies to organisations with uncertain funding, but not Open Philanthropy, which is essentially funded by a billionaire quite aligned with their strategy?
...The internal analyses from Open Phil I’ve been privileged to see were pretty good. They were also made by humans, who
I haven't seen the series, but am currently halfway through the second book.
I think it really depends on the person. The person I imagine would watch 3 Body Problem, get hooked, and subsequently ponder how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post?
But sure, if someone mentioned to me they watched and liked the series and they don't know about EA already, I think it could be a great way to start a conversation about EA and Longtermism.
I think there's a huge difference in potential reach between a major TV series and a LessWrong post.
According to this summary from Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries.
Whereas a good LessWrong post might get 100 likes.
We should be more scope-sensitive about public impact!
Relevant to the discussion is a recently released book by Dirk-Jan Koch, who was Chief Science Officer in the Dutch Foreign Ministry (which houses their development efforts). The book explores the second-order effects of aid and their implications for effective development assistance: Foreign Aid and Its Unintended Consequences.
In some ways, the arguments for needing to focus more on second-order effects are similar to the famous 'Growth and the case against randomista development' forum post.
The west didn't become wealthy through marginal health interven...
Just FYI, Dean Karlan doesn't run USAID; he's its Chief Economist. Samantha Power is the Administrator of USAID.
I think Bryan Caplan is directionally correct, but his argumentation in this post is incredibly sloppy.
A Marxist could make the exact same complaint as Bryan Caplan, but with the signs flipped: why do all these economists focus on RCTs for educational interventions, and never once consider that the best educational intervention is to rise up in violent revolution and overthrow our capitalist oppressors?
I don't recall any of the RCT papers I've read being particularly heavy on normative claims. Usually they'll just say:
"this intervention had a measurab...
Consider joining hackathons such as the ones organized by Apart Research. Anyone can join and get to work on problems directly related to AI Safety.
If you do a good project, you can put that on your resume and have something to speak about at your next interview.
I think there are at least two categories:
I'm more interested in what we can do to encourage the latter group. My impression is that many senior people are reluctant to post, as they don't have time to write something sufficiently well-argued and respond to the comments.
Instead, many good discussions take place in Signal groups, Google Docs and email threads. In a perfect world, these discussions would be on the forum. The issue rig...
Does Claude-3 push capabilities?
I think it can be a fun exercise to interpret CEOs' statements literally and see what they imply.
If Dario Amodei claims they don't want to push capabilities, I think an interesting question to ask is in what sense releasing the world's best LLM isn't pushing capabilities.
One possibility is that they no longer consider releasing improved LLMs to meaningfully push the frontier. If Claude-3 spurs OpenAI to push a quicker release of GPT-4.5, this would not be an issue, as releasing ever more ref...
I thought the video was excellent, and the highlights of your article were the concrete ideas and examples of good communication.
More concrete ideas please! I don't think anyone will disagree that EA hasn't been the best at branding itself, but in my experience it's easier said than done!
Really cool experiment!
Was it possible to track to what extent the more engaging ads drove conversions? (donations made, pledges taken, etc.)
My hypothesis would be the more engaging ads get more people onto the website, but those people will be much less likely to follow through (and especially with significant amounts), than for example a very targeted and nerdy ad aimed at wealthy tech workers.
I think this leaves out what is perhaps the most important step in making a quality forecast, which is to consider the base rates!
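For instance, a minimal sketch of what anchoring on a base rate can look like, treating the base rate as a Bayesian prior and updating on case-specific evidence (all numbers here are hypothetical, purely for illustration):

```python
# Start from the base rate (prior), then update on case-specific evidence.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

base_rate = 0.10  # hypothetical: ~10% of comparable cases resolve "yes"
posterior = bayes_update(base_rate,
                         p_evidence_given_h=0.6,
                         p_evidence_given_not_h=0.2)
print(round(posterior, 3))  # → 0.25
```

The point of starting from the outside view is visible in the numbers: even moderately favorable inside-view evidence only moves a 10% base rate to 25%, rather than the 60%+ an evidence-only guess might suggest.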
Signal boosting my twitter poll, which I am very curious to have answered:
https://twitter.com/BondeKirk/status/1758884801954582990
Basically the question I'm trying to get at is whether having hands-on experience training LLMs (proxy for technical expertise) makes you more or less likely to take existential risks from AI seriously.
and even if they were solvent at the time, that does not mean they were not fraudulent.
If I took all my customers money, which I had promised to safekeep, and went to the nearest casino and put it all on red, even if I won it would still be fraud.
In conclusion, I think that rather than being overly focused on finding the most effective means of doing good, we should also be concerned with becoming more altruistic, caring and compassionate.
I strongly agree with the last half of this sentence. A rocket engine is only valuable insofar as it is pointed in the right direction. Just as it makes sense to practice using spreadsheets to systematize one's decision-making, I think it makes sense to think about ways to become more compassionate and kind.
We do not know how to make a PAI which does not kill literally everyone.
We don't know how to make a PAI that does kill literally everyone either. What would the world have to look like for you to be pro more AI research and development?
Just did it, still works. You can donate to what looks like any registered US charity, so plenty of highly effective options whether you care about poverty or animal welfare.
There are a few I know of:
Ah, today I learned! Thanks for correcting that. For what it's worth, I was vegan for two years and have been vegetarian for six.
Do you happen to know about the bioavailability claims of animal versus plant protein?
They literally don't. Animal proteins contain every essential amino acid, whereas any plant protein will only have a subset.
This is a common misconception!
I'm quite excited about cricket protein! Nutritionally it's superior to vegan protein supplements, especially for people who are otherwise vegan and won't get animal protein.
My intuition is that it very much comes down to whether one views an undisturbed cricket life as net-positive or negative. A cricket farm breeds millions of crickets in a 6 week cycle where the crickets are frozen to death not long before they naturally would die of old age.
Rethink Priorities recently incubated the Insect Institute, which I think is exploring insect sentience. They're mo...
There’s nothing magical about “animal protein.” Plants and plant-based protein powders provide the same nutrients, minus the moral atrocity.
Insect sentience is debated, but I’m not sure why we’d take the risk when we can just go vegan.
I’m highly skeptical that farmed crickets would live “undisturbed” lives, given the historical track record of how animals are treated when we optimize their lives for meat production rather than their own welfare. Generally, we should treat sentient beings as an end in themselves, not as a means to an end.
Bravo! This really sets a bar for the quality of inquiry we should strive for in this community.
Forgive me for having the IQ of a shrimp, but could you spell out a concrete problem that the Odyssean Process could be used to solve?
ie:
problem: "People disagree over what colors the new metro line should be"
hypothetical process: "12 people sit in a room and propose color palettes. Those color palettes are handed to a panel of 100 randomly picked citizens to deliberate on and finally vote upon"
I skimmed through the report and am pretty confused as to what concretely the process is.
That's a really cool point, do share those sources!
Are there any studies on which calories get cut when people go on semaglutide? I imagine the empty carbs would go before the beef, but maybe that's already factored into the estimate?
The latest reports from CEARCH might be of interest to the new team:
Hypertension reduction through salt taxation:
https://drive.google.com/file/d/1R2ul47NtD-dJ7D7rcHFZ0z7h0JqcFxK_/view
Diabetes through sugar-soda tax:
https://drive.google.com/file/d/1UrYZUGbLn5LeTRVRZYdiY2EorsmXxQwR/view
GiveDirectly goes into detail in this blog post: https://www.givedirectly.org/drc-case-2023/
The founder of GiveDirectly also discusses the fraud case in this 80k podcast: https://open.spotify.com/episode/4yKwimUbdzPeg9MWTuJOoI?si=0eb1f2d942914963
For those who agree with this post (I at least agree with the author's claim if you replace 'most' with 'more'), I encourage you to think about what you personally can do about it.
I think EAs are far too willing to donate to traditional global health charities, not due to them being the most impactful, but because they feel the best to donate to. When I give to AMF I know I'm a good person who had an impact! But this logic is exactly what EA was founded to avoid.
I can't speak for animal welfare organizations outside of EA, but at least for the ones that have come ou...
I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this.
<3 This is super awesome / inspirational, and I admire you for doing this!
Given it is the Giving Season, I'd be remiss not to point out that ACE currently has donation matching for their Recommended Charity Fund.
I am personally waiting to hear back from RC Forward on whether Canadian donations are also eligible for said donation matching, but for American EAs at least, this seems like a great no-brainer opportunity to dip your toes into effective animal welfare giving.
For what it's worth, I think saving up runway is a no-brainer.
During my one year as a tech consultant, I put aside half of my income each month and donated another 10%. The runway I built made the decision to quit my job and pursue direct work much easier.
In the downtime between two career moves, it allowed me to spend my time pursuing whatever I wanted without worrying about how to pay the bills. This gave me time to research and write about snakebites, ultimately leading to Open Phil recommending a $500k investment into a company working on snakebite diagnosti...
Was about to write this! Deeply unserious that something of this poor quality can make it through peer review.
I've noticed a decrease in the quality and accuracy of communication among people and organizations advocating for pro-safety views in the AI policy space. More often than not, I'm seeing people go with the least charitable interpretations of various claims made by AI leaders.
Arguments are increasingly looking like soldiers to me.
Take the following twitter thread from Dr. Peter S. Clark describing his new paper co-authored with Max Tegmark.
The authors use game theory to justify a slew of normative claims that don't follow. The choice of language makes refu...
I'd be interested in hearing about why he believes in retributivism!
(he mentions being retributivist in this blogpost)
Bugged out for me too. It showed up when I tried editing the post, so I just republished without any changes; that seems to have fixed it.
I did my BSc in computer science, so it's possible!
I joined a political party in my country and started applying for jobs and internships. What got me my first one was cold emailing the members of the European Parliament in my party; they put in a good word for me among the dozens of other people who applied through the official forms.
also check out this book series:
https://www.routledge.com/Rethinking-Development/book-series/RDVPT
The minute suffering I experience from the cold is not the real cost!
I'm probably an outlier, given that a lot of my work is networking, but I have had to cancel attending an event where I was invited to speak (and would likely have met at least a few people relevant to my work), cancel an in-person meeting (though I will likely get a chance to meet them later), and reschedule a third.
The cold probably hit at the best possible time (right after two meetings in parliament), had it come sooner it would have really sucked.
Additional...
Why is it that I must return from 100% of EAGs with either covid or a cold?
Perhaps my immune system just sucks or it's impossible to avoid due to asymptomatic cases, but in case it's not: If you get a cold before an EAG(x), stay home!
For those who do this already, thank you!
I'm awestruck, that is an incredible track record. Thanks for taking the time to write this out.
These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI; your work has certainly had an influence on me.