All posts


Friday, 31 December 2021

Forecasting 1
Prediction markets 1
Corporate governance 1
Personal development 1
Productivity 1
Global health & development 1

Quick takes

I should write up an update about the Decade Review. Do you have questions about it? I can answer them here, but it will also help me inform the update.

Thursday, 30 December 2021

Project voting 4
Cause prioritization 3
Less-discussed causes 2
Global health & development 2
Policy 1
Metascience 1


Quick takes

"it is unclear whether" can sometimes mean "I am skeptical that" or "I don't think". It annoys me when people use it this way. Unclear already has a good and useful meaning. We shouldn't dilute it. The proper use of "unclear" is a sentence like this: "it's still unclear if the intervention worked". A quick heuristic: if the use of "unclear" is or could be prefixed by "still" without changing the meaning, it is probably ok :). Another way to view it -- if more information is likely to come out soon, then it's probably ok. Some examples of usages of "unclear" I'd like to see less of: * "How do you think the willingness of key actors such as governments to tackle bio risks will change...? It's unclear whether we will see the right levels of political competence and focused engagement..." 1 * "It's unclear how significant the extrinsic, welfare-oriented value of biodiversity even is" 2 * "it is unclear how OpenPhil are comparing different causes, rather than looking out for giving opportunities across a variety of causes" 3

Wednesday, 29 December 2021

Building effective altruism 5
Forecasting 2
Community infrastructure 2
Community projects 2
Effective altruism in the media 2
Community 1


Quick takes

17 · Linch · 2y · 4
I think many individual EAs should spend some time brainstorming and considering ways they can be really ambitious, e.g. come up with concrete plans to generate >100M in moral value, reduce existential risk by more than a basis point, etc. Likewise, I think we as a community should figure out better ways to help people ideate and incubate such projects and ambitious career directions, and aim to become a community that can really help people both celebrate successes and mitigate the individual costs/risks of having very ambitious plans fail.
Negative thoughts on a proposal for a database of project ideas
Written: Jun 3, 2021. Epistemic status: See NegativeNuno's profile.
Hey [person],
Essentially, I think it is quite likely that this will fail (70% as a made-up number with less than 15 minutes of thought; I'm thinking of these as "1-star predictions"); I don't think that the "if I build it they will come" theory of change is likely to work. In particular, I would be fairly surprised if (amount of time people spent working on stuff from your database) / (amount of time you spend creating that database) was greater than 1. Other commenters on both Google Docs seemed to share this perspective, but maybe the project is worth going through with anyway if you feel that the value of information is high enough.
Also, your MVP proposal is too large. A Google Sheet with far fewer rows could serve as a decent MVP, and it would take much less time to set up and test. It's also not clear why an MVP should be central, or even all that large.
Part of my pessimism is that I tried a variant of this (more centered around the forecasting part). You can find the Google Doc which I used for part of the project here: https://docs.google.com/spreadsheets/d/1YHaO-vmjrfM6xbLwa1ljd15KHuCelS03saV81kKYvW4/edit?usp=sharing. Note that this is more for small projects, rather than for research projects. Also note that I did do the obvious thing of first getting the volunteers and then investing the time to gather the projects and estimate their impact. My ~10 volunteers flaked (except one, who is probably going to be writing her master's thesis on the project I assigned to her, so I don't consider this to be a total waste of time [update: she flaked as well]). I also tried to get a research group going in Austria, without much success. Part of this was that I underestimated the difficulty of finding volunteers to carry out small projects, and overestimated their potential commitment.
If I were doing this differently, and this is something you might want to explore yourself, I would first get a **strong** commitment from an already existing cohesive group of people to spend a certain number of hours on a project I decide on. For instance, you could talk with Edo, or with some local EA group leaders, about starting a local research group, initially create your database for that group, and then expand the project to be a central repository for all EAs. For this, your MVP doesn't have to be particularly elaborate; the research projects you propose just have to be better than whatever your volunteers would have otherwise done. On the other hand, it's possible that the initial publicity of it being a "central EA repository", and perhaps being mentioned in e.g. the EA newsletter, might be enough to get the ball rolling, which is something that I didn't try. So that's a judgment call you have to make.
To elaborate on this, an instructive EA Forum post to which I keep coming back is Jan Kulveit's "What to do with people?". He proposes a "hierarchical networked structure", in which e.g. city EA groups are coordinated by a national EA leadership, which would be coordinated by e.g. regional offices (à la J-PAL), which would be coordinated by a central brain. Instead, your central repository, as you currently describe it, would have a pretty decentralized structure (anybody can search it, anybody could edit it after some quality filtering), which has its pros and cons.
So there is a judgment call to make between doing your project in a more decentralized way (forum post + EA newsletter) or in a more hierarchical way (coordinating with an already existing local group).
The above feels somewhat unedited and stream-of-consciousness; let me know if something doesn't sound right, if you have some different models somewhere, or if I've misunderstood something. Best, --
Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved, and work directly with Nuno, who was funded).
ACX Grants were just announced: ~$1.5 million, from a few donors that included Vitalik. https://astralcodexten.substack.com/p/acx-grants-results
Quick thoughts:
* In comparison to the LTFF, I think the average grant is more generically exciting, but less effective-altruist-focused. (As expected.)
* Lots of tiny grants (<$10k); $150k is the largest one.
* These rapid grant programs really seem great, and I look forward to them being scaled up.
* That said, the next big bottleneck (which is already a bottleneck) is funding for established groups. These rapid grants get things off the ground, but many will need long-standing support and scale.
* Scott seems to have done a pretty strong job researching these groups, and also has had access to a good network of advisors. I guess it's no surprise; he seems really good at "doing a lot of reading and writing", and he has an established peer group now.
* I'm really curious how/if these projects will be monitored. At some point, I think more personnel would be valuable.
* This grant program is kind of a way to "scale up" Astral Codex Ten. Instead of hiring people directly, he can fund them this way.
* I'm curious whether he can scale up 10x or 1000x; we could really use more strong/trusted grantmakers. It's especially promising if he gets non-EA money. :)
On specific grants:
* A few forecasters got grants, including $10k for Nuño Sempere Lopez Hidalgo for work on Metaforecast, and $5k for Nathan Young to write forecasting questions.
* $17.5k for 1DaySooner/Rethink Priorities to do surveys to advance human challenge trials.
* $40k seed money to Spencer Greenberg to "produce rapid replications of high-impact social science papers". Seems neat; I'm curious how far $40k alone could go, though.
* A bunch of biosafety grants. I like this topic; it seems tractable.
* $40k for land value tax work.
* $20k for a "Chaotic Evil" prediction market. This will be interesting to watch; hopefully it won't cause net harm.
* $50k for the Good Science Project, to "improve science funding in the US". I think science funding globally is really broken, so this warms my heart.
* Lots of other neat things; I suggest just reading directly.

Tuesday, 28 December 2021

Community 3
Building effective altruism 3
Existential risk 1
Philosophy 1
Criticism of effective altruism culture 1
Criticism of effective altruism 1

Quick takes

I've made a small "Collection of collections of AI policy ideas" doc. Please let me know if you know of a collection of relatively concrete policy ideas relevant to improving long-term/extreme outcomes from AI. Please also let me know if you think I should share the doc / more info with you. 


Monday, 27 December 2021

AI safety 2
Forecasting 2
AI alignment 2
Community 2
Building effective altruism 2
AI risk skepticism 1


Sunday, 26 December 2021

Creative Writing Contest 3
Effective altruism art and fiction 2
Motivational 2
Global health & development 2
Forecasting 2
Policy 2


Quick takes

6 · quinn · 2y · 2
Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables? Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!) It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company"? When they could instead be going around saying "we're a 'powering stuff' company"? Being a powering-stuff company means you have fuel-source indifference! I mean, if you look at all the money they had to spend on disinformation and lobbying, isn't it insultingly obvious to say "just invest that money into renewable research and markets instead"? Is there dialogue on this? Also, have any members of "big oil" in fact done what I'm suggesting, and I just didn't hear about it? CC'd to my LessWrong shortform.
LessWrong is experimenting with a two-axis voting system:
> The two dimensions are:
> * Overall (left and right arrows): what is your overall feeling about the comment? Does it contribute positively to the conversation? Do you want to see more comments like this?
> * Agreement (check and cross): do you agree with the position of this comment?
I think it could be interesting to test out that or a similar voting system on the EA Forum as well.
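A minimal sketch of what such a two-axis system might store and aggregate, using a hypothetical `Vote` record and `tallyVotes` helper (this is not LessWrong's or the EA Forum's actual data model); the point is just that the two axes are counted independently, so a comment can score high on "more like this" while scoring low on agreement:

```typescript
// Hypothetical two-axis vote: each vote carries two independent signals.
type Vote = {
  userId: string;
  overall: -1 | 0 | 1;    // left/right arrows: "do I want more comments like this?"
  agreement: -1 | 0 | 1;  // cross/check: "do I agree with the position?"
};

// Sum each axis separately, so the two scores never interact.
function tallyVotes(votes: Vote[]): { overall: number; agreement: number } {
  return votes.reduce(
    (totals, v) => ({
      overall: totals.overall + v.overall,
      agreement: totals.agreement + v.agreement,
    }),
    { overall: 0, agreement: 0 }
  );
}

// Example: a well-argued comment that most voters disagree with.
const votes: Vote[] = [
  { userId: "a", overall: 1, agreement: -1 },
  { userId: "b", overall: 1, agreement: -1 },
  { userId: "c", overall: 1, agreement: 1 },
];
console.log(tallyVotes(votes)); // { overall: 3, agreement: -1 }
```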

Saturday, 25 December 2021

Effective altruism art and fiction 2
Building effective altruism 2
Global health & development 2
Effective Altruism for Christians 1
Long-term future 1
Cash transfers 1

Friday, 24 December 2021

Building effective altruism 1
Community 1
Forecasting 1
Policy 1
Effective Altruism Funds 1
Effective Altruism Infrastructure Fund 1

Quick takes

[Summary: Most people would probably agree that science benefited greatly from the shift to structured, rigorous empirical analyses over the past century, but some fields still struggle to make progress. I'm curious whether people think we could/should seek to introduce more structure/sophistication to the way researchers make and engage with theoretical analyses, such as something like "epistemic mapping".]
I just discovered this post, and I was struck by how it echoed some of my independent thoughts and impressions, especially the quote: "But it should temper our enthusiasm about how many insights we can glean by getting some data and doing something sciency to it." (What follows is shortform-level caveating and overcomplicating, which is to say, less than I would normally provide, and more about conveying the overall idea/impression.)
I've had some (perhaps hedgehoggy) "big ideas" about the potential value of what I call "epistemic mapping" for advancing scientific study/inquiry/debate in a variety of fields. One of them relates to the quote above: the "empirical-scientific revolution" of the past ~100-200 years (e.g., the shift to measuring medical treatment effectiveness through inpatient/outpatient data rather than professionals' impressions) seems to have been crucial to the advancement of a variety of fields. However, there are still many fields where such empirical/data-heavy methods appear insufficient and where progress seems to languish: my impression is that this especially includes many of the social sciences (e.g., conflict studies, political science, sociology). There are no doubt many possible explanations, but over time I've increasingly wondered whether a major set of problems is, loosely, that the overall complexity of the systems (e.g., human decision-making processes vs. gravitational constants), plus the difficulty of collecting sufficient data for empirical analyses, plus a few other factors, leads to high information loss between researchers/studies, and/or that people are incentivized to oversimplify things (e.g., following the elsewhere-effective pattern of regression analyses and p<0.05 = paper).
I do not know, but if the answer is yes, that leads to a major question: how could/should we attempt to solve or mitigate this problem? One of the (hedgehoggy?) questions that keeps bugging me: we have made enormous advances in the past few hundred years when it comes to empirical analyses; in comparison, it seems we have only fractionally improved the way we do our theoretical analysis... could/should we be doing better? [I'm very interested to get people's thoughts on that overall characterization, which even I'll admit I'm uncertain about.]
So, I'm curious whether people share a similar sentiment about our ability/need to improve our methods of theoretical analysis, including how people engage with the broader literature beyond traditional (and, IMO, inefficient) paragraph-based literature reviews. If people do share that sentiment, what do you think of epistemic mapping as a potential way of advancing some sciences? Could it be the key to efficient future progress in some fields? My base rates for such a claim are really low, and I recognize that I'm biased, but I feel it's worth posing the question if only to see if it advances the conversation. (I might make this into an official post if people display enough interest.)

Thursday, 23 December 2021

AI safety 3
Artificial intelligence 2
Cause prioritization 1
Effective giving 1
Centre for the Study of Existential Risk 1
Charity evaluation 1

Quick takes

The Netherlands has just released a £21 billion plan to reduce the number of cows, chickens, and pigs farmed in the Netherlands by a third (approximately 30 million animals) due to excess livestock emissions. The article linked below also says there are similar concerns about ammonia/nitrogen pollution in Germany, Belgium, and Denmark, which *might* lead them to consider similar action.
This seems really great at first glance, but a few questions I have (in case anyone knows more about this or can speculate reasonably):
* Will these animals be displaced and raised somewhere else instead, or is this due to some net reduction in demand in the Netherlands / Europe?
* Will enough farmers exit / reduce intensive farming voluntarily to have the desired impact?
* If not, will they try to make this mandatory? The farming lobby seems strong in most places, including the Netherlands, but it seems they've done pretty well to get it this far.
* How easily could we replicate this in other countries? It seems like the UK is also on track to overshoot its ammonia emissions by 20%, among the other countries named above.
Merry Christmas! I hope you all have great holidays, and are able to draw inspiration from them, even if Christmas presents are often an example of the most inefficient altruism there is. 

