All of MathiasKB's Comments + Replies

But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (no, I don’t mean fraud: that’s not an ambitious thing, that’s a *criminal* thing) when, if I fail, people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my communi

... (read more)
1GoodEAGoneBad6d
I didn't notice this comment and I think it's excellent. Thank you so much for sharing.

Thank you for writing this. It's barely been a week, take your time.

There's been a ton of posts on the forum about various failures, preventative measures, and more. As much as we all want to get to the bottom of this and ensure nothing like this ever happens again, I don't think our community benefits from hasty overcorrections. While many of the points made are undoubtedly good, I don't think it will hurt the EA community much to wait a month or two before demanding any drastic measures.

EAs should probably still be ambitious. Adopting rigorous governanc... (read more)

7RobBensinger11d
I like this comment.

Exactly. Let's take some time to reflect and update slowly as needed. At first I didn't want to engage at all and saw this as a distraction. Now I'm much more up to date and see it as an opportunity to learn useful lessons about EA and about myself as a person so deeply connected with this movement.

Bit of a wake-up call to focus on priorities and not get distracted by the drama, while taking seriously what is happening and keeping accountability within the movement. It's pretty clear something will need to change. Unclear what exactly it will be. My guess that it w... (read more)

Good question, I've created a Manifold market for this:

4Jordan Arel13d
Thank you! This is very useful input.

I wouldn't conclude much from the future fund withholding funds for now. Even if they are likely in the clear, freezing payments until they have made absolutely sure strikes me as a very reasonable thing to do.

4Markus Amalthea Magnuson16d
That a situation where they are not "absolutely sure" can even occur is one of the major causes of worry here, regardless of the conclusions that can be drawn at this point.

My only worry is that more will be posted in a short time than anyone will have time to read. I'm still working my way through all the cause area reports. Some system to spread the posts out to prevent fatigue might be warranted for events like these and future writing contests.

You can only spend your resources once. Unless you argue that there is a free lunch somewhere, any money and time spent by the UN inevitably has to come from somewhere else. Arguing that longtermist concerns should be prioritized necessarily requires arguing that other concerns should be de-prioritized.

If EAs or the UN argue that longtermism should be a priority, it's reasonable for the authors to question where those resources are going to come from.

For what it's worth I think it's a no-brainer that the UN should spend more energy on ensuring the future goes... (read more)

To me it seems they understood longtermism just fine and just happen to disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist you think some or all of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues.

If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it's from our development aid budgets, that potentially means removing funding from humanitarian projects that benefit the world's poorest.

This might be the correct call, but I think it's a reasonable thing to disagree with.

1Jeff A2mo
They understand the case for longtermism but don’t understand the proposals or solutions for longtermist aspirations. One of the UN’s main goals is sustainable development. You can still address today’s issues while designing solutions with the future in mind. Therefore, you don’t have to spend most funds only addressing the long-term future. You can tackle both problems simultaneously.

Thank you, this is an excellent post. This style of transparent writing can often come across as very 'ea' and get made fun of for its idiosyncrasies, but I think it's a tremendous strength of our community.
 

5Guy Raveh2mo
I think it's sometimes a strength and sometimes a weakness. It's useful for communicating certain kinds of ideas, and not others. Contrary to Lizka, I personally wouldn't want to see it as part of the core values of EA, but just as one available tool.

I would advise you to shorten the total application to around one-fourth of its current length. Focus on your strong points (running a growing business, a strong animal welfare profile) and leave out the rest. The weaker parts of your application water down the parts that are strongest.

Admissions are always a messy process, and good people get rejected often. A friend of mine, who I'm sure will go on to become a top-tier AI safety engineer, got rejected from EAG because there isn't a great way to convey this information through an application form. Vetting people at scale is just really difficult.

1Constance Li2mo
Thanks. I think I erred on the side of providing more information than needed to show enthusiasm and commitment. Perhaps I’ll try your suggestion next time and hope for a different result.

Thanks for writing this, Jonas. As someone much below the LessWrong average at math, I would be grateful for a clarification of this sentence:

Provided d_j and p_k are independent when j ≠ k

What do d_j and p_k refer to here? Moreover, is it a reasonable assumption that the uncertainties of existential risks are independent? It seems to me that many uncertainties run across risk types, such as the chance of recovery after civilisational collapse.

3Jonas Moss3mo
j and k are indices for the causes. I wrote j ≠ k because you don't have to assume that d_k and p_k are independent for the math to work. But everything else will have to be independent. Maybe the uncertainties shouldn't be independent, but often they will be. Our uncertainty about the probability of AI doom is probably not related to our uncertainty about the probability of pandemic doom, for instance.
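
To spell out what the condition buys (a minimal sketch; the interpretation of the symbols is my assumption, with p_k as the probability of catastrophe from cause k and d_k as the damage it does): independence across causes lets the cross-terms factor,

\[
\mathbb{E}[d_j p_k] = \mathbb{E}[d_j]\,\mathbb{E}[p_k] \quad \text{for } j \neq k,
\]

while the within-cause term can keep its covariance, \(\mathbb{E}[d_k p_k] = \mathbb{E}[d_k]\,\mathbb{E}[p_k] + \operatorname{Cov}(d_k, p_k)\). That is why only the j ≠ k pairs need the assumption.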

For anyone interested in pursuing this further Charity Entrepreneurship is looking to incubate a charity working on road traffic safety.

Their report on the topic can be found here: https://www.charityentrepreneurship.com/research

Thanks for giving everyone the opportunity to provide feedback!

I'm unsure how I feel about the section on global poverty and wellbeing. As of now, the section mostly just makes the same claim over and over that some charities are more effective than others, without much rigorous discussion around why that might be.

There's a ton of great material under the final 'differences in impact' post that I would love to see as part of the main sequence. Right now, I'm worried that people new to global health and development will leave this section feeling waay overc... (read more)

Words cannot express how much I appreciate your presence Nuno.

Sorry for being off-topic, but I just can't help myself. This comment is such a perfect example of the attitude that made me fall in with this community.

That puts EA in an even better light!

"While the rest of the global health community imposes its values on how trade-offs should be made, the most prominent global health organisation in EA actually surveys and asks what the recipients prefer."

[This comment is no longer endorsed by its author]
4Karthik Tadepalli3mo
That's also simply not true because EAs use off-the-shelf DALY/QALY estimates from other organizations all the time. And this is only about health vs income tradeoffs, not health measurement, which is what QALY/DALY estimates actually do. Edit: as a concrete example, Open Phil's South Asian air quality report [https://www.openphilanthropy.org/research/south-asian-air-quality/] takes its DALY estimates from the State of Global Air report, which is not based on any beneficiary surveys.

I think the meta-point might be the crux of our disagreement.

I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I'm often a bit perplexed as to how quick people are to jump from 'nearly everyone dies' to 'literally everyone dies'. Similarly I'm sympathetic to the point that it's difficult to imagine particularly compelling scenarios where AI doesn't radically alter the world in some way.

But we should be immensely uncertain about the assumptions we make, and I would argue by far the most likely fi... (read more)

For example:

  • intelligence peaks more closely to humans, and superintelligence doesn't yield significant increases to growth.
  • superintelligence in one domain doesn't yield superintelligence in others, leading to some, but limited, growth, like most other technologies.
  • we develop EMs, which radically change the world, including growth trajectories, before we develop superintelligence.
1simeon_c4mo
1. "intelligence peaks more closely to humans, and superintelligence doesn't yield significant increases to growth."

Even if you have a human-ish intelligence, most of AI's advantage comes from its other features:
  • You can process any type of data orders of magnitude faster than a human, and once you know how to do a task you deterministically know how to do it.
  • You can just double the amount of GPUs and double the number of AIs. If you pair two AIs and make them interact at high speed, it's much more powerful than anything human-ish.
These are two of the many features that make AI radically different and mean that it will shape the future.

2. "superintelligence in one domain doesn't yield superintelligence in others, leading to some, but limited, growth, like most other technologies."

That's very (very) unlikely given recent observations on transformers, where you can take a model trained on text, plug it into images, train a tiny bit more (compared with the initial training), and it works; plus the fact that it does maths, and that it's more and more sample-efficient.

3. "we develop EMs, which radically change the world, including growth trajectories, before we develop superintelligence."

I think that's the most plausible of the three claims, but I still think it's only between 0.1% and 1% likely. Whereas we have a pretty clear path in mind for reaching AIs powerful enough to change the world, we have no idea how to build EMs. Also, this doesn't directionally change my argument, because no one in the EA community works on EMs. If you think that EMs are likely to change the world and that EAs should work on them, you should probably write about it and make the case for it. But I think it's unlikely that EMs are a significant thing we should care about right now.

If you have other examples, I'm happy to consider them, but I suspect you don't have better examples than those.

Meta-point: I think that you should be more inside-viewy when considering claims. "Engineers ca

AI is directly relevant to both longterm survival and longterm growth. When we create a superintelligence, there are three possibilities:

  1. The superintelligence is misaligned and it kills us all
  2. The superintelligence is misaligned with our own objectives but is benign
  3. The superintelligence is aligned, and therefore can help us increase the growth rate of whatever we care about.

 

I think there are many more options than this, and every argument that follows banks entirely on your logical models being correct. Engineers can barely design a bike that wil... (read more)

1simeon_c4mo
I think that this comment is way too outside-viewy. Could you mention concretely one of the "many options" that would directionally change the conclusion of the post?

Similarly, once you introduce a “reliable predictor”, everything goes out the window and the money is the least of your concern. But granting the premise, fine, I One Box

 

Correct me if I'm wrong, but doesn't the experiment just need a predictor who does better than random? The oracle could just be your good friend who knows your tendencies better than chance, as long as box B's payout is high enough.

2Zach Stein-Perlman4mo
I think the crux is how the oracle makes predictions? (Assuming it's sufficiently better than random; if it's 50.01% accurate and the difference between the boxes is a factor of 1000 then of course you should just 2-box.) For example, if it just reads your DNA and predicts based on that, you should 1-box evidentially or 2-box causally. If it simulates you such that whichever choice you make, it would probably predict that you would make that choice, then you should 1-box. It's not obvious without more detail how "your good friend" makes their prediction.
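
To make the "sufficiently better than random" point concrete, here is a minimal sketch (assuming the standard payouts: a small transparent box A and an opaque box B worth 1000x A, filled iff the predictor foresaw one-boxing) of the evidential expected values:

```python
# Evidential expected value of each choice as a function of predictor accuracy p.
def ev_one_box(p, A=1_000, B=1_000_000):
    return p * B                    # B is filled iff one-boxing was predicted

def ev_two_box(p, A=1_000, B=1_000_000):
    return A + (1 - p) * B          # you always get A; B only if the predictor erred

# Break-even accuracy: p*B = A + (1-p)*B  =>  p = (1 + A/B) / 2
print((1 + 1_000 / 1_000_000) / 2)             # 0.5005
print(ev_one_box(0.5001), ev_two_box(0.5001))  # 500100.0 vs 500900.0
```

So with a factor-of-1000 gap, any predictor more than ~50.05% accurate makes one-boxing the higher evidential EV, which is why 50.01% isn't quite enough.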

Many of the comments on this video were incredibly inspiring for me to read.

Most outside engagement with EA I see tends to skew heavily negative. I tend to disproportionately focus on the negative, as those impressions are especially important for me to understand, so reading comments about how inspired people felt by the prospects of humanity was so refreshing! Where was all the sneering I'm so used to!?

4WilliamKiely5mo
Note that if you sort the comments by New rather than Top, a smaller fraction say very positive things.

The right framing can make a big difference.

Also I see lots of positive comments from people who have done the Intro EA virtual program or who have read Doing Good Better.

It may also be a difference between YouTube, where comments are quite positive these days, and Twitter, which still has incentives for negative comments.

Thanks for the comment, I'd like to know that as well!

Since writing the article and diving further into the antivenom crisis, I think I've actually doubled down on cost of treatment being the primary issue.

When faced with the following options:

1. a long trip to a clinic for an expensive treatment that may not work
2. a short trip to a local healer for an inexpensive treatment that may not work

I can understand why someone would opt for the latter.

My model would be that people would become much more willing to go to the hospital for treatment when they see acquaintance after acquaintance... (read more)

It's not obvious that undifferentiated scientific progress is net bad either. Scientific progress increases our wealth and allows us to spend a larger fraction on safety than we otherwise would. I'd much prefer to live in a world where we can afford both nukes and safety measures than one in which we could only afford the nukes.

Scientific progress has been the root of so much good that I think we should have a strong prior that more of it is good!

1acylhalide6mo
Ignoring inside views on specific topics, yes, we should have that prior. But having inside views on both AI risk and stable totalitarianism (without use of AGI), I'm personally leaning towards net negative currently. Safety work on AI risk, biorisk or stable totalitarianism doesn't seem as limited by the wealth civilisation as a whole has as it does by the number of people who agree and care enough to direct funds, attention or energy to such causes.
9Linch6mo
See discussion here [https://forum.effectivealtruism.org/posts/PrPyq5wpqnfua496D/what-s-your-prior-probability-that-good-things-are-good-for] .

I absolutely think we should stick to that messaging. Trying to do the most good, rather than some good, is the core of our movement. I would point out that there are also many doctors who were not discouraged and chose to change their career entirely as a result of EA. I personally know a few who ended up working on the very things you encourage!

That said, we should of course be careful when discouraging interventions if we haven't looked into the details of each cost-effectiveness analysis, as it's easy to arrive at a lower-looking impact simply due to methodological differences between GiveWell's cost-effectiveness analysis and yours.

3Akhil6mo
Let's separate this out:

1. There are some medics who completely buy EA and have changed their entire career directly in line with EA philosophy.
2. There are some medics who are looking to increase and maximise the impact of their careers, but who aren't sold on all or some aspects of EA. They may also have a particular cause area preference, e.g. global medical education, that isn't thought of as a high-impact cause area by EAs.

I think our philosophy is to work with both of these groups, rather than just (1).[1] I think the way we do that is by acknowledging that EA is fundamentally a question [https://forum.effectivealtruism.org/posts/FpjQMYQmS3rWewZ83/effective-altruism-is-a-question-not-an-ideology]; we talk through EA ideology and frameworks without being prescriptive about the 'answers' and conclusions of what people should work on. I think this recent summary in a post on the forum [https://forum.effectivealtruism.org/posts/SjK9mzSkWQttykKu6/big-tent-effective-altruism-is-very-important-particularly?commentId=mrr28jX6hNWL3fXyN#mrr28jX6hNWL3fXyN] is quite helpful here.

[1] We do fundamentally serve (1) and think this is a great group of people we shouldn't miss either!

Giving What We Can did a member profiles series; not sure if that was what you were thinking of.

1elteerkers6mo
Oh maybe that's it! Thank you I'll look it up
3Luke Freeman6mo
We generally release at least one per month on our blog [givingwhatwecan.org/blog] and social media. We have a collection (which needs to be updated) of some here: https://www.givingwhatwecan.org/case-studies-people-who-pledge-to-give/

It was nice to read something that was both well-written and well-intentioned!

I don't agree with the proposed alternative to longtermism of 'ineffective altruism', which eschews metrics in favour of doing what intuitively feels right. If you disagree with longtermism, the natural conclusion to me seems to be doubling down on high empirical standards and measurability.

On a slightly uncharitable side note, something I find amusing is that it's not long ago we were getting criticised for being overly obsessed with only what could be measure... (read more)

On a slightly uncharitable side note, something I find amusing is that it's not long ago we were getting criticised for being overly obsessed with only what could be measured, and that we should be more open to the value of systemic change and such

To be charitable to EA's detractors, it's very possible these are criticisms coming from different people! Some people will be more worried about measurable outcomes, others about systemic change. If EA is getting both kinds of criticisms then it's probably doing better than if it's only getting one type!

Since writing this article, this is actually one of the things I've been looking into! I think it looks very promising, as many of the issues outlined by the WHO seem downstream from people simply being unable to afford high-quality antivenom. (I.e. why do people choose local healers? Because hospitals cost more and don't help either!)

It also looks like the marginal cost of high-quality antivenom would decrease by up to an order of magnitude if you scale up production. I have yet to take an in-depth look at synthetic antivenom production, but after briefly looking into it, it seems we are not going to get synthetic antivenom just yet.

Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power.


I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people completely unaligned with EA would capture wealth and steer it into the void, but rather that of 10 EAs, the one most prone to "grifting" would end up with more influence than the rest.

What makes this so difficult is that ... (read more)

if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people's minds than if I were to spend that time writing on the forum.

I also don't know whether this is true, but the general idea that talking to people individually in person would be more persuasive than over text isn't surprising. There's a lower barrier to ideas flowing, you can better see how the other person is responding, and you don't have to consider how people not in the conversation might misinterpret you.

The longtermist entrepreneurship incubator still seems like a promising project to me, though difficult to execute.

man you just blew my mind, will give it a try next time I feel an urge to play around with GPT!

If the comments include a prediction, my guess is that GPT would often make the same prediction and thus become much more accurate: not because it learned to predict things, but because there's probably a strong correlation between the community prediction and the most upvoted comment's prediction.

If the goal is to give GPT more context than just the title of the question, then you could include the descriptions for each question as well, but when I tried this I got worse results (fewer legible predictions).
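
As a rough sketch of the leakage worry above (the data shapes and field names here are my assumptions for illustration, not Metaculus's actual export format), note how easily a prediction sitting in a top comment ends up in the prompt:

```python
import re

def build_prompt(question, comments, include_description=False):
    # question: {"title": ..., "description": ...}; comments sorted by upvotes.
    parts = [f"Question: {question['title']}"]
    if include_description:
        parts.append(f"Details: {question['description']}")
    parts += [f"Top comment: {c}" for c in comments[:3]]
    parts.append("Probability this resolves YES:")
    return "\n".join(parts)

comment = "Base rates suggest this is unlikely; I'd say 25%."
print(build_prompt({"title": "Will X happen by 2030?"}, [comment]))

# A model that simply echoes the number sitting in the top comment will
# look well-calibrated without doing any forecasting of its own:
print(re.findall(r"\d+%", comment))  # ['25%']
```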

Open Philanthropy is not the only grantmaker in the EA space! If you add the FTX Community, FTX Future Fund, EA Funds etc., my guess would be that the space recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.

I also want to emphasize that many central EA organisations are increasingly focused on longtermist concerns, and not as transparent about it as I would like them to be. People and organisations should not pretend to care about things they do not for the sake of optics. One of EA's most central tenets i... (read more)

If you add the FTX Community, FTX Future Fund, EA Funds etc., my guess would be that the space recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.

I think starting in 2022 this will be true in aggregate – as you say largely because of the FTX Future Fund.

However, for EA Funds specifically, it might be worth keeping in mind that the Global Health and Development Fund has been the largest of the four funds by payout amount, and by received donations it is about as big as all other funds combined.

3Harrison Durland7mo
In my view, there is some defining tension in rationalist and EA thought regarding epistemic vs. instrumental emphasis on truth: adopting a mindset of rationality/honesty is probably a good mindset—especially to challenge biases and set community standards—but it’s ultimately for instrumental purposes (although, for instrumental purposes, it might be better to think of your mindset as one of honesty/rationality, recursivity problems aside). I don’t think there is much conflict at the level of “lie about what you support”: that’s obviously going to be bad overall. But there are valid questions at the level of “how straightforward/consistent should I be about the way all near-termist cause areas/effects pale in comparison to expected value from existential risk reduction?” It might be the case that it’s obvious that certain health and development causes fail to compare on a long-term scale, but that doesn’t mean heavily emphasizing that is necessarily a good idea, for community health and other reasons like you mention.

They currently state explicitly on the page I linked that they are not:

Can I apply for funding to the Global Health and Development Fund?

The Global Health and Development Fund is not currently accepting applications for funding.

If that is not the case, I'm not too happy with their communication!

 

EDIT: whoops, didn't see Lorenzo's comment

[This comment is no longer endorsed by its author]

The page linked in my comment states that they are not currently accepting unsolicited proposals, but I agree the FAQ makes it sound like they are open to being contacted. My guess is there probably isn't a clear-cut policy and that they just want to avoid setting an expectation that they will evaluate everything sent their way.

Will send them a message, thank you  :)

When I first learned about the diagnostics startup, my immediate thought was that some EA fund would be interested in further evaluating it. Unfortunately, neither Open Philanthropy, EA Funds, nor the FTX Community is currently accepting unsolicited proposals.

The primary reason I wrote this post was to get the attention of fund-managers, and hopefully get someone to figure out if this is impactful and fund it if it is.

2Peter Wildeford7mo
EA Funds definitely accepts unsolicited proposals! That's the whole point of it!
3Jorgen_Ljones7mo
Aren't OpenPhil? https://www.openphilanthropy.org/giving/how-to-apply-for-funding#Can_I_apply_for_a_grant They specify that they have low expectations for unsolicited proposals, but it's possible to contact them about it.

I wondered about this as well. There's no doubt that it would reduce snakebites, but whether it's cost-effective is more difficult to tell.

An analyst I spoke to pointed out that, after all, it's still pretty rare to be bitten by a snake. The amount of footwear you'd need to distribute per snakebite prevented is pretty high, and likely pretty expensive.

1Peter S. Park7mo
That makes sense! Shoes are probably more expensive than malaria nets. But it might still be a better intervention point than antivenom+improving diagnosis+increasing people's willingness to go to the hospital.

Most purchases I would, on reflection, prefer not to make are ones where what I receive is worth much more than nothing but still less than the asking price, so I would never actually be compelled to throw out the superfluous stuff I buy.

Many times the purchase would even be worth more than the asking price, but I would like for my preferences to change such that it no longer would be the case.

If a bhikkhu monk can be content owning next to nothing, surely I can be happy owning less than I currently do. The question is how I change my preferences to become more like that of the monk.

Does anyone have advice on getting rid of material desire?

Unlike many people I admire, I seem to have a much larger desire to buy stuff I don't need. For example, I currently feel an overpowering urge to spend $100 on a go board, despite the fact that I have little need for one.

I'm not arguing that I have some duty to live frugally due to EA, I just would prefer to be a version of myself that doesn't feel the need to spend money on as much stupid stuff.

1Dave Cortright7mo
The underlying desire of most addictive tendencies in our production/consumption culture is the desire to feel more connected with a tribe (Maslow’s love and belonging). We are—at our core—social creatures. Our ancestors reinforced connections with tribe mates every day, and they clearly knew the values they shared with the tribe. They were living life within the parameters in which we evolved to thrive.

In our society the tribes have been disbanded in favor of a more interconnected world, and likewise values have become diffuse, making it harder for individuals to know what they truly believe in. Just like throwing 20k chickens into a barn causes them to go crazy and peck one another to death because their brains can’t handle a pecking order that big, so too is it with humans, who are not able to instinctively operate in such a vastly more complex and relationally fluid world where the environment has changed so radically from tribal days.

Invest in a few (3-5) deeply intimate relationships where you know you are equals and can be there unconditionally and without judgment for each other. As Robin Dunbar says in his excellent book “Friends”:

“It was the social measures that most influenced your chances of surviving… The best predictors were those that contrasted high versus low frequencies of social support and those that measured how well integrated you were into your social network and your local community. Scoring high on these increased your chances of surviving by as much as 50 per cent… it is not too much of an exaggeration to say that you can eat as much as you like, drink as much alcohol as you want, slob about as much as you fancy, fail to do your exercises and live in as polluted an atmosphere as you can find, and you will barely notice the difference… You will certainly do yourself a favor by eating better, taking more exercise and popping the pills they give you, but you’ll do considerably better just by having some friends.”

Also see Robert Waldinger’s TED t
2Thomas Kwa7mo
If spending a bit of money is ok, you can implement the policy of throwing away things you don't need. Then after a few cycles of buy thing -> receive thing -> throw away thing you'll be deconditioned from buying useless things.

Thanks for this, especially for your point on hedging! If you want to convey your uncertainty, there is no shame in saying "I am not sure about this" before making your claim. 


On the topic of good forum writing, a few additional things I try to keep in mind when I write:

  • Most readers will only skim-read your post. Make sure to use descriptive headlines that make it easy for the reader to jump in and out and read only the parts that interest them.
  • Logically structure your writing as a pyramid. Present information as it is needed. Your reader shouldn't ha
... (read more)
3Austin7mo
I'd recommend structuring your code to not require jumping around either! E.g. group logic together in functions; put function and variable declarations close to where they are used; use the most local scope possible.
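
A tiny sketch of what that advice looks like in practice (the names and data shape are hypothetical, just to illustrate the locality point):

```python
def summarize_posts(posts):
    """Count posts per cause area (assumed shape: list of dicts)."""
    counts = {}                                  # declared right where it's used
    for post in posts:
        area = post.get("cause_area", "uncategorized")
        counts[area] = counts.get(area, 0) + 1   # logic grouped in one function
    return counts

posts = [{"cause_area": "global health"}, {"cause_area": "longtermism"},
         {"cause_area": "global health"}]
print(summarize_posts(posts))  # {'global health': 2, 'longtermism': 1}
```

Everything the reader needs sits in one local scope, so there's no jumping around to find where `counts` or `posts` came from.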

It would be great if that same space had the ability to gauge interest and allow people/organisations to post bounties for projects they would like to see done.

E.g. someone from FHI posts a request for a deep dive on topic X and provides a bounty for whoever does it sufficiently well first. Someone from CSER realises they would also like to know the answer and adds to the bounty. Upvotes instead of bounties could be another way to figure out which projects would be valuable to get done.

2PeterSlattery7mo
Agree. This is pretty aligned with my desire for community funding mechanisms.

tags.tag_types causing you trouble is likely the Python namespace giving you issues.
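
If it helps, here's a guess at the failure mode (a hypothetical reconstruction, since I can't see your session): rebinding the name tags in a later cell shadows the object whose .tag_types attribute you wanted:

```python
import types

# Stand-in for whatever originally exposed tag_types (hypothetical):
tags = types.SimpleNamespace(tag_types=["cause area", "meta"])
print(tags.tag_types)              # works: ['cause area', 'meta']

# A later cell rebinds the same name, e.g. to a plain list of tag strings...
tags = ["global health", "longtermism"]
# ...and now the attribute lookup fails:
print(hasattr(tags, "tag_types"))  # False: lists have no tag_types attribute
```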

Anyways, I put all of the code into a notebook to make it easier to reproduce. I hope this is close to what you had in mind. Haven't used these things much myself.

https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb

My guess was that an external document would reduce readership too much to be worth it. Nevertheless, here is a notebook with this post's content and the code:
https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb

Great question, I took the categories from here:
https://forum.effectivealtruism.org/tags/all

I have just gone off the assumption that whoever categorised the tags on this page made a good judgement call. I agree completely that longtermist stuff in particular might look like a smaller fraction than it actually is, due to it being split across multiple categories. That said, there are posts which fit under multiple longtermist categories, which you'd have to ensure are not double-counted.
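
For anyone reproducing this, a minimal sketch of the de-duplication step (the tag names and data shape are my assumptions for illustration):

```python
# Count each post toward longtermism at most once, even if it carries
# several longtermist tags (hypothetical tag names).
LONGTERMIST_TAGS = {"existential risk", "ai safety", "biosecurity"}

posts = [
    {"id": 1, "tags": ["existential risk", "ai safety"]},  # one post, two tags
    {"id": 2, "tags": ["global health"]},
]

longtermist_ids = {p["id"] for p in posts if LONGTERMIST_TAGS & set(p["tags"])}
print(len(longtermist_ids))  # 1, not 2: post 1 isn't double-counted
```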

Thanks for the feedback! I'll put the code into a notebook when I have time tomorrow; it shouldn't take many minutes.

Thanks for the positive feedback! As far as I know there isn't a way to embed plots in the EA Forum, is there something I missed?

1david_reinstein8mo
True, for Plotly I don’t think so. In general there are a few options, none of them perfect however. We’ve wrestled with this a bit. In future we may have a clean code + statistics + models + visualizations version hosted separately, and then use the forum post for narrative and nontechnical discussion.

I think that is more or less what I'm trying to say!

Think of security at a company. Asking a colleague to show their badge before you let them into the building can be seen as rude. But enforcing this principle is also incredibly important for keeping your premises secure. So many companies have attempted to develop a culture where this is not seen as a rude thing to do, but rather as a collective effort to keep the company secure.

Similarly, I think it would be positive if we developed some way to say "hey, this smells fishy" without it being viewed as a direct attack, but rather as someone participating in the collective effort to catch fraud.

1acylhalide8mo
Thanks, this makes sense! I guess the difference, though, is that if you check everyone, the probability of any given person being a fraud is like <1%, and nobody finds it offensive to be accused of being fraudulent with <1% probability. Whereas if you check only a few people, you are saying that the probabilities are significant, and that they're higher for those people than for their peers. (People tend to look at their social status/standing in relation to their peers more than they do in a more absolute sense.) It might still be workable; just wanted to add some thoughts.

I wouldn't worry about it, nothing about your writing in particular. It's not something that caused me any real distress! I think the topic of catching fraud is inherently prone to causing imposter-syndrome, if you often go around feeling like a fraud. You get that vague sense of 'oh no they finally caught me' when you see a post on the topic specifically on the EA Forum.

A central problem is that accusing something of being fraudulent carries an immense cost, as it's hard to perceive as anything but a direct attack. Whoever did the fraud has every incentive to shut you down and very little to lose, which gets very nasty very quickly.

Ideally there would be a low-commitment way to accuse someone of fraud that avoids this. Normalising something akin to "this smells fishy to me", and encouraging a culture of not taking it too personally whenever the hunch turns out wrong, might be a first step towards a culture where fraud is caught more quickly.

As a side note, maaaan did this post trigger a strong feeling of imposter syndrome in me!

1acylhalide8mo
Calling someone a fraud is a direct attack - I'm not fully sure I understand what you mean when you say you want it to carry a smaller cost, or not be taken personally by the recipient. Are you saying something like the following should be okay: "I think you're a fraud with 30% probability, and would like to not receive backlash while I investigate further and increase/decrease my confidence in the same."?
-1[comment deleted]8mo
3Kelsey Piper8mo
ooooops, I'm sorry re: the imposter syndrome - do you have any more detail? I don't want to write in a way that causes that!

Great post! I don't have a fully formed view of the consequences of EA's increasing focus on longtermism, but I do think it's important that we notice and discuss these trends.

I actually spent some of last Saturday categorising all EA Forum posts by their cause area[1], and am planning on spending next Saturday making a few graphs of any trends I can spot on the forum.

The reason I wanted to do this is exactly because I had an inkling that global poverty posts were getting comparatively less engagement than they used to, and was wondering whether ... (read more)

4Nathan Young8mo
Superb work from you! You should get in touch with the person who runs this and put it on https://www.effectivealtruismdata.com/

I completely agree with this actually. I think concerns over the unilateralist's curse are a great argument in favour of keeping funding central, at least for many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.

But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.

I think the unilateralist's curse can be avoided if we keep our experiments with other types of grantmaking out of hazardous funding domains.

Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening pr... (read more)
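
For context, the standard quadratic funding rule (as in Buterin, Hitzig, and Weyl's "A Flexible Design for Funding Public Goods"; a screening step like the one described above would simply restrict which projects are eligible) allocates to each project

\[
F_p = \Big(\sum_i \sqrt{c_i}\Big)^{2}, \qquad \text{match}_p = F_p - \sum_i c_i,
\]

where \(c_i\) is individual i's contribution to project p. Many small contributions attract a larger match than one large contribution of the same total, which is what makes the mechanism decentralizing in the first place.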

This is the high-impact opportunity I've been looking for my entire life! I've sold off all my stocks, my house and everything else I own, to maximize my donations to this project.

2tamgent8mo
Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!