All of MathiasKB's Comments + Replies

Snakebites kill 100,000 people every year, here's what you should know

Since writing this article, this is actually one of the things I've been looking into! I think it looks very promising, as many of the issues outlined by the WHO seem downstream from people simply being unable to afford high-quality antivenom. (I.e., why do people choose local healers? Because hospitals cost more and don't help either!)

It also looks like the marginal cost of high-quality antivenom would decrease by up to an order of magnitude if production were scaled up. I have yet to take an in-depth look at synthetic antivenom production, but after briefly looking into it, it seems that we are not going to get synthetic antivenom just yet.

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

Either they start as grifters but actually get good results and then rise to power (at that point they might not be grifters anymore) or they don't get any results and don't rise to power.


I largely agree with this, but I think it's important to keep in mind that "grifter" is not a binary trait. My biggest worry is not that people who are completely unaligned with EA would capture wealth and steer it into the void, but rather that of 10 EAs, the one most prone to "grifting" would end up with more influence than the rest.

What makes this so difficult is that ... (read more)

if I were to spend a few weeks in Oxford mingling with people, arguing for the importance of EU policy, that would potentially do more to change people's minds than if I were to spend that time writing on the forum.

I also don't know whether this is true, but the general idea that talking to people in person individually would be more persuasive than over text isn't surprising. There's a lower barrier to ideas flowing, you can better see how the other person is responding, and you don't have to consider how people not in the conversation might misinterpret you.

What are some high-EV but failed EA projects?

The longtermist entrepreneurship incubator still seems like a promising project to me, though difficult to execute.

Getting GPT-3 to predict Metaculus questions

man you just blew my mind, will give it a try next time I feel an urge to play around with GPT!

Getting GPT-3 to predict Metaculus questions

If the comments include a prediction, my guess is that GPT would often make the same prediction and thus become much more accurate. Not because it learned to predict things, but because there's probably a strong correlation between the community prediction and the most upvoted comment's prediction.

If the goal is to give GPT more context than just the title of the question, then you could include the descriptions for each question as well, but when I tried this I got worse results (fewer legible predictions).
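For anyone who wants to try this, here is a minimal sketch of the setup described above (hypothetical code: the prompt wording and the parsing are my own, and it assumes the legacy openai Python SDK with an API key in the environment):

```python
import re
import openai  # legacy (pre-1.0) SDK; assumes OPENAI_API_KEY is set

def predict_question(title: str, description: str = "") -> float | None:
    """Ask GPT-3 for a probability on a binary question.

    Returns the parsed probability, or None if the completion
    contains no legible prediction.
    """
    prompt = f"Question: {title}\n"
    if description:
        # Extra context sometimes helps, but as noted above it can
        # also produce fewer legible predictions.
        prompt += f"Background: {description}\n"
    prompt += "What is the probability this resolves Yes, as a percentage?\nAnswer:"

    completion = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=8,
        temperature=0,
    )
    text = completion.choices[0].text

    # Accept answers like "25%" or "25 percent"; treat anything else
    # as an illegible prediction.
    match = re.search(r"(\d{1,3})\s*(?:%|percent)", text)
    return int(match.group(1)) / 100 if match else None
```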

Is EA "just longtermism" now?

Open Philanthropy is not the only grantmaker in the EA space! If you add the FTX Community, FTX Future Fund, EA Funds, etc., my guess would be that the space recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.

I also want to emphasize that many central EA organisations are increasingly focused on longtermist concerns, and not as transparent about it as I would like them to be. People and organisations should not pretend to care about things they do not for the sake of optics. One of EA's most central tenets i... (read more)

If you add the FTX Community, FTX Future Fund, EA Funds, etc., my guess would be that the space recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.

I think starting in 2022 this will be true in aggregate – as you say, largely because of the FTX Future Fund.

However, for EA Funds specifically, it might be worth keeping in mind that the Global Health and Development Fund has been the largest of the four funds by payout amount, and by donations received it is about as big as all the other funds combined.

Harrison Durland (17d):
In my view, there is a defining tension in rationalist and EA thought regarding epistemic vs. instrumental emphasis on truth: adopting a mindset of rationality/honesty is probably a good mindset—especially to challenge biases and set community standards—but it's ultimately for instrumental purposes (although, for instrumental purposes, it might be better to think of your mindset as one of honesty/rationality, recursivity problems aside). I don't think there is much conflict at the level of "lie about what you support": that's obviously going to be bad overall. But there are valid questions at the level of "how straightforward/consistent should I be about the way all near-termist cause areas/effects pale in comparison to expected value from existential risk reduction?" It might be the case that it's obvious that certain health and development causes fail to compare on a long-term scale, but that doesn't mean heavily emphasizing that is necessarily a good idea, for community health and other reasons like you mention.
Snakebites kill 100,000 people every year, here's what you should know

They currently state explicitly, on the page I linked, that they are not.

Can I apply for funding to the Global Health and Development Fund?

The Global Health and Development Fund is not currently accepting applications for funding.

If that is not the case, I'm not too happy with their communication!

 

EDIT: whoops, didn't see Lorenzo's comment

[This comment is no longer endorsed by its author]
Snakebites kill 100,000 people every year, here's what you should know

The page linked in my comment states that they are not currently accepting unsolicited proposals, but I agree the FAQ makes it sound like they are open to being contacted. My guess is there probably isn't a clear-cut policy, and that they just want to avoid setting an expectation that they will evaluate everything sent their way.

Will send them a message, thank you :)

Snakebites kill 100,000 people every year, here's what you should know

When I first learned about the diagnostics startup, my immediate thought was that some EA fund would be interested in further evaluating it. Unfortunately, none of Open Philanthropy, EA Funds, or FTX Community are currently accepting unsolicited proposals.

The primary reason I wrote this post was to get the attention of fund managers, and hopefully get someone to figure out whether this is impactful and fund it if it is.

Peter Wildeford (17d):
EA Funds definitely accepts unsolicited proposals! That's the whole point of it!
Jorgen_Ljones (21d):
Aren't OpenPhil? https://www.openphilanthropy.org/giving/how-to-apply-for-funding#Can_I_apply_for_a_grant They specify that they have low expectations for unsolicited proposals, but it's possible to contact them about it.
Snakebites kill 100,000 people every year, here's what you should know

I wondered about this as well. There's no doubt that it would reduce snakebites, but whether it's cost-effective is more difficult to tell.

An analyst I spoke to pointed out that, after all, it's still pretty rare to be bitten by a snake. The amount of footwear you'd need to distribute per snakebite prevented is high, and likely pretty expensive.

Peter S. Park (21d):
That makes sense! Shoes are probably more expensive than malaria nets. But it might still be a better intervention point than antivenom + improving diagnosis + increasing people's willingness to go to the hospital.
MathiasKB's Shortform

Most purchases I would, on reflection, prefer not to make are ones where what I receive is worth much more than nothing but still less than the asking price, so I would never actually be compelled to throw out the superfluous stuff I buy.

Often the purchase would even be worth more than the asking price, but I would like my preferences to change such that this is no longer the case.

If a bhikkhu monk can be content owning next to nothing, surely I can be happy owning less than I currently do. The question is how I change my preferences to become more like the monk's.

MathiasKB's Shortform

Does anyone have advice on getting rid of material desire?

Unlike many people I admire, I seem to have a much larger desire to buy stuff I don't need. For example, I currently feel an overpowering urge to spend $100 on a go board, despite the fact that I have little need for one.

I'm not arguing that I have some duty to live frugally due to EA, I just would prefer to be a version of myself that doesn't feel the need to spend money on as much stupid stuff.

Dave Cortright (1mo):
The underlying desire of most addictive tendencies in our production/consumption culture is the desire to feel more connected with a tribe (Maslow's love and belonging). We are—at our core—social creatures. Our ancestors reinforced connections with tribe mates every day, and they clearly knew the values they shared with the tribe. They were living life within the parameters in which we evolved to thrive.

In our society the tribes have been disbanded in favor of a more interconnected world, and likewise values have become diffuse, making it harder for individuals to know what they truly believe in. Just as throwing 20k chickens into a barn causes them to go crazy and peck one another to death because their brains can't handle a pecking order that big, so too is it with humans, who are not able to instinctively operate in a vastly more complex and relationally fluid world where the environment has changed so radically from tribal days.

Invest in a few (3-5) deeply intimate relationships where you know you are equals and can be there unconditionally and without judgment for each other. As Robin Dunbar says in his excellent book "Friends":

"It was the social measures that most influenced your chances of surviving… The best predictors were those that contrasted high versus low frequencies of social support and those that measured how well integrated you were into your social network and your local community. Scoring high on these increased your chances of surviving by as much as 50 per cent… it is not too much of an exaggeration to say that you can eat as much as you like, drink as much alcohol as you want, slob about as much as you fancy, fail to do your exercises and live in as polluted an atmosphere as you can find, and you will barely notice the difference… You will certainly do yourself a favor by eating better, taking more exercise and popping the pills they give you, but you'll do considerably better just by having some friends."

Also see Robert Waldinger's TED talk.
Thomas Kwa (1mo):
If spending a bit of money is ok, you can implement the policy of throwing away things you don't need. Then after a few cycles of buy thing -> receive thing -> throw away thing you'll be deconditioned from buying useless things.
Editing Advice for EA Forum Users

Thanks for this, especially for your point on hedging! If you want to convey your uncertainty, there is no shame in saying "I am not sure about this" before making your claim. 


On the topic of good forum writing, a few additional things I try to keep in mind when I write:

  • Most readers will only skim-read your post. Make sure to use descriptive headlines that make it easy for the reader to jump in and out and read only the parts that interest them.
  • Logically structure your writing as a pyramid. Present information as it is needed. Your reader shouldn't ha
... (read more)
Austin (1mo):
I'd recommend structuring your code to not require jumping around either! E.g. group logic together in functions; put function and variable declarations close to where they are used; use the most local scope possible.
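A small sketch of what that advice can look like in practice (hypothetical Python, all names invented):

```python
# Before: module-level globals declared far from where they are used,
# with the computation scattered across the file.
# After: related logic grouped in one function, each variable declared
# in the most local scope possible, right before its first use.

def summarize_scores(scores: list[float]) -> dict[str, float]:
    """All the summary logic lives together in one function."""
    total = sum(scores)  # declared immediately before use
    count = len(scores)
    mean = total / count if count else 0.0
    return {"total": total, "mean": mean}

def main() -> None:
    scores = [3.0, 4.5, 5.0]  # local to main, not a module-level global
    print(summarize_scores(scores))

if __name__ == "__main__":
    main()
```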
Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it?

It would be great if that same space had the ability to gauge interest and allowed people/organisations to post bounties for projects they would like to see done.

E.g., someone from FHI posts a request for a deep dive on topic X and provides a bounty for whoever does it sufficiently well first. Someone from CSER realises they would also like to know the answer and adds to the bounty as well. Upvotes instead of bounties could be another way to figure out which projects would be valuable to get done.

PeterSlattery (1mo):
Agree. This is pretty aligned with my desire for community funding mechanisms.
EA Forum's interest in cause-areas over time and other statistics

The trouble you're having with tags.tag_types is likely a Python namespace issue.

Anyway, I put all of the code into a notebook to make it easier to reproduce. I hope this is close to what you had in mind; I haven't used these things much myself.

https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb

EA Forum's interest in cause-areas over time and other statistics

My guess was that an external document would reduce readership too much to justify it. Nevertheless, here is a notebook with this post's content and the code:
https://github.com/MperorM/ea-forum-analysis/blob/main/plots-notebook.ipynb

EA Forum's interest in cause-areas over time and other statistics

Great question, I took the categories from here:
https://forum.effectivealtruism.org/tags/all

I have just gone off the assumption that whoever categorised the tags on this page made a good judgement call. I agree completely that longtermist stuff in particular might look like a smaller fraction than it actually is, due to it being split across multiple categories. That said, there are posts which fit under multiple longtermist categories, which you'd have to make sure are not double-counted (see the sketch below).
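A minimal sketch of that deduplication step (hypothetical pandas code; the table layout and the tag-to-category mapping are invented for illustration):

```python
import pandas as pd

# Hypothetical input: one row per (post, tag) pair.
post_tags = pd.DataFrame({
    "post_id": [1, 1, 2, 3],
    "tag": ["ai-risk", "biosecurity", "global-health", "ai-risk"],
})

# Hypothetical mapping from individual tags to broader cause areas.
tag_to_area = {
    "ai-risk": "longtermism",
    "biosecurity": "longtermism",
    "global-health": "global health",
}
post_tags["area"] = post_tags["tag"].map(tag_to_area)

# Post 1 has two longtermist tags; dropping duplicate (post, area)
# pairs counts it once per cause area rather than once per tag.
deduped = post_tags.drop_duplicates(subset=["post_id", "area"])
print(deduped.groupby("area")["post_id"].count())
```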

Thanks for the feedback, will put the code into a notebook when I have time tomorrow, should not take many minutes.

EA Forum's interest in cause-areas over time and other statistics

Thanks for the positive feedback! As far as I know there isn't a way to embed plots in the EA Forum, is there something I missed?

david_reinstein (1mo):
True, for Plotly I don’t think so. In general there are a few options, none of them perfect however. We’ve wrestled with this a bit. In future we may have a clean code + statistics + models + visualizations version hosted separately, and then use the forum post for narrative and nontechnical discussion.
Liars

I think that is more or less what I'm trying to say!

Think of security at a company. Asking a colleague to show their badge before you let them into the building can be seen as rude. But enforcing this principle is also incredibly important for keeping your premises secure. So many companies have attempted to develop a culture where this is not seen as a rude thing to do, but rather a collective effort to keep the company secure.

Similarly, I think it would be positive if we developed some way to say "hey, this smells fishy" without it being viewed as a direct attack, but rather as someone participating in the collective effort to catch fraud.

acylhalide (1mo):
Thanks, this makes sense! I guess the difference, though, is that if you check everyone, the probability of any one person being a fraud is like <1%, and nobody finds it offensive to be accused of fraud with <1% probability. Whereas if you check only a few people, you are saying that the probabilities are significant, and that they're higher for those people than for their peers. (People tend to look at their social status/standing in relation to their peers more than they do in a more absolute sense.) It might still be workable; just wanted to add some thoughts.
Liars

I wouldn't worry about it; nothing about your writing in particular. It's not something that caused me any real distress! I think the topic of catching fraud is inherently prone to causing imposter syndrome if you often go around feeling like a fraud. You get that vague sense of 'oh no, they finally caught me' when you see a post on the topic specifically on the EA Forum.

Liars

A central problem is that accusing something of being fraudulent carries an immense cost, as it's hard to perceive as anything but a direct attack. Whoever committed the fraud has every incentive to shut you down and very little to lose, which gets very nasty very quickly.

Ideally there would be a low-commitment way to accuse someone of fraud that avoids this. Normalising something akin to "this smells fishy to me", and encouraging a culture of not taking it too personally whenever the hunch turns out wrong, might be a first step towards a culture where fraud is caught more quickly.

as a side note, maaaan did this post trigger a strong feeling of imposter syndrome in me!

acylhalide (1mo):
Calling someone a fraud is a direct attack - I'm not fully sure I understand what you mean when you say you want it to carry a smaller cost, or not be taken personally by the recipient. Are you saying something like the following should be okay: "I think you're a fraud with 30% probability, and I would like to not receive backlash while I investigate further and increase/decrease my confidence in the same"?
[comment deleted] (1mo)
Kelsey Piper (1mo):
ooooops, I'm sorry re: the imposter syndrome - do you have any more detail? I don't want to write in a way that causes that!
EA and Global Poverty. Let's Gather Evidence

Great post! I don't have a fully formed view of the consequences of EA's increasing focus on longtermism, but I do think it's important that we notice and discuss these trends.

I actually spent some of last Saturday categorising all EA Forum posts by their cause area[1], and am planning on spending next Saturday making a few graphs of any trends I can spot on the forum.

The reason I wanted to do this is exactly because I had an inclination that global poverty posts were getting comparatively less engagement than they used to, and was wondering whether ... (read more)

Nathan Young (1mo):
Superb work from you! You should get in touch with the person who runs this and put it on https://www.effectivealtruismdata.com/
Issues with centralised grantmaking

I completely agree with this, actually. I think concern over the unilateralist's curse is a great argument in favour of keeping funding central, at least for many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.

But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.

I think the unilateralist's curse can be avoided if we make sure to keep our experiments with other types of grantmaking out of hazardous funding domains.

Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening pr... (read more)
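To make the quadratic funding mechanism mentioned above concrete, here is a minimal sketch (hypothetical Python; the projects, contributions, and matching-pool size are invented). Each approved project's share of the matching pool is proportional to the square of the sum of the square roots of its individual contributions, so broad support beats a single large donor with the same total:

```python
import math

def quadratic_score(contributions: list[float]) -> float:
    """Raw quadratic-funding score: (sum of sqrt(contribution))^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Hypothetical data: individual contributions per project.
projects = {
    "clean-water": [10.0, 10.0, 10.0, 10.0],  # many small donors
    "pet-project": [40.0],                    # one big donor, same total
}
approved = {"clean-water", "pet-project"}  # output of the centralized screen
matching_pool = 100.0

# Only projects that passed the screen compete for matching funds.
scores = {p: quadratic_score(cs) for p, cs in projects.items() if p in approved}
total = sum(scores.values())
for project, score in scores.items():
    print(project, round(matching_pool * score / total, 2))
# clean-water gets 80.0, pet-project gets 20.0
```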

The case for infant outreach

This is the high-impact opportunity I've been looking for my entire life! I've sold off all my stocks, my house and everything else I own, to maximize my donations to this project.

tamgent (2mo):
Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!
Announcing What We Owe The Future

Looking forward to it! Will it be on Audible?

Yes, it will be once launched! (Will is doing the audiobook)

EA startup - non-profit sustainable marketplace

I don't think the EA community is uniquely suited to answer your question. Whether this is a great startup idea or not is difficult for me to figure out, and I think speaking to people in the startup and venture-capital community will get you better answers.

I think your counterfactual impact will likely be much higher if you do direct work, such as starting a new EA charity or an effective non-profit, than it would be by earning to give. Consider spending some time figuring out exactly what direct work would imply. 80K might be willing to provide advice on that as well.... (read more)

Vincent van der Holst (2mo):
Hi Mathias, sorry for missing this (I have turned email notifications on for my post now). I agree that the EA community is not suited to evaluate the idea. VCs are, but they seek an ROI, so they might like the idea but won't fund it. Based on the talks we had so far, I'm pretty sure this idea would receive seed funding if it could provide ROI. Overgeneralizing here, but the EA community or wealthy EAs can fund it, but not evaluate it. VCs can evaluate it, but can't fund it. Is it a thought to see if we can combine the two somehow? EA works with investors that can evaluate profit-for-non-profit ideas, and if they think they have potential, EA can fund it? I have researched the counterfactual (spoke to 80K and Charity Entrepreneurship) and I do not believe it's much higher (or higher at all) for me. The odds of success of this endeavor are low, but the potential profits we can donate to effective charities are huge. P.S. Great that you like the blog post, and sharing will help us!
I'm interviewing Nova Das Sarma about AI safety and information security. What shouId I ask her?

Thank you for asking this question on the forum! 

It has been somewhat frustrating to follow you on Facebook and see all these great people you were about to interview, without being able to contribute anything.

Pre-announcing a contest for critiques and red teaming

Genius idea to red-team (through comments that can provide thoughtful input) the red-team contest itself!

Butterfly Ideas

It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof butterflies not only don’t fly, but can’t fly, look how busted their wings are.


Beautiful analogy; it really made the tragedy feel all too real!

Will add 'butterfly idea' to my vocabulary; I hope others do the same.

Meditations on careers in AI Safety

What exactly would the postdoc be about? Are you and others reasonably confident your research agenda would contribute to the field?

PabloAMC (2mo):
I submitted an application about using causality as a means for improved value learning and interpretability of NNs: https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation

My main reason for putting forward this proposal is that I believe the models of the world humans operate on are somewhat similar to causal models, with some high-level variables that AI systems might be able to learn. So using causal models might be useful for AI safety. I think there are also some external reasons why it makes sense as a proposal:

  • It is connected to the work of https://causalincentives.com/
  • Most negative feedback I have received is because the proposal is still a bit too high level, and most people believe this is something worth trying out (even if I am not the right person).
  • I got approval from LTFF, and got to the second round of both FLI and OpenPhil (still undecided in both cases, so no rejections).

I think the risk of me not being the right person to carry out research on this topic is greater than the risk of this not being a useful research agenda. On the other hand, so far I have been able to do research well even when working independently, so perhaps the change of topic will turn out ok.
Brain preservation to prevent involuntary death: a possible cause area

Thanks for this post! It was great to read and learn about a topic about which I know nothing.

My primary reservation, which I'd be curious to get your thoughts on is something like this:

It seems to me that, in the abstract, there is a finite amount of space available for humans on the planet. Whether that space is taken up by me or some other human being is not too important to me. Similarly to life-extension research, it seems to me that brain preservation is spending resources so that people who currently occupy the planet will do so for longer, at the expe... (read more)

AndyMcKenzie (2mo):
Thanks for the kind feedback! The main counter-argument to the idea that there is limited space is that in the future, if humanity ever progresses to the point that revival is possible, then we will almost certainly not have the same space constraints we do now. For example, this may be because of whole brain emulation and/or because we have become a multi-planetary species. Many people, myself included, think that there is a high likelihood this will happen in the next century or sooner: https://www.cold-takes.com/most-important-century/

There is also an argument that we actually do not have limited space or resources on the planet now. For example, this was explained by Julian Simon: https://en.wikipedia.org/wiki/The_Ultimate_Resource. But that is a little bit more controversial and not necessary to posit for the sake of counter-argument, in my opinion.

A related question is: what is the point of (a) extending an existing person's life when you could just (b) create a new person instead? I think (a) is much better than (b), because of what I described as "the psychological and relational harms caused by involuntary death" in the post. But others might disagree; it depends on whether they think that humans are replaceable or not.

There is also a discussion about this on r/slatestarcodex that you might be interested in: https://www.reddit.com/r/slatestarcodex/comments/tk2krv/brain_preservation_to_prevent_involuntary_death_a/i1o2s1d/
A Landscape Analysis of Institutional Improvement Opportunities

Can you comment on why you chose not to release the quantitative model and calculations that you used to derive these conclusions? As detailed as this work is, I don't feel comfortable updating my views based on calculations I can't see for myself.

As you point out in the post, I imagine there is huge variability based on various guesstimates (as there should be!). To me at least, the most valuable part of this work lies in the model and the better understanding we get from attempting to make models, rather than in the model's conclusions.

Nathan Young (2mo):
Is there any way you would have felt more comfortable releasing the model? Say if the EAIF supported it, or if it was visible to logged in users only?

Mathias, I'm happy to share the full spreadsheet with you or anyone else on request -- just PM me for the link. In addition, anyone can see the basic structure of the model as well as two examples of working through it in our previous piece introducing the framework we used.

Making the model public opens up the possibility of it being shared without the context offered in this article, and I'm hesitant to do that before we have an opportunity to document it much more fully. My hope is that we'll be able to do that for the next iteration.

EDIT: I've decided t... (read more)

Milan Griffes on EA blindspots

It doesn't feel contradictory to me, but I think I see where you're coming from. I hold the following two beliefs, which may seem contradictory:

1. Many of the aforementioned blindspots seem like nonsense, and I would be surprised if extensive research in any would produce much of value.
2. At large, people should form and act on their own beliefs rather than deferring to what is accepted by some authority.

There's an endless number of things which could turn out to be important. All else equal, EAs should prioritise researching the things which seem the mos... (read more)

Milan Griffes on EA blindspots

To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.

I get a sense of déjà vu reading this criticism, as I feel I've seen sixteen variants of it over the years: EA has psychological problem this, deep Nietzschean struggle that, and fails to value <author's pet interest>.

If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it an... (read more)

To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.

...

If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned... I would even wager that if someone wrote a convincing case for why we should be 'taking dharma seriously', then many would start taking it seriously.

These two bits seem fairly c... (read more)

It would be extremely surprising if all of them were being given the correct amount of attention. (For a start, #10 is vanilla and highly plausible, and while I've heard it before, I've never given it proper attention. #5 should worry us a lot.) Even highly liquid markets don't manage to price everything right all the time, when it comes to weird things.

What would the source of EA's perfect efficiency be? The grantmakers (who openly say that they have a sorta tenuous grasp on impact even in concrete domains)? The perfectly independent reasoning of each EA,... (read more)

Concerns with the Wellbeing of Future Generations Bill

‘Many’ was carefully chosen not to imply more than half - perhaps very substantially less, depending on the metric. I started this analysis by assuming a base rate of less than half, but then updated my view based on granular analysis of the proposed Bill language and what we have been able to learn about the theory of change.

Then it seems to me that it would have been just as accurate to say: 'many efforts to improve policy in the last 50 years have succeeded' and conclude the opposite.

John_Myers (2mo):
Thank you for your comment. I agree we could have said 'many efforts to improve policy in the last 50 years have succeeded'. However, given our substantive analysis of the Bill, I think we would have ended up with the same concerns about its potential outcomes. Some people who do not work in policy or government seem to have the impression that attempts to improve policy generally or always move things in the intended direction, so we thought it helpful to highlight the risk of unintended consequences. The alternative formulation would not have made that point as clearly.
Brief Thoughts on "Justice Creep" and Effective Altruism

I think I disagree with the core premise that animal welfare is the odd one out. That animals have moral worth is a much smaller buy than the beliefs needed to accept longtermism.

For reference, I think the strongest case for longtermism comes when you accept claims that humanity has a non-zero chance of colonising the universe with digital human beings. It makes perfect sense to me that someone would accept that animals have high moral worth, but not the far-future stuff.

I don't think a justice explanation predicts why EAs care about animals better than the object-level arguments for caring about them do.

Devin Kalish (2mo):
I mostly agree, I don’t think I was super clear with my initial post, and have edited to try to clarify more what I mean by the “odd one out”. To respond to your point more specifically, I also agree that the reason for caring in the first place is just the strong arguments in favor of caring about non-humans, and I even agree that the formal arguments for caring about non-human animals are probably more philosophically robust that those for caring about future generations (at least in the “theory X” no-difference-made-by-identity way longtermists usually do). I think the reason the cause area is the odd one out on the EA formal arguments side is different from the reason it is the odd one out when describing EA to outsiders, to be clear, I just think that when an outsider finds the cause area weird on the list, it becomes hard to respond if the formal arguments are also less well developed for which dimension factory farming dominates the other three areas on. I hope this clarifies my position somewhat.
Apply Now | EA for Christians Annual Conference | 23 April 2022

A bit tangential, but the video introduction to effective altruism for Christians is really fantastic!

Pablo (2mo):
I agree—excellent work.
We're announcing a $100,000 blog prize

I'm curious to hear how you came to settle on the prize structure of the contest.

For example, why few but large prizes, as opposed to a pay-per-post model?
What thought have you given to the opportunity cost for the people you encourage to write blogs?

Introducing 80k After Hours

Happy you finally settled on a name ;)

Keiran_Harris (3mo):
What a wonderful day it was when we found a name most people didn't hate! We talk about this a bit at the end of the introduction episode :) (Thanks for your help!)
What brand should EA buy? If we had to buy one.

Whether it would work aside, I don't think this is a very ethical thing to do. It is fundamentally an attempt at deceit, which to me is the antithesis of what EA is all about.

 

EDIT: I think I misunderstood the idea. I read it as an attempt at hijacking a journal to use it as a platform to publish EA research. If it's just buying a journal and placing a higher emphasis on impactful research, I take back my original comment. That said, I think there's a very fine line between the former and the latter.

Argument Against Impact: EU Is Not an AI Superpower

Interestingly, the EU, China, and the US are all taking large steps to become less reliant on each other for semiconductor production. My guess is that, however strong an argument this is currently, it will be less so in the future.

I wonder to what extent governments will be successful in this effort. Are semiconductor companies easy to replicate, or is it a field where we should expect the current leaders, such as ASML (Dutch), to keep their lead even after governments pour in money?

Flipping the roles of animals and humans didn't feel particularly clever to me. Who is going to be convinced by this video who isn't already convinced?

It also focuses entirely on the suffering of wild animals at the hands of humans, which to my knowledge pales in comparison to what we do to farm animals.

I personally find Earthlings to be the best video on speciesism. I went from eating meat to not ever wanting to touch meat again in the span of an hour.

What are some artworks relevant to EA?

To learn Illustrator, I created a few posters for EA Denmark:

Global poverty: https://i.imgur.com/SMEUmUE.jpg

Animal welfare: https://i.imgur.com/9aXYVNw.jpg

Existential risk: https://i.imgur.com/6sLdojT.jpg

Longtermism: https://i.imgur.com/PS4Ap8J.jpg

I didn't pay to acquire the rights for the assets I borrowed, so they just hang in my room :)

Lizka (4mo):
Thank you for sharing these! (A great way to learn Illustrator, I'd bet.)
EU's importance for AI governance is conditional on AI trajectories - a case study

I completely agree that the classification of trajectories should be much more nuanced. I don't think a fast take-off implies these things either. The reason I bundle them together is to create two very distinct scenarios, making for a simpler analysis.

A more thorough analysis would separate these dynamics and analyse all the possible combinations. It would also be better to evaluate the EU's institutions separately and analyse more than a single lever of influence.

acylhalide (4mo):
Makes sense!
Response to Recent Criticisms of Longtermism

Makes sense, I agree with that sentiment.

Response to Recent Criticisms of Longtermism

Let me know if I misunderstood something or am reading your post uncharitably, but to me this really looks like an attempt at hiding away opinions perceived as harmful. I find this line of thinking extremely worrying.

EA should never attempt to hide criticism of itself. I am very much a longtermist and did not think highly of Torres's article, but if people read it and think poorly of longtermism then that's fine.

Thinking that hiding criticism can be justifiable because of the enormous stakes, is the exact logic Torres is criticising in the first place!

Framing my proposal as "hiding criticism" is perhaps unduly emotive here. I think that it makes sense to be careful and purposive about what types of content you broadcast to a wider audience which is unlikely to do further research or read particularly critically. I agree with Aaron's comment further down the page where he says that the effect of Torres's piece is to make people feel "icky" about longtermism. Therefore to achieve the ends which I take as implicit in evelynciara's comment (counteract some of the effects of Torres's article and produce a pi... (read more)

Flimsy Pet Theories, Enormous Initiatives

Based on your description of the documentary, I wonder to what extent Gates' explanations reflect his actual reasoning. He seems very cautious and filtered, and I doubt an explanation of a boring cost-benefit analysis would make for a good documentary.

Not that I think there necessarily was a good cost-benefit analysis, just that I wouldn't conclude much either way from the documentary.

Hauke Hillebrandt (5mo):
Good point - but it's impossible to know if there are hidden reasons for his behavior. However, I find my theory more plausible: he didn't think much about social impact initially, made a lot of money at Microsoft, then turned towards philanthropy, and then selected a few cause areas (US education, global health, and later clean energy), partially on cost-effectiveness grounds (being surprised that global health is so much more effective than US healthcare). But it seems unlikely that he systematically commissioned extensive cause prioritization work OpenPhil-style and then, after lengthy deliberation, came down on global health being a robustly good buy that is 'increasingly hard to beat'.
EU AI Act now has a section on general purpose AI systems

My own opinion is that it is a double-edged sword.

The Council's change on its own weakens the act, and will allow companies to avoid conformity assessments for exactly the AI systems that need them the most.

But the new article also makes it possible to impose requirements that solely affect general-purpose systems, without burdening the development of all other low-risk AI with unnecessary requirements.

What is the EU AI Act and why should you care about it?

Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.
