Thanks for sharing your initiative. I do have some significant doubts about this project.
Have you interviewed charities and asked them whether they prefer donations through your scheme vs donations made directly to them?
Is there a chance that this project has negative impact by cannibalizing direct donations and turning them into indirect donations via your platform, potentially against the will of the charities themselves, i.e., against their judgement that they could have more impact with direct donations?
Or alternatively, looking at the opposite...
Related: There is EA the actual movement, and EA the philosophy. I wonder how much we are losing out on by not having a clear line between the two. Maybe internally this distinction can be carefully navigated, but to an outsider it is one and the same. I wonder if that might be one of the things that could be improved about EA.
I imagine it feels challenging to share that and I applaud you for that.
While my EA experiences have been much more positive than yours, I do not doubt your account. For many of the points you mention, I can see milder versions in my own experience. I believe your post points towards something important.
Not if this just destroys momentum towards sustainable funding for AI safety and other longtermist causes.
Downvoted for several reasons: because I would expect colleagues in any work environment to occasionally hook up, because I think it's very unkind to assume sexual relations in the workplace are indicative of a problem, because I'm against outing people's sex lives unless directly relevant to a scandal. And finally, because it seems unnecessary to mention polyamory when talking about two people hooking up.
(Retracted after more consideration. I still disagree with the wording of the comment I responded to but can now see it points towards a real problem)
This is nonsense. Financial firms typically have strict disclosure rules about relationships between colleagues because people will commit fraud out of loyalty to people they're fucking. As, y'know, may well have happened here!
Strongly disagree-voted because "I wish they had sat down" doesn't address the publicly stated reason why Binance pulled out. It makes it seem like they had no good reason, and a good conversation would have fixed the issues. Without knowing much, this seems implausible to me.
Also, I consider "I don't think he did anything in bad faith" to be somewhat irresponsible. If SBF actually did something wrong, then EAs going around and supporting him by saying "I don't think he did anything wrong" will hurt the optics of this further.
Curious why this is getting downvoted. It seems like another initiative in the Applied Rationality space, which sounds quite useful to me.
While I'm personally not interested in the bootcamp, I am curious if the people who downvoted have specific criticism or reservations about the program.
Location: Graz, Austria
Remote: Yes
Willing to relocate: For the right opportunity
Skills:
I frequently catch myself (and I'm embarrassed to admit this) being more likely to upvote posts by users I know. I also find myself anchoring my vote to the existing vote count: if a post has a lot of upvotes, I am less likely to downvote it. I'm pretty sure I'm not the only one.
Furthermore, I notice that the vote count influences my reading of each post more than it should. Groupthink at its best.
I suspect that if the forum hid the vote count for a month, there would be significant changes in voting patterns. That being said, I'm not sure these changes would actually influence the vote-sorted order of the posts, but they might. I suspect it would also change the nature of certain discussions.
In order to make this even remotely plausible, the rules for tax deductible charities would need to be far more stringent. And then you get a situation like we currently have in Austria, where not a single EA-aligned charity is tax-deductible at all.
Nevertheless, it does send a certain signal to the public. The way things look is important, especially when it comes to completely legal ways to circumvent taxes, where intent plays a role.
The justification of crypto regulation requires background information that outside observers don't have. Also, it's impossible to judge from the outside whether or not tax savings was one of the arguments considered in addition to the regulatory situation.
There is no extreme poverty or starvation in democratic countries
This seems like a strong claim to me. What's your source for that?
and access to education and health care is one hundred percent, at least in older democracies. Younger ones are getting there fast.
Where do you draw the line between older and younger democracies? Isn't the US pretty old compared to other democracies [1] - and does it provide "100% access to health care" to its citizens?
...all countries and all people lived in democracies the major problems of humanity would be solved or
pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem
While there is significant support for "AI as cause area #1", I know plenty of EAs who do not agree with this. Therefore, "generally agreed upon" feels like too strong a wording to me. See also my post on why EAs are skeptical about AI safety.
For viewpoints from professional AI researchers, see Vael Gates's interviews with AI researchers on AGI risk.
I mention those pieces not to argue that AI risk is overblown, but rather to shed...
I found myself confused about the quotes, and would have liked to hear a bit more where they came from. Are these verbatim quotes from disillusioned EAs you talked to? Or are they rough reproductions? Or completely made up?
The sample is biased in many ways: because of the places where I recruited, interviews that didn't work out because of time-zone differences, people who responded too late, etc. I also started recruiting on Reddit and then dropped that in favour of Facebook.
So this should not be used as a representative sample; rather, it's an attempt to get a wide variety of arguments.
I did interview some people who are worried about alignment but don't think current approaches are tractable. And quite a few people who are worried about alignment but don't think it should ge...
I'm not quite sure I read the first two paragraphs correctly. Are you saying that Cotra, Carlsmith and Bostrom are the best resources but they are not widely recommended? And people mostly read short posts, like those by Eliezer, and those are accessible but might not have the right angle for skeptics?
I'm a software entrepreneur transitioning into higher-impact ventures. Had a mentoring call with Yonatan a couple of months ago. What I really liked about his approach was the structure of the call: first, gathering an overview of the issues on the table; second, going through them in a fast-forward kind of way; and third, figuring out which ones are the most important to talk about.
The outcomes of the call directly led into the next steps I needed to explore this path.
The fact that he asked for feedback at the end of the call shows me that Yonatan is seri...
The German-speaking EA meetups I know are all very happy to switch to English whenever non-German-speakers are present. Can't imagine it would be a problem anywhere in the German-speaking world!
Would you say that inexperienced people benefit less from a Mastermind than experienced people? Or would you say that they benefit so little that a Mastermind is not worthwhile for them?
If your claim is that Masterminds are only worthwhile for experienced people, then I disagree for two reasons:
First, the way I see Masterminds, one core aspect is that a group of peers can be much more effective in thinking through problems than a single individual. This is true even if none of my peers have any experience that I don't have. It is surely not true for any imagina...
I believe a documentary could be a great vehicle to explain EA and get people interested.
Obviously it would need to explain EA principles, but there is also room to include emotion and personal stories, which might be much more important in terms of the effect on the viewer.
Perhaps the emotion and personal stories could make up more than half of the film. One documentary that does this really well is "Chasing Ice". It's about James Balog, a photographer documenting climate change by filming glaciers. The film presents the science in a clear way, but it's...
Not OP, but I'm guessing it's at least unclear for the non-safety OpenAI positions listed, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so people working in these positions would have to be more careful and proactive to make the impact positive. I still think it could be great.) The same goes for many similar positions on the sheet; I'm pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.
What does your definition of "offsetting" include? Only projects that reduce CO2 in a very direct way (e.g. building clean power plants)?
Or would you include political advocacy and research? If so, check out the work of Founders Pledge:
According to Founders Pledge estimates, the CO2 savings from donating 100 USD (maybe 1 ton per USD, with high uncertainty) will greatly exceed the emissions from your flight (which might be on the order of 1 ton) [1]. Donating 100 USD to Atmosfair, while less effective, would also offset this flight [2]. If you include the value of your time, the cost of the train trip might be far, far higher.
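To make the comparison concrete, here is a minimal back-of-the-envelope sketch. All figures are my own assumptions for illustration (roughly 1 ton saved per USD donated, roughly 1 ton emitted by the flight), not official Founders Pledge numbers, and the uncertainty on both is very high:

```python
# Back-of-the-envelope comparison: offset from a donation vs. flight emissions.
# All numbers below are illustrative assumptions with high uncertainty.
donation_usd = 100
tonnes_saved_per_usd = 1.0      # assumed effectiveness of the donation
flight_emissions_tonnes = 1.0   # assumed emissions of the flight

tonnes_offset = donation_usd * tonnes_saved_per_usd
ratio = tonnes_offset / flight_emissions_tonnes

print(f"Donation offsets ~{tonnes_offset:.0f} t CO2, "
      f"flight emits ~{flight_emissions_tonnes:.0f} t "
      f"(roughly {ratio:.0f}x the flight)")
```

Under these assumptions the donation offsets the flight roughly a hundred times over, which is why the exact per-ton figure matters much less than the order of magnitude.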
Plane emissions are further complicated, if you live in the EU, by emission certificates - which might cause a counterfactual CO2 saving, when deciding ...
A new report from Founders Pledge just came out - although it's just an overview article and doesn't go into much depth. https://founderspledge.com/stories/changing-landscape
Relevant work demonstrating bright full-room lighting as an alternative to an SAD lamp: https://forum.effectivealtruism.org/posts/bwhDhZQvbEcG4FEb8/preprint-is-out-100-000-lumens-to-treat-seasonal-affective
Yes, that's what it is. "We" as in "the author and the reader". There is no co-author or organization involved in this.
I like your post because it puts some more backstory behind an argument that many people usually accept at face value.
I don't quite understand this argument:
If we can geoengineer or capture enough to offset 60% of our emissions in 2030, and then in 2031 we reduce our emissions by 1% (as measured at the smokestack), then the environmental damage will not fall from 40% to 39.6%, it will fall to 39%. So it's still a one-percentage-point change whether or not we do geoengineering and carbon dioxide removal.
This assumes that geoengineering will cause effec...
The most interesting part of your post, to me, is your risk model [1]. I would be curious to hear some more feedback from other people on it.
I turned it into a Guesstimate, adjusting some of the numbers and using population figures from Austria [2].
[1] https://docs.google.com/document/d/1A0jcxj4n0BvNt_jMunHT5WSsAKFzuVJJyaaqcK9Z1HU/edit#
Thanks for the explanation on extreme individual precautions, that made things clearer.
I'm curious what you're thinking of when you say "adopt measures that can plausibly be sustained for one year or even longer"?
I'm thinking of simple, low-cost changes to habits and my living environment that reduce the chances of infection with coronavirus and other illnesses. For example: improving personal hygiene practices (how to handle laundry, when to disinfect hands, how to keep the kitchen super clean, disinfecting electronic devices), chan...
You mention exponential spread, working from home, and avoiding travelling after March.
But what is the endgame here? How long do we need to stop travelling for? Should we apply these measures, as far as possible, starting in April and keep them up until a vaccine is available in 1-2 years? Will the number of cases level off eventually?
I assume there is no scientific consensus on these questions. If the virus is here to stay, then there might be little value in adopting extreme individual precautions for just one or two months. Afterwards, when you stop tak...
I know next to nothing about this stuff, but I was thinking that it would be good to at least avoid the virus in the period when there might not be enough hospital beds and the health system is very overwhelmed. So it might make sense to take more extreme precautions in that time.
I would think hard about what the relevant resources are that you're trading off against each other. Are your hobbies important for your well-being and relaxation? Is it possible that by starting to monetize your hobbies, you might get less enjoyment out of them? Maybe it will also create some imbalance as you spend more time on them than you otherwise would? Or perhaps it's the opposite and monetizing your hobbies would actually increase the quality of your leisure time? Perhaps you can run a time-limited experiment to find out.
Also, as a full-t...
Related: this quote from the FAQ on their website
"They will spend your money directly" - seems like a strong statement, makes it sounds like charities usually do not invest any of the money they receive. Is that true? I don't know, just flagging this for further discussion.