If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
I disagree, given that the post is already sitting at -36 karma.
Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.
That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I'd have to have a concrete example in front of me to figure out how to balance these views.
I didn't vote, but maybe people are worried about the EA forum being filled up with a bunch of logistics questions?
This post makes some interesting points about EA's approach to philanthropy, but I certainly have mixed feelings on "please support at least one charity run by someone in the global south that just so happens to be my own".
Thanks so much, Chris. The heading, though, clearly said "Help me make some small stride on extreme poverty where I live".
Let me just say this: if you visited the UCF project office in Kamuli and saw for yourself that even the people working at the UCF live in the exact same conditions of abject poverty as everyone else in our region (the people we are aiming to move out of poverty), you'd see why it isn't wrong at all to seek support for the work we are doing on extreme poverty.
We are simply trying to build a self-sustainable ...
It might be more useful if you explained why the arguments weren't persuasive to you.
So my position is that most of your arguments are worth some "debate points", but that mitigating potential x-risks outweighs this.
Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world.
I've personally made the mistake in the past of thinking that the Overton Window was narrower than it actually was. So even though such laws may not seem viable now, my strong expectation is that this will quickly chan...
I find the idea of a reverse burden of proof interesting, but tbh I wasn’t really persuaded by the rest of your arguments. I guess the easiest way to respond to most of them would be “Sure, but human extinction kind of outweighs it” and then you’d reraise how these risks are abstract/speculative and then I’d respond that putting risks in two boxes, speculative and non-speculative, hinders clear thinking more than it helps. Anyway, that’s just how I see the argument play out.
~~In any case my main worry about strong lia...~~
I have very mixed views on Richard Hanania.
On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).
On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories, and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more cr...
Yeah, it's possible I'm taking a narrow view of what a professional organisation is. I don't have a good sense of the landscape here.
I guess I'm a bit skeptical of this proposal.
I don't think we'd have much credibility as a professional organisation. We could require people to do the intro and perhaps even the advanced fellowship, but that's hardly rigorous training.
I'm worried that trying to market ourselves as a professional organisation might backfire if people end up seeing us as just a faux one.
I suspect that this kind of association might be more viable for specific cause areas than for EA as a whole, but there might not be enough people except in a couple of countries.
Thank you for posting this publicly. It's useful information for everyone to know.
Wasn't there some law firm that did an investigation? Plus some other projects listed here.
It would be useful for you to clarify exactly what you'd like to see happen and how this differs from the things that did happen, even though this might be obvious to someone who is high-context on the situation like you are. Otherwise, I'd have to do a bit of research to figure out what you're suggesting.
The post has a footnote, which reads:
Although EV conducted a narrow investigation, the scope was far more limited than what I’m describing here, primarily pertaining to EV’s legal exposure, and most results were not shared publicly.
As far as I know, what has been shared publicly from the investigation is that no one at EVF had actual knowledge of SBF's fraud.
I didn't know that CHAI or 80,000 Hours had recommended material.
The 80,000 Hours syllabus amounts to "go read a bunch of textbooks", which is probably not ideal for a "getting started" guide.
I was there for an AI Safety workshop, though I can't remember the content. Do you remember what you included?
I've found that purely open discussion sometimes leads to less valuable conversation, so in both cases I'd focus on a few specific discussion prompts and on trying to help people come to a conclusion on some question.
That's useful feedback. Maybe it'd be best to take some time at the end of the first session of the week to figure out what questions to discuss in the second session? This would also allow people to look things up before the discussion and take some time for reflection.
...I'd be keen to hear specifically what the pre-requisite knowledge is - just in order to
I'm quite tempted to create a course for conceptual AI alignment, especially since agent foundations has been removed from the latest version of the BlueDot Impact course[1].
If I did this, I would probably run it as follows:
a) Each week would have two sessions. One to discuss the readings and another for people to bounce their takes off others in the cohort. I expect that people trying to learn conceptual alignment would benefit from having extra time to discuss their ideas with informed participants.
b) The course would be less introductory, though without...
I think the biggest criticism this cause will face from an EA perspective is that it's going to be pretty hard to argue that moving more talent to first-world countries to do random things is better than either convincing more medical, educational, or business talent to move to developing countries to help them develop, or focusing on bringing more talent to top cause areas. I'm not saying that such a case couldn't be made, just that I think it'd be tricky.
The upshot is: I recommend only choosing this career entry route if you are someone for whom working exclusively at EA organisations is incredibly high on your priority list.
I think taking a role like this early on could also be high-value if you're trying to determine whether working in a particular cause area is for you; it's often useful to figure that out pretty early on. Of course, the fact that it isn't the exact same job you might be doing later on might make it less valuable for this.
I'm perfectly fine with holding an opinion that goes against the consensus. Maybe I could have worded it a bit better though? Happy to listen to any feedback on this.
Sorry, I misread the definition of ex ante.
I agree that the post poses a challenge to the standard EA view.
I don't see "There are no massive differences in impact between individuals" as an accurate characterization of the claim the argument actually supports.
"There are no massive ex ante differences in impact between individuals" would be a reasonable title. Or perhaps "no massive identifiable differences"?
I can see why this might seem like an annoying technicality. I still think it's important to be precise and rounding arguments off like this increases the chances that people talk past each other.
"Is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better" - Sorry, that's a non-sequitur. The state of the world is different from our knowledge of it. The map is not the territory.
"X is false" and "We don't know whether X is true or false" are different statements.
It's fine to mention other factors too, but the claim (at least from the outline) seems to be that "it's hard to tell" rather than "there are no large differences in impact". Happy to be corrected if I'm wrong.
"I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared." - There's a distinction between challenges with evaluating differences in impact and whether those impacts exist.
The other two arguments listed in the outline are "Does this encourage elitism?" and a pragmatic argument that individualized impact calculations are not the best path of action.
None of these are the argument made in the title.
I gave this a downvote for the clickbait title, which, judging from the outline, doesn't seem to match the actual argument. Apologies if this seems unfair; titles like this are standard in journalism, but I hope they don't become standard in EA, as that might affect our epistemics. This is not a comment on the quality of the post itself.
Sorry to hear this. Unfortunately, AI Safety opportunities are very competitive.
You may want to develop your skills outside of the AI safety community and apply to AI Safety opportunities again further down the track when you're more competitive.
Happy to talk that through if you'd like, though I'm kind of biased, so probably better to speak to someone who doesn't have a horse in the race.
I don't know if this can be answered in full generality.
I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you're doing now
• Potential for impact in AI safety
I would love to see attempts at either a community-building fellowship or a community-building podcast.
With the community-building podcast, I suspect that people would prefer something that covers topics relatively quickly as community builders are already pretty busy.
a) I suspect that AI able to replace human labour will create such abundance that it will eliminate poverty (assuming that we don't then allow the human population to increase to the maximum carrying capacity).
b) The connection the other way around is less interesting. Obviously, AI requires capital, but once AI is able to self-reproduce, the amount of capital required to kickstart economic development becomes minimal.
c) "I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has the adverse effect?" - How is it having an adverse effect?
Yep, that's the main one, but to a lesser extent Sora being ahead of schedule + realising what this means for AI agents.
It's less about my median timeline moving down, but more about the tail end not extending out as far.
I was previously very uncertain about this, but given the updates in the last week, I'm now feeling confident enough in my prediction of the future that I regret any money I put into my super (our equivalent of a pension).
Please do not interpret this comment as financial advice, rather just a statement of where I am at.
A few questions that you might find helpful for thinking this through:
• What are your AI timelines?
• Even if you think AI will arrive by X, perhaps you'll target a timeline of Y-Z years because you think you're unlikely to be able to make a contribution by X
• What agendas are you most optimistic about? Do you think none of these are promising and what we need are outside ideas? What skills would you require to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else's?
• How enthusiastic are...
Do the intro fellowship completions only include EA Intro Fellowship, not people doing the AI Safety Fundamentals course?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is unethical to try to guide society without broader societal participation.
I suppose it depends on how much of an emergency you consider the current situation to be.
If you think it's truly a dire situation, I expect almost no-one would reason as follows: "Well, we're insufficiently diverse, it'd be immoral for us to do anything, we should just sit over here a...
If EA decided to pursue the politics and civil society route, I'd suggest it would likely make sense to follow a strategy similar to the one the Good Ancestors Project has been following in Australia. This project has done a combination of a) outreach to policy-makers, b) co-ordinating an open letter to the government, c) making a formal submission to a government inquiry, and d) walking EAs through the process of making their own submissions (you'd have to check with Greg to see if he still thinks all of these activities are worthwhile).
Even though AI Pol...
If this ends up succeeding, then it may be worthwhile asking whether there are any other sub-areas of EA that might deserve their own forum, but I suppose that's more a question to ask in a few months.
To be honest, I don't really find these kinds of comments criticising young organisations, which likely have access to limited funding, to be helpful. I think there are some valid issues to be discussed, but I'd much rather see them discussed at an ecosystem level. Sure, it's less than ideal that low-paid internships provide an advantage to those from a particular class, but it's also easier for wealthier people to gain a college degree, and I think it'd be a mistake for us to criticise universities for offering college degrees. At least with th...
I'm not going to fully answer this question, because I have other work I should be doing, but I'll toss in one argument: if different domains (cyber, bio, manipulation, etc.) have different offense-defense balances, a sufficiently smart attacker will pick the domain with the worst balance. This recurses down further for at least some of these domains, since they aren't just a single thing but a broad collection of vaguely related things.
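A toy sketch of that first point, with numbers invented purely for illustration (they're not from the comment or any actual estimate): once the attacker can choose their domain, overall safety is bounded by the worst per-domain balance rather than the average.

```python
# Toy illustration with made-up numbers: the attacker picks the domain where
# defence is weakest, so the minimum ratio matters, not the average.
defense_advantage = {
    "cyber": 1.4,         # >1: defence has the edge (invented figure)
    "bio": 0.6,           # <1: offence has the edge (invented figure)
    "manipulation": 0.9,  # roughly balanced (invented figure)
}

weakest = min(defense_advantage, key=defense_advantage.get)
print(f"A sufficiently smart attacker targets: {weakest} ({defense_advantage[weakest]})")
# -> A sufficiently smart attacker targets: bio (0.6)
```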
Oh, I can see why it is ambiguous. I meant whether it is easier to attack or defend, which is separate from the "power" that attackers and defenders have.
"What incentive is there to destroy the world, as opposed to take it over? If you destroy the world, aren't you sacrificing yourself at the same time?"
Some would be willing to do that if they can't take it over.
Your argument in objection 1 doesn't address the position of people who are worried about an absurd offense-defense imbalance.
Additionally: It may be that no agent can take over the world, but that an agent can destroy the world. Would someone build something like that? Sadly, I think the answer is yes.
Pretty terribly. We fell into in-fighting and everyone with an axe to grind came out to grind it.
We need to be able to better navigate such crises in the future.
Looks like outer alignment is actually more difficult than I thought. Sherjil Ozair, a former DeepMind employee, writes:
"From my experience doing early RLHF work for Gemini, larger models exploit the reward model more. You need to constantly keep collecting more preferences and retraining reward models to make it not exploitable. Otherwise you get nonsensical responses which have exploited the idiosyncracy of your preferences data. There is a reason few labs have done RLHF successfully"
In other words, even though we look at things like ChatGPT and go, "Wow,...
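Roughly, the loop the quote describes looks something like the sketch below. This is a hypothetical illustration of the dynamic only; the function names are placeholders of my own, not anyone's actual pipeline.

```python
# Hypothetical sketch of iterative RLHF as described in the quote: a policy
# optimised against a fixed reward model eventually games its quirks, so fresh
# preference data must be collected and the reward model retrained each round.

def iterative_rlhf(policy, reward_model, optimize_policy, looks_exploited,
                   collect_preferences, retrain_reward_model, rounds=5):
    for _ in range(rounds):
        # RL against the current reward model; larger policies exploit it harder.
        policy = optimize_policy(policy, reward_model)
        if looks_exploited(policy, reward_model):
            # Nonsensical high-reward outputs mean the reward model has been gamed:
            # gather new human comparisons and retrain it before continuing.
            prefs = collect_preferences(policy)
            reward_model = retrain_reward_model(reward_model, prefs)
    return policy
```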
For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:
...