titotal

Computational Physicist
7734 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
660

Welcome to the forum. You are not missing anything: in fact, you have hit upon some of the most important and controversial questions about the EA movement, and there is wide disagreement on many of them, both within EA and among EA's various critics. I can try to give both internal and external sources that ask or rebut similar questions.

Regarding the issue of unintended consequences from global aid, and the global vs local question: this was raised by Leif Wenar in a hostile critique of EA here. You can read some responses and rebuttals to the piece here and here.

With regard to the merits of longtermism, this will be the theme of the upcoming debate week, so you should be able to get a feel for the debate within EA there. Plenty of EAs are not longtermist for exactly the reasons you described. Longtermism is the focus of a lot of external critique of EA as well, with some seeing it as a dangerous ideology, although that author has themselves been exposed for dishonest behaviour.

AI safety is a highly speculative subject, and there is a wide variety of views on how powerful AI can be, how soon "AGI" could arrive, how dangerous it is likely to be, and what the best strategy is for dealing with it. To get a feel for the range of viewpoints, you could try searching for "P(doom)", a rough estimate of the probability of AI-caused destruction. I might as well plug my own argument for why I don't think it's that likely. For external critics, Pivot to AI is a newsletter that compiles articles with the perspective that AI is overhyped and that AI safety isn't real.

The case for "earning to give" is given in detail here. The concern you raise about working for unethical companies is one of the most common objections to the practice, particularly in the wake of the SBF scandal; however, in general EA discourages ETG through jobs that are directly harmful.

Again, I'm not sure exactly how to respond to comments like this. Like, yeah, if AI could reliably do everything a top researcher does, it could enable a lot of breakthroughs. But I don't believe that an AI will be able to do that anytime soon. All I can say is that there is a massive gap between current AI capabilities and what they would need to fully automate a materials science job. 30 years sounds like a long time, but AI winters have lasted that long before: there's no guarantee that, just because AI has advanced rapidly recently, it will not stall out at some point.

I will say that I just disagree that an AI could suddenly go from "no major effect on research productivity" to "automate everything" in the span of a few years. The difficulty of the latter compared to the former is just too massive, and with every new technology it takes a lot of time to experiment and figure out how to use it effectively. AI researchers have done a lot of work to figure out how to optimise and get good at the current paradigm: but by definition, the next paradigm will be different, and will require different things to optimise.

Hey, thanks for weighing in. Those seem like interesting papers, and I'll give them a read-through.

To be clear, I have very little experience in quantum computing and haven't looked into it much, so I don't feel qualified to comment on it myself (hence why it was just an aside there). All I am doing is relaying the views of prominent professors in my field, who feel very strongly that it is overhyped and were willing to say so on the panel, although I do not recall them giving much detail on why they felt that way. This matches the general views I've heard from other physicists in casual conversations. If I had to guess the source of these views, I'd say it was skepticism of the ability to actually build such large-scale fault-tolerant systems.

Obviously this is not strong evidence and should not be taken as such. 


From my (small) experience in climate activist groups, I think this is an excellent article. 

Some other points in favour:  

Organising for small, early wins allows your organisation to gain experience with how to win, and what to do with said wins. A localised climate campaign will help you understand which messages resonate with people and which are duds, and familiarise yourself with how to deal with media, government, etc. 

It also helps to scale with your numbers: a few hundred people aren't going to be enough to stop billion-dollar juggernauts, but they can make local councils feel the heat.

One counterpoint: you shouldn't be so unambitious that people feel like you're wasting their time. If Just Stop Oil had started with a campaign to put flower gardens outside public libraries, they wouldn't have attracted the committed activist base they needed.

If you look at the previous threads you posted, you'll see I was a strong defender of giving your project a chance. I think grassroots outreach and support in areas like yours is a very good thing, and I'm glad to see you transparently report on your progress with the project. 

That being said, I have to agree with the others here that investing in crypto coins like the one you mentioned is generally a bad idea. I have not heard of either of the people you claim are backing the project. The statement that "most people believe Jelly will soon be the new tiktok in the west" is not at all true. I live in the west and I guarantee you that almost nobody has ever heard of this project, and there has not been significant buzz around crypto projects in the west for a good couple of years now.

If you are skeptical, I recommend you go on Reddit and ask people in non-crypto spaces whether they have heard of Jelly or are excited about the idea.

People can make money off crypto, but for the average user it's more or less a casino where the odds are not in your favour.

I apologise if this comes off as overly critical, but I have heard of a lot of people who have fallen victim to scammers and scoundrels in the crypto space, and I don't want you to be one of them.

1-4 is only unreasonable because you've written a strawman version of 4. Here is a version that makes total sense:

1. You make a superficially compelling argument for invading Iraq

2. A similar argument, if you squint, can be used to support invading Vietnam

3. This argument for invading Vietnam was wrong because it made mistakes X, Y, and Z

4. Your argument for invading Iraq also makes mistakes X, Y, and Z

5. Therefore, your argument is also wrong. 

Steps 1-3 are not strictly necessary here, but they add supporting evidence to the claims. 

As far as I can tell from the article, they are saying that you can make a counting argument which implies that it's impossible to train a working model with SGD. They are using this as a jumping-off point to explain the mistakes that lead to flawed counting arguments, and then they spend the rest of the article trying to prove that the AI misalignment counting argument is making these same mistakes.
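
To make that concrete, here is a toy version of that kind of counting argument (my own sketch for illustration, not an example taken from the article). Count the Boolean functions $f : \{0,1\}^n \to \{0,1\}$: there are $2^{2^n}$ of them in total. If the training data pins down the value of $f$ on $m$ inputs, then $2^{2^n - m}$ functions remain consistent with the data, and the fraction of those that also behave correctly on $k$ held-out test inputs is

$$\frac{2^{\,2^n - m - k}}{2^{\,2^n - m}} = 2^{-k},$$

which is vanishingly small for any sizeable test set. Taken at face value, this "proves" that a trained model should almost never generalise, which is empirically false for SGD; the article's point, as I read it, is that counting over functions with a uniform measure is the flawed step, and that the misalignment counting argument leans on the same step.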

You can disagree with whether or not they have actually proved that the AI misalignment argument makes a comparable mistake, but that's a different problem from the one you claim is going on here.

This again seems like another "bubble" thing. The vast majority of conservatives do not draw a distinction between USAID and foreign aid in general. And I would guess they do associate foreign aid with "woke", because "woke" is a word that is usually assigned based on vibes alone, to things perceived as taking away from the average American to give to some other minority. Foreign aid involves spending American money to help foreigners; it's absolutely perceived as "woke".

Look, I wish we lived in a world where people were rational and actually defined their terms and made their decisions accordingly, but that's not the world we live in. 

"I don't think foreign aid is at risk of being viewed as woke. Even the conservative criticisms of USAID tend to focus on things that look very ideological and very not like traditional foreign aid."

This just isn't true. Yes, exaggerated claims of "wastefulness" are one of the reasons they are against it, but there are many more who are ideologically opposed to foreign aid altogether. 

I can link you to this exchange I had with a conservative, where they explicitly stated that saving the lives of a billion foreigners would not be worth increasing the national deficit by 4%, because they are ideologically opposed to American taxpayer money saving foreign lives, no matter how efficiently it does so. Or see the insanely aggressive responses to this seemingly innocuous Scott Alexander tweet. Or here is a popular right-wing meme specifically mocking liberals for having large moral circles.

I suspect that you are in a bubble, where the conservatives you know are fine with foreign aid, so you extend that to conservatives in general. But in a broader context, 73% of Republicans want to cut foreign aid, while only 33% of Democrats do.
