Fods12

Hi, my name is James Fodor. I am a longtime student and EA organiser from Melbourne. I love science, history, philosophy, and using these to make a difference in the world.

Comments

Concern about the EA London COVID protocol

Great point about ventilation. I am not aware of any evidence that hand sanitisation in particular is merely 'safety theater'. Surface transmission may not be the major method of viral spread, but it is still a method, and hand sanitisation is a very simple intervention. Also, to emphasise something I mentioned in the post, masks are definitely not 'safety theater'. It is good to see that the revised COVID protocol now states that mask use will be encouraged and that masks will be widely available.

Concern about the EA London COVID protocol

I don't understand how Australia's travel policy is relevant. I'm not asking for anything particularly unusual or onerous; I would just expect a community of effective altruists to follow WHO guidelines on reducing the spread of COVID. I honestly don't understand the negative reaction.

Concern about the EA London COVID protocol

Thanks Amy, I think these clarifications significantly improve the policy. I disagree with the decision not to mandate masks, but I understand there will be differences in views there. However, mentioning that they are encouraged may be just as effective at ensuring widespread use. That was part of my original concern: I did not feel this aspect of norm-setting was as evident in the original version of the policy.

DontDoxScottAlexander.com - A Petition

It doesn't seem to me this has much relevance to EA.

Running Effective Altruism Groups: A Literature Review

Hi David,

We deliberately included only information based on specific empirical evidence, not simply advice or recommendations. If readers of the review wish to incorporate additional information or assumptions in deciding how to run their groups, then of course they are welcome to do so.

If you have any particular sources or documents outlining what has been effective in London I'd love to see them!

Effective Altruism is an Ideology, not (just) a Question

Hi everyone, thanks for your comments. I'm not much for debating in comments, but if you would like to discuss anything further with me or have any questions, please feel free to send me a message.

I just wanted to make one clarification that I feel didn't come across strongly in the original post. Namely, I don't think it's a bad thing that EA is an ideology. I do personally disagree with some commonly held assumptions and methodological preferences, but the fact that EA itself is an ideology is, I think, a good thing, because it gives EA substance. If EA were merely a question, I think it would have very little to add to the world.

The point of this post was therefore not to argue that EA should try to avoid being an ideology, but that we should recognise the assumptions and methodological frameworks we typically adopt as an EA community, critically evaluate whether they are all justified, and then, to the extent they are justified, defend them with the best arguments we can muster, always remaining open-minded to new evidence or arguments that might change our minds.

Effective Altruism is an Ideology, not (just) a Question

People who aren't "cool with utilitarianism / statistics / etc" already largely self-select out of EA. I think my post articulates some of the reasons why this is the case.

Critique of Superintelligence Part 5

Thanks for the comment!

I agree that the probabilities matter, but then it becomes a question of how these are assessed and weighed against each other. On this basis, I don't think it has been established that AGI safety research has strong claims to higher overall EV than other such potential mugging causes.

Regarding the Dutch book issue, I don't really agree with the argument that 'we may as well go with' EV because it avoids these cases. Many people would argue that the limitations of the EV approach, such as having to give a precise probability for every belief and not being able to suspend judgement, also do not fit with our picture of 'rational'. It's not obvious why hypothetical better behaviours are more important than these considerations. I am not pretending to resolve this argument; I am just trying to raise the issue as relevant for assessing high-impact, low-probability events. EV is potentially problematic in such cases, and we need to talk about this seriously.

Critique of Superintelligence Part 4

Hi Zeke,

I give some reasons here why I think that such work won't be very effective, namely that I don't see how one can achieve sufficient understanding to control a technology without also attaining sufficient understanding to build that technology. Of course that isn't a decisive argument so there's room for disagreement here.

Critique of Superintelligence Part 3

Hi Zeke!

Thanks for the link about the Fermi paradox. Obviously I could not hope to address all arguments about this issue in my critique here. All I meant to establish is that Bostrom's argument does rely on particular views about the resolution of that paradox.

You say 'it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute'. Respectfully, I just don't agree. It all hinges on what is meant by 'motivation' and 'final goal'. You also say "it just seems clear that you can program an AI with a particular goal function and that will be all there is to it", and again I disagree. A narrow AI, sure, or even a highly competent AI, but not an AI with human-level competence in all cognitive activities. Such an AI would have the ability to reflect on its own goals and motivations, because humans have that ability, and therefore it would not be 'all there is to it'.

Regarding your last point, what I was getting at is that you can change a goal either by explicitly rejecting it and choosing a new one, or by changing your interpretation of an existing goal. The latter is an alternative path by which an AI could change its goals in practice, even while still regarding itself as following the goals it was programmed with. My point isn't that this makes goal alignment a non-problem; my point is that it makes 'an AI will never change its goals' an implausible position.
