I'm not exactly sure what this job actually involves based on the forum post. Going by a link in the form, the duties might be:
Hello, to clarify #1, I would say:
It could be the case that future AI systems are conscious by default, and that it is difficult to build them without them being conscious.
Let me try to spell out my intuition here:
If many organisms have property X, and property X is rare amongst non-organisms, then property X is evolutionarily advantageous.
Consciousness meets this condition, so it is likely evolutionarily advantageous.
The advantage that consciousness gives us is most likely something to do with our ability to reason, adapt behaviour, control our at
I think that AI welfare should be an EA priority, and I'm also working on it. I think this post is a good illustration of what that means, and 5% seems reasonable to me. I also appreciate this post, as it captures many of my core motivations. I recently spent several months thinking hard about the most effective philosophy PhD project I could work on, and ended up concluding that it was to work on AI consciousness.
I feel like this post is missing discussion of two reasons to build conscious AI:
1. It may be extremely costly or difficult to avoid (this may not be a good reason, but it seems plausible that this is why we would do it).
2. Digital minds could have morally valuable conscious experiences, and if there are very many of them, this could be extremely good (at least on some, admittedly controversial, ethical theories).
I suspect a lot of the disagreement here is about whether the singularity hypothesis is along the lines of:
1. AI becomes capable enough to do many or most economically useful tasks.
2. AI becomes capable enough to directly manipulate and overpower all humans, regardless of our efforts to resist and steer the future in directions that are good for us.
Looking at this paper now, I'm not convinced that Erdil and Besiroglu offer a good counterargument. Let me try to explain why and see if you disagree.
Their claim is about economic growth. It seems that they are exploring considerations for and against the claim that future AI systems will accelerate economic growth by an order of magnitude or more. But even if this were true, it doesn't seem like it would result in a significant chance of extinction.
The main reason for believing the claim about economic growth doesn't apply to stronger versions of th...
This is a cool idea; I upvoted (from -1 to 0). I'd really like to see more detailed analysis and well-documented sources to answer this question. I couldn't really tell from the post whether the numbers were about right or not.
I worked on a report with others at Longview. We calculated that severely reducing the burden of HIV, malaria, and tuberculosis would cost around $219 billion. Essentially, we adjusted numbers from a variety of reports from e.g. the WHO to estimate these figures (possibly some of the same sources, it's hard to tell). This is...
I'm sorry to hear treatments generally haven't helped in the past.
I sometimes find it useful to think about these things in the following way. It feels like a lot to sacrifice energy on therapy when your energy is already limited. But if it works particularly well, maybe you'll have something like an extra day of energy a week for... well, for your whole life. It might be worth doing even if it takes a lot now, and even if the odds of success are low. (Of course, in some cases the odds are so low that it isn't worth it.)
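To put rough numbers on that intuition, here's the kind of back-of-the-envelope sketch I have in mind; every figure below is a made-up assumption purely for illustration, not an estimate for your situation:

```python
# Back-of-the-envelope expected-value sketch (all numbers are illustrative assumptions)
p_success = 0.10           # assumed chance the treatment works particularly well
extra_days_per_week = 1    # assumed extra "good energy" days per week if it works
years_of_benefit = 30      # assumed duration of the benefit
cost_in_good_days = 60     # assumed up-front cost of trying, in "good day" equivalents

expected_extra_days = p_success * extra_days_per_week * 52 * years_of_benefit
print(f"Expected gain: ~{expected_extra_days:.0f} good days, for ~{cost_in_good_days} spent trying")
# Even at 10% odds, the expected gain (~156 days) can dwarf the up-front cost.
```

The real decision is obviously messier than this, but it shows why trying can make sense even when the odds of success are low.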
I don't know much about the specifics here; my own experience has been with anxiety, depression, and ADHD.
One piece of advice is this: try all of the things that might help with anxiety, depression, and ME/CFS. This was mentioned by @John Salter in another comment, but it doesn't just apply to starting organisations. It is a worthwhile investment to try a range of things that might work on almost any future career path you would pursue. (So long as you don't pay severe costs if they fail).
These lists are okay as a starting point for anxiety and depression.
There are other goals you could adopt:
1. To learn and develop your own thinking. If that's your goal, it doesn't matter as much whether you share it, or what reception it gets.
2. To share important ideas. If you're absorbing a lot of your content from the EA Forum, try writing somewhere else. Other people may not have been exposed to these ideas, so you might be able to do more to improve the average quality.
My personal hot take is that most people should write for a different audience than themselves. My own ideas often feel stale and obvi...
I don't think my comment is likely to be all that useful, but I'm putting it here anyway.
I personally find it difficult to pay attention to podcasts with more than two people. I tried to listen to the first episode for about 30 minutes and this one for about 5 minutes, and I couldn't comfortably follow them while paying attention to other tasks (walking around, cleaning, cooking, etc.).
I think it's likely that more diversity in the space is good though, as many of the most popular podcasts I see on e.g. YouTube tend to have more than two people. I suspe...
More EA success stories:
Pandemics. We have now had the first truly global pandemic in decades, perhaps ever.
Nuclear war. Thanks to recent events, the world is closer than ever to a nuclear catastrophe.
It's not all good news though. Unfortunately, poverty seems to be trending down, there's less lead in the paint, and some say AI could solve most problems despite the risks.
I feel like these actions and attitudes embody many of the virtues of effective altruism. You really, genuinely wanted to help somebody, and you took personally costly actions to do so. I feel great about having people like you in the EA community. My advice is to hold on to the feeling of how important you were to Tlalok's life as you do good effectively with other parts of your time and effort, knowing you are perhaps making a profound difference in many lives.
I really like time shifter, but honestly the following has worked better for me:
Fast for ~16 hours prior to 7am in my new time zone.
Take melatonin, usually around 10pm in my new time zone, and again if I wake up and stop feeling sleepy before around 5am in my new time zone. (I have no idea if this second dose is optimal, but it seems to work.)
I highly recommend getting a good neck pillow, earplugs, and eye mask if you travel often or on long trips (e.g. if you are Australian and go overseas almost anywhere).
Thanks to Chris Watkins for suggesting the fasting routine.
I quite like this post. I think, though, that your conclusion, to use CDT when probabilities aren't affected by your choice and EDT when they are, is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where EDT and CDT make the same recommendations).
I think there's something to pointing out that CDT in fact recommends one-boxing wherever your action ...
+1 on Rory Stewart: as well as being the President of GD, he was the Secretary of State for International Development in the UK, has started and run his own charity (I believe with his wife) in the developing world, has mentioned EA previously, is known to be an enjoyable person to listen to (judging by the success of his podcast), and has just released a book, so he might be more likely than usual to engage with popular media.
Thanks for posting, I have a few quick comments I want to make:
I recently got into a top program in philosophy despite having a clear association with EA (I didn't cite "EA sources" in my writing sample though, only published papers and OUP books). I agree that you should be careful, especially about relying on "EA sources" that are not widely viewed as credible.
Totally agree that prospects are very bad outside of the top 10, and I lean towards "even outside of the top 5, seriously consider other options".
On the other hand, if you really would be okay with fail
My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we only reduce the risk this century to 10%, but next century it will be 20%, the century after that 20%, and so on. So the ongoing risk is 10x higher than in the 2% to 1% scenario. And in general, higher risk lowers the expected value of the future.
In this simple model, these two effects perfectly counterbalance each other for proportional reductions of existenti...
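For what it's worth, here is a minimal numerical sketch of the kind of simple model I have in mind (my own toy version, assuming a constant per-century risk and a fixed value of 1 for each century humanity survives; it isn't taken from the post):

```python
# Toy model: constant per-century extinction risk r, value 1 per century survived.
def ev_future(r):
    """Expected value of all centuries after this one, under constant risk r."""
    # sum over n >= 1 of (1 - r)^n = (1 - r) / r
    return (1 - r) / r

def gain_from_reduction(r_before, r_after):
    """Extra expected value from lowering only this century's risk."""
    # Survival this century becomes (r_before - r_after) more likely, and the
    # prize for surviving is this century's value plus the future, which still
    # faces the old risk rate from next century onward.
    return (r_before - r_after) * (1 + ev_future(r_before))

print(round(gain_from_reduction(0.20, 0.10), 6))  # 20% -> 10% this century, 20% thereafter
print(round(gain_from_reduction(0.02, 0.01), 6))  # 2%  -> 1% this century, 2% thereafter
```

Both reductions halve this century's risk, and both come out worth the same amount (half a century of value): the 20% world's reduction is 10x larger in absolute terms, but its future is worth roughly 10x less, which is the counterbalancing I mean.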
"There are three main branches of decision theory: descriptive decision theory (how real agents make decisions), prescriptive decision theory (how real agents should make decisions), and normative decision theory (how ideal agents should make outcomes)."
This doesn't seem right to me. I would say: an interesting way to divide up decision theory is between descriptive decision theory (how people make decisions) and normative decision theory (how we should make decisions).
The last line of your description, "how ideal agents should make outcomes" seems es...
This is a fantastic initiative! I'm not personally vegan, but I believe the "default" for catering should be vegan (or at least meat- and egg-free), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and it would shift the burden of responsibility onto the people going out of their way to eat meat.
My entry is called Project Apep. It's set in a world where alignment is difficult, but a series of high-profile incidents leads to extremely secure and cautious development of AI. It tugs at the tension between how AI could make the future wonderful and how it could make it terrible.
I'm working on a related distillation project, I'd love to have a chat so we can coordinate our efforts! (riley@wor.land)
I agree that regulation is enormously important, but I'm not sure about the following claim:
"That means that aligning an AGI, while creating lots of value, would not reduce existential risk"
It seems, naively, that an aligned AGI could help us detect and prevent other power-seeking AGIs. It wouldn't completely eliminate the risk, but I feel that even a single aligned AGI would make the world a lot safer against misaligned AGI.
That's great, thank you for clarifying Peter!