All of William_S's Comments + Replies

One tool that I think would be quite useful is some kind of website where you gather:

  1. Situations: descriptions of decisions that people are facing, and their options
  2. Outcomes: the option that they took, and how they felt about it after the fact

Then you could get a description of a decision that someone new is facing and automatically assemble a reference class for them of people who faced the most similar decisions and how they turned out. This could work without any ML, but using language models to cluster similar situations would help (a rough sketch below).

Kind of similar information... (read more)
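A minimal sketch of the matching step described above, using TF-IDF similarity as a crude stand-in for language-model clustering; all situations, outcomes, and names below are invented for illustration:

```python
# Match a new decision description against past situations and
# return the most similar ones plus how they turned out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_situations = [
    "Deciding whether to leave a stable job for an early-stage startup",
    "Choosing between starting a PhD and taking an industry job",
    "Deciding whether to move cities for a new role",
]
outcomes = [
    "Joined the startup; stressful, but glad I did",
    "Took the industry job; no regrets so far",
    "Moved; took about a year to feel settled",
]

vectorizer = TfidfVectorizer()
situation_matrix = vectorizer.fit_transform(past_situations)

def reference_class(new_situation, k=2):
    """Return the k most similar past situations with their outcomes."""
    query = vectorizer.transform([new_situation])
    scores = cosine_similarity(query, situation_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [(past_situations[i], outcomes[i], round(float(scores[i]), 2)) for i in top]

print(reference_class("Should I quit my job to join a startup?"))
```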

2
Ozzie Gooen
2y
Good idea. I think it's difficult to encourage people to contribute a large amount of data to a website like that. Maybe you could scrape forums or something to get information. I imagine that some specific sorts of decisions will be dramatically more tractable to work on than others.

I appreciate the point that they are competing for time (I was only thinking of monopolies over content).

If the reason it isn't used is that users don't "trust that the system will give what they want given a single short description", then part of the research agenda for aligned recommender systems is not just producing systems that are aligned, but producing systems whose users have a greater degree of justified trust that they are aligned (placing more emphasis on the user's experience of interacting with the system). Some of this research could potentially take place with existing classification-based filters.

2
IvanVendrov
5y
Agreed that's an important distinction. I just assumed that if you make an aligned system, it will become trusted by users, but that's not at all obvious.

While fully understanding a user's preferences and values requires more research, it seems like there are simpler things that existing recommender systems could do that would be a win for users, e.g. Facebook having a "turn off inflammatory political news" switch (or a list of 5-10 similar switches), where current knowledge would suffice to train a classification system (a rough sketch follows below).

It could be the case that this is bottlenecked by the incentives of current companies, in that there isn't a good revenue model for recommender systems other ... (read more)
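A rough sketch of the kind of switch-backed filter described above, under the assumption that a simple supervised classifier is good enough; the posts, labels, and function names are invented for illustration:

```python
# Train a toy "inflammatory political news" classifier and use it
# to back a per-user feed switch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "You won't BELIEVE what this senator just did",
    "Local bakery wins regional award",
    "Party X is destroying the country - share if you're angry",
    "New hiking trail opens in the city park this weekend",
]
labels = [1, 0, 1, 0]  # 1 = inflammatory political news (toy labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

def filter_feed(items, switch_on):
    """Drop items the classifier flags when the user's switch is on."""
    if not switch_on:
        return items
    return [item for item in items if classifier.predict([item])[0] == 0]

print(filter_feed(posts, switch_on=True))
```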

My mental model of why Facebook doesn't have "turn off inflammatory political news" and similar switches is that 99% of their users never toggle any such switches, so the feature won't affect any of the metrics they track, so no engineer or product manager has an incentive to add it. Why won't users toggle the switches? Part of it is laziness; but mostly I think users don't trust that the system will faithfully give them what they want based on a single short description like "inflammatory political news" - what if they miss out on an important national... (read more)

If we want to maximize flow-through effects to AI Alignment, we might want to deliberately steer the approach adopted for aligned recommender systems towards one that is also designed to scale to more difficult problems/more advanced AI systems (like Iterated Amplification). Having an idea become standard in the world of recommender systems could significantly increase the amount of non-safety researcher effort put towards that idea. Solving the problem a bit earlier with a less scalable approach could close off this opportunity.

I wonder how much of the interview/work stuff is duplicated between positions - if there's a lot of overlap, then maybe it would be useful for someone to create the EA equivalent of TripleByte: have a third-party organization run initial interviews/work projects to evaluate quality, then pass candidates along to the most relevant EA jobs.

I agree with this. It seems like the world where Moral Circle Expansion is useful is the world where:

1. The creators of AI are philosophically sophisticated (or persuadable) enough to expand their moral circle if they are exposed to the right arguments or work is put into persuading them.

2. They are not philosophically sophisticated enough to realize the arguments for expanding the moral circle on their own (seems plausible).

3. They are not philosophically sophisticated enough to realize that they might want to consider a distribution of arguments that they could h... (read more)

I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

Why do you think this is the case? Do you think there is an alternative reflection process (either implemented by an AI, by a human society, or a combination of both) that could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like?

If we go through some kind of reflection process to deter... (read more)

I think that there's an inevitable tradeoff between wanting a reflection process to have certain properties and worries about this violating goal preservation for at least some people. This blogpost is not about MCE directly, but if you think of "BAAN thought experiment" as "we do moral reflection and the outcome is such a wide circle that most people think it is extremely counterintuitive" then the reasoning in large parts of the blogpost should apply perfectly to the discussion here.

That is not to say that trying to fine tune reflect... (read more)

I've talked to Wyatt and David; afterwards, I am more optimistic that they'll think about downside risks and be responsive to feedback on their plans. I wasn't convinced that the plan laid out here is a useful direction, but we didn't dig into it in enough depth for me to be certain.

Seems like the main argument here is: "The general public will eventually clue in to the stakes around ASI and AI safety and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns."

One concern about this is that "getting in early in the debate" might move up the time that the debate happens or becomes serious, which could be harmful.

An alternative approach would be to simply build latent capaci... (read more)

3
WyattTessari
6y
Indeed. Getting in early in the debate also means taking on extra responsibility when it comes to framing and being able to respond to critics. It is not something we take lightly.

Our current strategy is to start with technological unemployment and experiment, build capacity & network with that first before taking on ASI, similar to your suggestion. This also fits with the election cycle here, as there is a provincial election in Ontario in 2018 (which has more jurisdiction over labour policies) before the federal one in 2019 (where foreign policy/global governance is addressed).

The challenge remains that no one knows when the issue of ASI will become mainstream. There are rumours of an "Inconvenient Truth"-type documentary on ASI coming out soon, and with Elon Musk regularly making news and the plethora of books & TED talks being produced, no one has the time to wait for a perfect message, team or strategy. Some messiness will have to be tolerated (as is always the case in politics).

Thanks for the Nicky Case links.

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals that are intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. http://web.stanford.edu/~dbroock/published%20pape... (read more)

1
aggg
7y
I've been thinking about this as well lately, specifically in terms of reducing hatred and prejudice (racism, sexism, etc.).

For example, this is anecdotal, but one (black) man named Daryl Davis says that he has gotten more than 200 KKK members to disavow the group by simply approaching them and befriending them. Over time they would realize that their views were unfounded, and gave up their KKK membership of their own volition. This is an interview with Davis: http://www.npr.org/2017/08/20/544861933/how-one-man-convinced-200-ku-klux-klan-members-to-give-up-their-robes and I think there is also a documentary about him.

This is a great Vox article about a study that discusses ways to reduce people's biases: https://www.vox.com/identities/2016/11/15/13595508/racism-trump-research-study. The article title is about reducing racism, though the study discussed is about views on transgender people. It suggests that just a 10-minute, open conversation can significantly reduce people's biases, and that these changes persist.

And lastly, another anecdotal story: Derek Black, the godson of David Duke and the son of another very prominent figure in the alt-right, ended up leaving the alt-right after a group of diverse college classmates befriended him, and he slowly abandoned his previous views over the course of months.

While two of these links are to anecdotal stories, I think they are important in showing that even those with really extreme prejudice (KKK members and a young alt-right leader!) can let go of their prejudices when approached in the right way. It definitely seems like an intervention that would require lots of grassroots, individual action, and I suspect it could be very hard to measure the benefits of it - the number of lives lost to this kind of prejudice and polarization is pretty low (at least in the US), and the other benefits that would arise are hard to measure. If someone else has good estimates on how impactful this would be, I'd love to hear them! Reg
0
Geoffrey Miller
7y
Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum: https://heterodoxacademy.org/resources/viewpoint-diversity-experience/
2
rhys_lindmark
7y
Nice link! I think there's worthwhile research to be done here to get a more textured ITN.

On Impact: Here's a small example of x-risk (nuclear threat coming from inside the White House): https://www.vanityfair.com/news/2017/07/department-of-energy-risks-michael-lewis.

On Neglectedness: Thus far it seems highly neglected, at least at a system level. hifromtheotherside.com is one of the only projects I know in the space (but the founder is not contributing much time to it).

On Tractability: I have no clue. Many of these "bottom up"/individual-level solution spaces seem difficult and organic (though we would pattern match from the spread of the EA movement).

1. There's a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I'm tempted by pushing an EA mindset "outcome-izing/RCT-ing" the efforts in the space. So even if it doesn't score highly on Neglectedness, we could attempt to move the solutions towards more cost-effective/consequentialist solutions.
2. This is highly related to the timewellspent.io movement that Tristan Harris (who was at EA Global) is pushing.
3. I feel like we need to differentiate between the "political level" and the "community level".
4. I'm tempted to think about this from the "communities connect with communities" perspective, i.e. the EA community is the "starting node/community" and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by the n-dimensional space seen here: http://blog.ncase.me/the-other-side/).
5. Another version of this could be "scale the CFAR community".
6. I think this could be related to Land Use Reform (https://80000hours.org/problem-profiles/land-use-reform/) and how we construct empathetic communities with a variety of people. (Again, see Nicky Case: http://ncase.me/polygons/)

I'm not saying these mean we shouldn't do geoengineering, that they can't be solved, or that they will happen by default - just that these are additional risks (possibly unlikely but high-impact) that you ought to include in your assessment and that we ought to make sure we avoid.

Re coordination problems not being bad: it's true that they might work out, but there's significant tail risk. Just imagine that, say, the US unilaterally decides to do geoengineering, but it screws up food production and the economy in China. This probably increases the chances of nuc... (read more)

0
turchin
8y
Scientific studies and preparation are probably the longest part of geoengineering; they could and should be done in advance, and doing so should not provoke war. If a real necessity for geoengineering appears, all the needed technologies will be ready.
0
turchin
8y
OK, I will add these as risks from geoengineering.

Extra risks from geoengineering:

1. Cause additional climate problems (i.e. it doesn't just uniformly cool the planet; I recall seeing a simulation somewhere in which climate change + geoengineering did not add up to no change, but instead significantly changed rainfall patterns).

2. Global coordination problems (who decides how much geoengineering to do, compensation for downsides, etc.). This could cause a significant increase in international tensions, plausibly war.

Climate Wars by Gwynne Dyer has some specific negative scenarios (for climate change + geoengineering) https... (read more)

0
turchin
8y
But if we stop emissions now, global warming will probably continue for around 1000 years, as I read somewhere, and could even jump because the cooling effect of soot will stop. Global coordination problems also exist, but may not be so annoying: in the first case punishment comes for non-cooperation, and in the second for actions, and actions always seem to be more punishable.

It might be useful to suggest Technology for Good as, e.g., a place where companies with that focus could send job postings, and have them seen by people who are interested in working on such projects.

This is probably not answerable until you've made some significant progress in your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology for good projects lines up with the skills required for those problems (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).

0
Michael_PJ
8y
What skills are needed for the problems is absolutely something we want to find out. I don't know whether we can really effectively survey the pool of available talent, but we will hopefully be able to help individuals make decisions by telling them that, e.g., machine learning skills are particularly likely to be applicable to high-impact solutions.

Another way of thinking about this is that in an overdetermined environment there would be a point at which the impact of EA movement building becomes "causing a person to join EA sooner" rather than "adding another person to EA" (which is the current basis for evaluating EA movement-building impact), and the former is much less valuable.
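A toy comparison of the two cases (all numbers invented, purely to make the argument concrete):

```python
# Value of movement building: adding a counterfactual new member
# vs. merely causing someone to join sooner than they otherwise would.
annual_impact = 1.0       # impact units per member-year (arbitrary)
engaged_years = 40        # years of engagement for a genuinely new member
years_accelerated = 2     # how much sooner an inevitable joiner joins

value_new_member = engaged_years * annual_impact        # 40.0
value_acceleration = years_accelerated * annual_impact  # 2.0
print(value_acceleration / value_new_member)            # 0.05, i.e. ~20x less
```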

1
tomstocker
9y
From my point of view, I can't tell that EA won't be a distraction for already-altruistic and effective people in all cases, especially now, as there are more people than direct-enough projects.

What sort of feedback signals would we get if EA were currently falling into a meta-trap? What is the current state of those signals?

1
tomstocker
9y
Lots of excitement, little in the way of new or surprising successes.
4
Peter Wildeford
9y
Two ideas might be:

1. Amount of money going to meta-stuff vs. amount of money going to object-level stuff. What is the total budget of CEA + GWWC + 80K + Charity Science + TLYCS + ACE + ...? What is the total amount of money being moved to GiveWell top charities + MIRI + ACE top charities + ...?

2. Average meta-level of meta-stuff. Is a typical project one level above (e.g., Charity Science fundraising for GiveWell top charities) or 2+ levels above (e.g., CEA funding a team to assess the EA movement for gaps in meta-orgs)? What is the level weighted by total funding per project?

In response to this article, I followed the advice in 1) and thought about where I'd donate in the animal suffering cause area, ending up donating $20 to New Harvest.

Idea: allow people to sign up to a list. Then, every (week/2 weeks/month), randomly pair up all the people on the list and suggest they have a short Skype conversation with the person they are paired with.
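A minimal sketch of the pairing step (names hypothetical); one person simply sits out a round when the list has an odd length:

```python
# Shuffle the signup list and pair adjacent entries.
import random

def make_pairs(signups):
    pool = list(signups)
    random.shuffle(pool)
    pairs = [(pool[i], pool[i + 1]) for i in range(0, len(pool) - 1, 2)]
    sits_out = pool[-1] if len(pool) % 2 else None
    return pairs, sits_out

pairs, sits_out = make_pairs(["Alice", "Bob", "Carol", "Dave", "Eve"])
print(pairs, sits_out)
```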

80k now has career profiles on Software Engineering, Data Science, and doing a Computer Science PhD. I'm in a position where I could plausibly pursue any of these. What is the ratio of effective altruists currently pursuing each of these options, and where do you think adding an additional EA is of most value? (Having this information on the career profiles might be a nice touch.)

0
Benjamin_Todd
9y
Software engineers are most common by some margin, I think. Then data science and compsci PhD; not sure which is more common. We're not aiming the profiles just at EAs, so we aren't including this info right now. I hope to have blog posts about EA skill shortages in the future, though. If you especially care about AI, then I reckon lean towards the compsci PhD.
1
RyanCarey
9y
There are more EA software engineers than data scientists. Seems pretty person-dependent, though. Do you like math? Would you prefer research or industry? If you've done an honours or masters already, that might give you an idea of whether you'd like a PhD. Which skills do you lack in order to be able to work in a CS/data science area like machine learning?

Are there any areas of the current software industry that developing expertise in might be useful to MIRI's research agenda in the future?

I wonder if delaying donations might play a role as a crude comparison of room for more funding between different EA organizations, or reflect a desire to keep all current EA organizations afloat. A donor who wants to support EA organizations but is uncertain about which provides the most value might choose the heuristic "donate to the EA organization that is farthest from its fundraising target at the end of its fundraiser". If this is the case, providing better information for comparing EA organizations might help. Or, an "EA Meta-Organizati... (read more)

Would it work to run shorter fundraisers? If it's the case that most donation money is tied up in this dynamic, then running a shorter fundraiser wouldn't significantly reduce the amount of money raised (of course, that might not be true)

Maybe price in the cost of staff time spent on the fundraiser - that is, if everyone donates immediately, it takes $X to fill the fundraiser, but if everyone donates at the end, it takes $X + $Y, where $Y is the cost of the additional staff time spent on the fundraiser.

I wonder if there's a large amount of impact to be had in people outside of the tail trying to enhance the effectiveness of people in the tail (this might look like being someone's personal assistant or sidekick, introducing someone in the tail to someone cool outside of the EA movement, being a solid employee for someone who founds an EA startup, etc.). Being able to improve the impact of someone in the tail (even if you can't quantify what you accomplished) might avert the social comparison aspect, as one would feel able to at least take partial credit for the accomplishments of the EA superstars.

0
[anonymous]
9y
If this was a solution to some of the issues which the OP raised, it would only be a solution for a small number of us. All the same, I think you're right that having 'sidekick' roles could be very valuable! It would be worth checking out this discussion: http://effective-altruism.com/ea/dl/i_am_samwise_link/

One approach to this could be tying your self-esteem to something other than your personal impact. You might try setting your goal to "be an effective altruist" or "be a member of the effective altruist tribe". There are reasonable and achievable criteria (e.g. the GWWC pledge) for this, and the performance of people on the tail in no way affects your ability to meet these criteria. And, while trying to improve one's own impact is a thing that effective altruists do, it's not necessary to do or to achieve any specific criteria of success t... (read more)

Maybe the status issues in the "lottery ticket" fields could be partially alleviated by having a formal mechanism for redistributing credit for success according to the ex-ante probabilities. For the malaria vaccine example, you could create something like impact certificates covering the output of all EAs working in the area, and distribute them according to an ex-ante estimate of each researcher's usefulness, or some other agreed-on distribution. In that case, you would end up with a certificate saying you own x% of the discovery of the malaria vaccine, which would be pretty cool to have (and valuable to have, if the impact certificate market takes off).
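A toy illustration of the distribution step (researchers and numbers invented):

```python
# Split certificates for a shared research outcome according to
# ex-ante estimates of each researcher's usefulness.
ex_ante_usefulness = {"alice": 0.5, "bob": 0.3, "carol": 0.2}
total = sum(ex_ante_usefulness.values())
certificate_shares = {name: u / total for name, u in ex_ante_usefulness.items()}
# If the vaccine succeeds, alice holds a certificate for 50% of the
# discovery, regardless of whose experiment happened to pan out.
print(certificate_shares)
```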

If anyone is ever at a point where they are significantly discouraged by thoughts along these lines (as I've been at times), there's an Effective Altruist self-help group where you can find other EAs to talk to about how you're feeling (and it really does help!). The group is hidden, but if you message me, I can point you in the right direction (or you can find information about it on the sidebar of the Effective Altruist facebook group).

I haven't heard of anything like this. To most EAs, it might feel less important than identifying/supporting top charities. It might also require some expertise both in the area of the charity and in EA to actually provide value. It might be a good fit for someone with, say, a commitment to an existing organization but an interest in EA.

Another application of the Effectiveness-alone strategy might be to create an EA organization aiming to improve the effectiveness of charities by applying EA ideas (as opposed to evaluating charities to find the best ones).

0
Stefan_Schubert
9y
That is a very good idea! I suppose someone must have tried that?

When considering working for a startup/company with significant positive externalities, would it be far off to estimate your share of impact as (estimate of the total impact of the company vs. the world where it did not exist) * (your equity share of the company)? (A toy calculation follows below.)

This seems easier to estimate than your impact on company as a whole, and matches up with something like the impact certificate model (equity share seems like the best estimate we would have of what impact certificate division might look like). It's also possible that there are distortions in allocation of mo... (read more)
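A toy back-of-the-envelope under the formula above (all numbers invented):

```python
# Share of a startup's positive externalities attributed via equity.
company_counterfactual_impact = 10_000_000  # vs. a world where it didn't exist ($-equivalent)
equity_share = 0.005                        # e.g. an early employee holding 0.5%
estimated_personal_share = company_counterfactual_impact * equity_share
print(estimated_personal_share)             # 50000.0
```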

1
Owen Cotton-Barratt
9y
I think this is a good starting point for estimating share of externalities in a start-up (particularly the expected externalities that will be caused if the start-up is very successful). I don't think it will be all that accurate, for the kind of reasons you mention, but it has the major advantages that it is easy to measure and somewhat robust. I expect that replaceability means that it tends to be an overestimate, but typically by less than an order of magnitude. A warning, though: replaceability can operate on the level of startups as well as the level of jobs. You should consider that if your start-up weren't very successful in the niche it's going for, then someone else might be (even if they're a bit less good). This will tend to make the externalities of the whole company smaller than they first appear.

For people who have worked in the technology sector, what form has the most useful learning come in (e.g. learning from school, learning while working on a problem independently, learning while collaborating with people, learning from reading previous work/existing codebases)?

1
Ben Kuhn
9y
I learned a ton of useful statistics and machine learning by reading textbooks. So far that's been my best source.
2
Peter Wildeford
9y
When first starting out: learning while collaborating with people. When going from beginner to intermediate: learning while working on a problem independently. When going from intermediate to expert: learning from reading previous work/existing codebases

It seems like the way to make the most money from working in tech jobs would be to identify startups/companies that are likely to do well in the future, work for them, and make money from the equity you get. For example, Dustin Moskovitz suggests that you can get a better return from trying to be employee #100 at the next Facebook or Dropbox than by being an entrepreneur. Any thoughts on how to identify startups/companies likely to do well/be valuable to work for, or at least rule out ones likely to fail? (It seems like the problem of doing this fr... (read more)

1
Ben_West
9y
1. It strikes me as implausible that the best way to make money in technology is to try to make the most money right away. So I would go a step further back than you and look for good learning opportunities rather than ones which will make a lot of money.

2. To this end, I think a company's degree of sponsorship is often underrated when people are making decisions. I wrote more about this here.

3. Startups are an extremely mixed bag when it comes to learning opportunities. You will be given a lot more responsibility than you would in a big company, but you also receive less mentoring and you will have to do a lot more grunt work that would be outsourced to lower skilled people in a big company. My guess is that the average big company is a better employer than the average startup for people who are earning to give.

4. To the extent you do want to join a profitable startup, I would guess that it's very hard to outperform professional investors. So if they've recently raised funding it's probably best to just assume that valuation is correct, but it could be tricky if they haven't raised recently. If they haven't raised recently because they've got a good cash flow, you could look at EBITDA or revenue multiples; if they haven't raised recently because they can't find investors, then that's probably a red flag.
1
Ben Kuhn
9y
Employee #100 seems a bit implausible. If you joined Dropbox as employee #100 it would be in early 2012, at which point they had just gotten a $4B valuation. It's only gone up 2.5x since then--a mere 35% per year--so you probably wouldn't have done better than a founder over the equivalent timespan. Especially once you take into account the many worse options that were in Dropbox's reference class in 2012, like Fab. That said, I agree that trying to forecast startups is probably a useful exercise--and maybe even possible to do historically, if you're interested in ones as high-profile as Dropbox. It's an open question to me how efficient the market is here (i.e., are companies with semi-obvious predictors of success likely to offer less equity).

What skills/experience do you think will be useful to have in 3-5 years, either in general or for EA projects?

2
RyanCarey
9y
Lots of different skills for lots of different careers: In general, as you advance your career, management and sales skills are fairly useful and transferable. Being experienced and expert in any domain is useful. If you want to do EA research, then academic skills are handy. In tech and research, programming looks useful. If you care about tech and know some maths, then machine learning looks like a good and growing area. That's just off the top of my head.

I also have had negative experiences with career-search stuff (more around making decisions). My suggestion, which I'm also going to try, is to find someone who can support you through the career-search process: someone you can talk over decisions with, who can look over applications, and who can maybe talk you through the time you spend feeling useless before applying. This could also help keep you from settling for an inferior job, if you have to justify it to someone else.

I would also suggest, from experience, avoiding committing to a job at a time when ... (read more)

I wonder what you would get if you offered a cash prize to whoever wrote the "best" criticism of EA, according to some criteria such as the opinion of a panel of specific EAs, or online voting on a forum. Obviously, this has a large potential for selection effects, but it might produce something interesting (either in the winner, or in other submissions that don't get selected because they are too good).

0
RyanCarey
9y
Might be better to put up a cash prize for a suggested improvement rather than a critique, then - but maybe that's me being weak-spirited.

I would like to note (although I don't quite know what to do with this information) that the proposed method of gathering feedback leaves out at least 3 billion people who don't have internet access. In practice, it's probably also limited to gathering information from countries/in languages with at least some EA presence already (and mainly English-speaking ones). Now, from an "optimize spread of EA ideas" perspective, it might be reasonable to focus on wealthier countries to reach people with more leverage (i.e. higher expected earnings), but there are reaso... (read more)

Object-level suggestion for collecting diverse opinions (for a specific person to look through, to make it easier to see trends): have something like a Google form where people can report the characteristics of an attempt to bring up EA ideas to a person or audience, along with comments on how the ideas were received. (This thread is a Schelling point now, but won't remain so in the future.)

When considering a controversial political issue, an EA should also think about whether there are positions to take that differ from those typically presented in the mainstream media. There might be alternatives that EA reasoning opens up that people traditionally avoid because they, for example, stick to deontological reasoning and believe that either an act is right or it is wrong in all cases, and that these restrictions should be codified into law.

For the object level example raised in the article, the traditional framing is "abortion should be le... (read more)

I think if you want people to think about the meta-level, you would be better off with a post that says "suppose you have an argument for abortion" or "suppose you believe this simple argument X for abortion is correct" (where X is obviously a strawman, and raised as a hypothetical), and asks "what ought you do based on assuming this belief is true". There may be a less controversial topic to use in this case.

If you want to start an object-level discussion on abortion (which, if you believe this argument is true, it seems you ought to), ... (read more)

While I don't think I would actually write a whole post for this, I might have a couple quick ideas to throw in a comments section. I'd suggest explicitly asking for comments and half-formed ideas in the summary post, and see if it produces anything interesting.

As a consideration in favour: there may be behaviours in the founder-VC relationship that negatively impact founders (this comes up in http://paulgraham.com/fr.html), such as trying to hold off on committing as long as possible. EA VCs could try to bypass these to improve the odds of startup success.

As a consideration against: the Halo Effect might cloud EA investors' judgement of the odds of success for EA entrepreneurs.

Something in developing-world entrepreneurship that puts you in a good position to spot opportunities for, or carry out, other developing-world entrepreneurship.

If this turns out to be something people find useful, it might also be useful to have people who watch the wiki and provide feedback/advice on the proposed study designs, or who can help people who are less familiar with study design and statistics to produce something useful. This provides an additional service along with the preregistration, so it isn't just an extra onerous task. (I'd be willing to do this if it seems useful.)

I'm somewhat doubtful that this experiment registry will attract a lot of use, but +1 for setting it up to try it out.

I know someone who would be interested in looking through a list of organizations like this right now (hoping to find places to work).

A couple examples I've run across: DataWind (http://en.wikipedia.org/wiki/DataWind), which is now at a more mature stage. Went to a talk by one of the founders recently. They made a really cheap tablet and internet services that work over 2G, which opens up the market of large sections of India currently without internet access. I think they could end up being quite successful.

An early-stage example is EyeCheck (http://www.eyechecksolutions.com/), started by a couple of engineers out of undergrad. They're developing a tool to improve diagnosis of vision pro... (read more)