All of Arepo's Comments + Replies

Impact markets may incentivize predictably net-negative projects

Why? The less scrupulous one finds Anthropic to be, the less weight a claim that Wuhan virologists are 'not much less scrupulous' carries.

Impact markets may incentivize predictably net-negative projects

Strong disagree. A bioweapons lab working in secret on gain-of-function research for a somewhat belligerent despotic government, which denies everything after an accidental release, is nowhere near any model I have of 'scrupulous altruism'.

Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.

2 · Linch · 5d
I don't understand your sentence/reasoning here. Naively this should strengthen ofer's claim, not weaken it.
4 · ofer · 10d
I think that it's more likely to be the result of an effort to mitigate potential harm from future pandemics. One piece of evidence that supports this is the grant proposal, which was rejected by DARPA, that is described in this New Yorker article: https://www.newyorker.com/science/elements/the-mysterious-case-of-the-covid-19-lab-leak-theory. The grant proposal was co-submitted by the president of the EcoHealth Alliance, a non-profit which is "dedicated to mitigating the emergence of infectious diseases", according to the article.
Impact markets may incentivize predictably net-negative projects

I'm talking about the unilateralist's curse with respect to actions intended to be altruistic, not the uncontroversial claim that people sometimes do bad things. I find it hard to believe that any version of the lab leak theory involved all the main actors scrupulously doing what they thought was best for the world.

I think we should be careful with arguments that such and such existential risk factor is entirely hypothetical.

I think we should be careful with arguments that existential risk discussions require lower epistemic standards. That could backf... (read more)

4 · ofer · 10d
I don't find it hard to believe at all. Conditional on a lab leak, I'm pretty confident no one involved was consciously thinking: "if we do this experiment it can end up causing a horrible pandemic, but on the other hand we can get a lot of citations." Dangerous experiments in virology are probably usually done in a way that involves a substantial amount of effort to prevent accidental harm. It's not obvious that virologists who are working on dangerous experiments tend to behave much less scrupulously than people in EA who are working for Anthropic, for example. (I'm not making a claim here that such virologists or such people in EA are doing net-negative things.)
Impact markets may incentivize predictably net-negative projects

Is there any real-world evidence of the unilateralist's curse being realised? My sense is that this sort of reasoning has to date been almost entirely hypothetical, and has done a lot to stifle innovation and exploration in the EA space.

7 · ofer · 10d
If COVID-19 is a result of a lab leak that occurred while conducting a certain type of experiment (for the purpose of preventing future pandemics), perhaps many people considered conducting/funding such experiments and almost all of them decided not to. I think we should be careful with arguments that such and such existential risk factor (https://forum.effectivealtruism.org/topics/existential-risk-factor) is entirely hypothetical. Causal chains that end in an existential catastrophe are entirely hypothetical and our goal is to keep them that way.
6 · Linch · 11d
In 2015, when I was pretty new to EA, I talked to a billionaire founder of a company I worked at and tried to pitch them on it. They seemed sympathetic but empirically it's been 7 years and they haven't really done any EA donations or engaged much with the movement. I wouldn't be surprised if my actions made it at least a bit harder for them to be convinced of EA stuff in the future. In 2022, I probably wouldn't do the same thing again, and if I did, I'd almost certainly try to coordinate a bunch more with the relevant professionals first. Certainly the current generation of younger highly engaged EAs seemed more deferential (for better or worse) and similar actions wouldn't be in the Overton window.
Does the Forum Prize lead people to write more posts?

Another vote against this being a wise metric, here. Anecdotally, while writing my last post when (I thought) the prize was still running, I felt both a) incentivised to ensure the quality was as high as I could make it and b) less likely to actually post as a consequence (writing higher quality takes longer).

And that matches what I'd like to see on the forum - a better signal-to-noise ratio, which can be achieved both by increasing the average quality of posts and by decreasing the number of marginal posts.

How to dissolve moral cluelessness about donating mosquito nets

Unsurprisingly I disagree with many of the estimates, but I very much like this approach. For any analysis of any action, one can divide the premises arbitrarily many times. You stop when you're comfortable that the granularity of the priors you're forming is fine enough that further refinement isn't worth the opportunity cost of the extra research - which is how any of us can literally take any action.

In the case of 'cluelessness', it honestly seems better framed as 'laziness' to me. There's no principled reason why we can't throw a bunch of resources at refining and parameterising... (read more)

Should large EA nonprofits consider splitting?

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is.

This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.

I'm reminded of Scott Siskind on prediction markets - while there mi... (read more)

Revisiting the karma system

Fwiw I didn't downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I'm also finding it hard to parse some of what you say. 

A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.

This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I'll give what I believe is a fairly clear example from my own recent investigation... (read more)

2 · Charles He · 1mo
You are right. My mindset writing this comment was bad, but I remember thinking the reply seemed not specific and general, and I reacted harshly; this was unnecessary and wrong.

I do not know the details of the orthogonality thesis and can't speak to this very specific claim (but this is not at all refuting you, I am just literally clueless and can't comment on something I don't understand).

To both say the truth and be agreeable, it's clear that the beliefs in AI safety come from EAs following the opinions of a group of experts. This just comes from people's outright statements. In reality, those experts are not the majority of AI people, and it's unclear exactly how EA would update or change its mind.

Furthermore, I see things like the below, that, without further context, could be wild violations of "epistemic norms", or just common sense. For background, I believe this person is interviewing or speaking to researchers in AI, some of whom are world experts. Below is how they seem to represent their processes and mindset when communicating with these experts.

The person who wrote the above is concerned about image, PR and things like initial conditions, and this is entirely justified, reasonable and prudent for any EA intervention or belief. Also, the person who wrote the above is conscientious, intellectually modest, and highly thoughtful, altruistic and principled.

However, at the same time, at least from their writing above, their entire attitude seems to be based on conversion—yet their conversations are not with students or laypeople like important public figures, but the actual experts in AI. So if you're speaking with the experts in AI and adopting this attitude that they are preconverts, and you are focused on working around their beliefs, it seems like, in some reads of this, you are cutting off criticism and outside thought. In this ungenerous view, it's a further red flag that you have to be so careful—that's an issue in itse
Revisiting the karma system

For those who enjoy irony: the upvotes on this post pushed me over the threshold not only for 6-karma strong upvotes, but for my 'single' upvotes now being double-weighted.

3 · Guy Raveh · 1mo
While most of my comments here "magically" went from score ≥3 to a negative score (so, collapsed by default) over the last few hours, presumably due to someone strongly downvoting them. Including this one (https://forum.effectivealtruism.org/posts/SApmQrKdvgccmH2yF/revisiting-the-karma-system?commentId=Tf7WYcgkTBWtqNLgv), which I find somewhat puzzling/worrying. I know this comment sounds petty, but I do think it exemplifies the problem. Edit: Charles below made this seem more reasonable.
Revisiting the karma system

Often authors mention the issue, but don't offer any specific instances of groupthink, or how their solution solves it, even though it seems easy to do—they wrote up a whole idea motivated by it. 

 

You've seriously loaded the terms of engagement here. Any given belief shared widely among EAs and not among intelligent people in general is a candidate for potential groupthink, but qua them being shared EA beliefs, if I just listed a few of them I would expect you and most other forum users to consider them not groupthink - because things we believe ... (read more)

2 · Charles He · 1mo
A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.

The most clear defect would be something like "We are funding personal projects of the first people who joined EA and these haven't gotten a review because all his friends shout down criticism on the forum and the forum self-selects for devotees. Last week the director has been posting pictures of Bentleys on Instagram with his charity's logo".

The most marginal defects would be "meta" and their consequences abstract. A pretty tenuous but still acceptable one (I think?) is "we are only getting very intelligent people with high conscientiousness and this isn't adequate".

Right, you say this…but seem a little shy to list the downsides. Also, it seems like you are close to implicating literally any belief? As we both know, the probability of groupthink isn't zero. I mentioned I can think of up to 15 instances, and gave one example.

My current read is that this seems a little ideological to me and relies on sharp views of the world. I'm worried that what you will end up saying is not only that EAs must examine themselves with useful and sharp criticism that covers a wide range of issues, but that all mechanical ways where prior beliefs are maintained must be removed, even without any specific or likely issue?

One pragmatic and key issue is that you might have highly divergent and low valuations of the benefits of these systems. For example, there is a general sentiment worrying about a kind of EA "Eternal September", and your vision of karma reforms is exactly the opposite of most solutions to this (and, well, has no real chance of taking place).

Another issue is systemic effects. Karma and voting are unlikely to be the root issue of any defects in EA (and IMO not even close). However, we might think they affect "systems of discussion" in pernicious ways, as you mention. Yet, sin
Revisiting the karma system

As a datum I rarely look beyond the front page posts, and tbh the majority of my engagement probably comes from the EA forum digest recommendations, which I imagine are basically a curated version of the same.

Revisiting the karma system

'Personally I'd rather want the difference to be bigger, since I find it much more informative what the best-informed users think.'

This seems very strange to me. I accept that there's some correlation between upvoted posters and epistemic rigour, but there's a huge amount of noise, both in reasons for upvotes and in subject areas. EA includes a huge diversity of subject areas, each requiring specialist knowledge. If I want to learn improv, I don't go to a Fields Medal winner or a Pulitzer Prize-winning environmental journalist, so why should the equivalent be true on here?

I think that a fairly large fraction of posts is of a generalist nature. Also, my guess is that people with a large voting power usually don't vote on topics they don't know (though no doubt there are exceptions).

I'd welcome topic-specific karma in principle, but I'm unsure how hard it is to implement/how much of a priority it is. And whether karma is topic-specific or not, I think that large differences in voting power increase accuracy and quality.

Some unfun lessons I learned as a junior grantmaker

That makes sense, though I don't think it's as clear a dividing line as you make out. If you're submitting a research project, for example, you could spend a lot of time thinking about parameters vs talking about the general thing you want to research, and the former could make the project sound significantly better - but also run the risk that you get rejected because those aren't the parameters the grant manager is interested in.

Some unfun lessons I learned as a junior grantmaker

'It’s rarely worth your time to give detailed feedback'

This seems at odds with the EA Funds' philosophy that you should make a quick and dirty application that should be 'the start of a conversation'.

9 · Davidmanheim · 1mo
Two things. First, there is a big difference between "detailed feedback" and "conversation" - if something is worth funding, working out how to improve it is worth time and effort, and delaying until it's perfect is a bad idea. Whereas if it's fundamentally off base, it isn't worth more feedback than "In general terms, this is why" - and if it's a marginal grant, making it 10% better is worth 10% of a small amount. Second, different grantmakers work differently, and EA Funds often engages more on details to help build the idea and improve implementation. But junior grantmakers often aren't qualified to do so!
Sort forum posts by: Occlumency (Old & Upvoted)

I think you're mixing up updates and operations. If I understand you right, you're saying each user on the forum can get promoted at most 16 times, so at most each strong upvote gets incremented 16 times.

But you have to count the operations of the algorithm that does that. My naive effort is something like this: Each time a user's rank updates (1 operation), you have to find and update all the posts and users that received their strong upvotes (~N operations where N is either their number of strong upvotes, or their number of votes depending on... (read more)

Sort forum posts by: Occlumency (Old & Upvoted)

To be clear, I'm looking at the computational costs, not the algorithmic complexity, which I agree isn't huge.

Where are you getting 2x from for computations? If User A has cast strong upvotes to up to N different people, each of whom has cast strong upvotes to up to N different people, and so on up to depth D, then naively a promotion for A seems to have O(N^D) operations, as opposed to O(1) for the current algorithm. (Though maybe D is a function of N?)

In practice, as Charles says, big O is probably giving a very pessimistic view here, since there's a large gap be... (read more)

9 · Linch · 1mo
I retract the <2x claim. I think it's still basically correct, but I can't prove it so there may well be edge cases I'm missing. My new claim is <=16x. We currently have a total of U upvotes. The maximal karma threshold is 16 karma per strong upvote at 500k karma (https://forum.effectivealtruism.org/posts/gNHFRWyo58cTQ8pe8/ea-forum-2-0-initial-announcement-1#A_reworked_karma_system), and there are no fractional karma. So the "worst case" scenario is if all current users are at the lowest threshold (<10 karma) and you top out at making all users >500k karma, with 16 loops across all upvotes. This involves ~16U updates, which is bounded at 16x. If you do all the changes at once you might crash a server, but presumably it's not very hard to queue and amortize.
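As a rough sketch of how the queue-and-amortize idea above could look in code - not the Forum's actual implementation; the threshold schedule, data structures and names are invented for illustration, with only the 16-point weight at 500k karma taken from the linked announcement:

```python
from collections import deque

# Illustrative thresholds only (karma floor -> strong-upvote weight); the real
# schedule differs, apart from the 16-point weight at 500k karma cited above.
THRESHOLDS = [(0, 2), (10, 3), (100, 4), (1_000, 6), (10_000, 8), (500_000, 16)]


def vote_weight(karma: int) -> int:
    """Strong-upvote weight for a user with the given karma."""
    weight = THRESHOLDS[0][1]
    for floor, w in THRESHOLDS:
        if karma >= floor:
            weight = w
    return weight


def reweight_promotions(promoted, votes_by_voter, karma):
    """Drain a queue of promoted voters, re-weighting their past strong upvotes.

    votes_by_voter maps a voter id to a list of mutable [target_id, applied_weight]
    pairs; karma maps every user id to their current karma. Each batch touches
    only one voter's votes (the 'amortize' part); recipients who cross a
    threshold as a side effect are appended to the queue, so the loop stops
    once no further promotions occur.
    """
    queue = deque(promoted)
    while queue:
        voter = queue.popleft()
        new_w = vote_weight(karma[voter])
        for vote in votes_by_voter.get(voter, []):
            target, applied = vote
            before = vote_weight(karma[target])
            karma[target] += new_w - applied   # apply only the weight difference
            vote[1] = new_w                    # record the weight now in effect
            if vote_weight(karma[target]) > before:
                queue.append(target)           # the recipient was promoted too
    return karma
```

Because the karma gaps between thresholds are large relative to the per-vote weight deltas, recipients rarely cross a threshold themselves, so in practice the loop should stop well short of the ~16U worst case.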
EA will likely get more attention soon

I just posted a comment giving a couple of real-life anecdotes showing this effect.

EA will likely get more attention soon

For the last several years, most EA organizations did little or no pursuit of media coverage. CEA’s advice on talking to journalists was (and is) mostly cautionary. I think there have been good reasons for that — engaging with media is only worth doing if you’re going to do it well, and a lot of EA projects don’t have this as their top priority.  

 

I think this policy has been noticeably harmful, tbh. If the supporters of something won't talk to the media, the net result seems to be that the media talk to that thing's detractors instead, and ... (read more)

Sort forum posts by: Occlumency (Old & Upvoted)

But in the process you might also promote other users - so you'd have to check, for each recipient of their strong upvotes, whether that was so, and then repeat the process for each newly promoted user, and so on.

5 · Charles He · 2mo
That's a really good point. There are many consequent issues beyond the initial update, including the iterative issue of multiple induced "rounds of updating" mentioned in your comment. After some thought, I think I am confident the issue you mentioned is small.

* First, note that there is an end point to this process, e.g. a "fixed point" at which the rounds stop.
* Based on some guesses, the second and subsequent rounds of promotions get much, much smaller in the number of people affected (as opposed to a process that explodes). This is because the karma and vote power schedule has huge karma intervals between ranks, compared to the per-account karma increase from this process. Also, these intervals greatly increase as rank increases (something something concavity). To be confident, I guess that these second-round-and-after computations are probably <<50% of the initial first-round computational cost.

Finally, if the above wasn't true and the increased costs were ridiculous (1000x or something), you could just batch this, say, every day, and defer updates in advanced rounds to later batches. This isn't the same result, as you permanently have this sort of queue, but I guess it's a 90% good solution. I'm confident but at the same time LARPing here, and would be happy if an actual CS person corrected me.
Sort forum posts by: Occlumency (Old & Upvoted)

Pretty sure that would be computationally intractable. Every time someone was upvoted beyond a threshold you'd need to check the data of every comment and post on the forum.

6 · Charles He · 2mo
Someone I know has worked with databases of varying sizes, sometimes in a low-level, mechanical sense. From my understanding, to update all of a person's votes, the database operation is pretty simple: scan the voting table for that ID, do a little arithmetic for each upvote, and call another table or two. You would only need to do the above operation for each "newly promoted" user, which is like maybe a few dozen users a day at the worst. Many personal projects involve heavier operations. I'm not sure, but a Google search might be 100x more complicated.
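For concreteness, the per-voter update described above is roughly one indexed read plus a small write per affected document. A minimal sketch using SQLite, with invented table and column names (votes and documents) rather than whatever the Forum actually runs on:

```python
import sqlite3


def reweight_voter(conn: sqlite3.Connection, voter_id: str, old_w: int, new_w: int) -> None:
    """Apply a newly promoted voter's new strong-upvote weight to their past votes.

    One scan of the votes table for this voter id, then a little arithmetic
    (adding the weight difference) on each document they strong-upvoted.
    """
    delta = new_w - old_w
    rows = conn.execute(
        "SELECT document_id FROM votes WHERE voter_id = ? AND vote_type = 'strong_upvote'",
        (voter_id,),
    ).fetchall()
    for (doc_id,) in rows:
        conn.execute(
            "UPDATE documents SET karma = karma + ? WHERE id = ?",
            (delta, doc_id),
        )
    conn.commit()
```

Run once per newly promoted user - a few dozen a day on the estimate above - this stays cheap even without batching.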
Sort forum posts by: Occlumency (Old & Upvoted)

Another concern is karma inflation from strong upvotes. As time goes by, the  strength of new strong upvotes increases (details here), which means more recent posts will naturally tend to be higher rated even given a consistent number of users.

4 · MichaelStJules · 2mo
Maybe we should automatically update upvotes to track people's current karma?
3 · aogara · 2mo
I agree, upvotes do seem a bit inflated. It creates an imbalance between new and old users that continually grows as existing users rack up more upvotes over time. This can be good for preserving culture and norms, but as time goes on, the difference between new and old users only grows. Some recalibration could help make the site more welcoming to new users. In general, I think it would be nice if each upvote counted for roughly 1 karma. Will MacAskill’s most recent post received over 500 karma from only 250 voters, which might exaggerate the reach of the post to someone who doesn’t understand the karma system. On a smaller scale, I would expect a comment with 10 karma from 3 votes to be less useful than a comment with 10 karma from 5 - 8 votes. These are just my personal intuitions, would be curious how other people perceive it.
1 · Emrik · 2mo
Good point! Edited the post to mention this.
A tale of 2.75 orthogonality theses

I just posted a reply to a similar comment about orthogonality + IC here.

A tale of 2.75 orthogonality theses

(Epistemic status of this comment: much weaker than that of the OP)

I am suspicious a) of a priori non-mathematical reasoning being used to generate empirical predictions on the outside view and b) of this particular a priori non-mathematical reasoning on the inside view.  It doesn't look like AI algorithms have tended to get more resource grabby as they advance. AlphaZero will use all the processing power you throw at it, but it doesn't seek more. If you installed the necessary infrastructure (and, ok, upgraded the storage space), it  could presumably... (read more)

2 · Greg_Colbourn · 2mo
AlphaZero isn't smart enough (algorithmically speaking). From Human Compatible (p.207): From wireheading, it might then go on to resource grab to maximise the probability that it gets a +1, or maximise the number of +1s it's getting (e.g. filling planet-sized memory banks with 1s); although already it would have to have a lot of power over humans to be able to convince them to reprogram it by sending messages via the Go board! I don't think the examples of humans (Bezos/Witten) are that relevant, inasmuch as we are products of evolution, are "adaptation executors" rather than "fitness maximisers", are imperfectly rational, and tend to be (broadly speaking) aligned/human-compatible by default.
Bad Omens in Current Community Building

I don't think background rate is relevant here. I was contesting your claim that 'the people who are most impactful within EA have both high alignment and high competence'. It depends on what you mean by 'within EA', I guess. If you mean 'people who openly espouse EA ideas', then the 'high alignment' seems uninterestingly true almost by definition. If you mean 'people who are doing altruistic work effectively', then Gates and Musk are, IMO, strong enough counterpoints to falsify the claim.

2 · Linch · 2mo
There are many/most people who openly espouse EA ideas who I do not consider highly aligned.
Bad Omens in Current Community Building

Maybe I'm just wrong. I only have a lay understanding of GDPR, but my impression was that keeping data that people had shared with you, without their knowledge that you were storing it, was getting into sketchy territory.

Creating Individual Connections via the Forum

Pimp: this is very much the sort of stuff we're now trying to facilitate on the Gather Town.

Bad Omens in Current Community Building

When I came to university I had already read a lot of the Sequences ... 

 

You'd read the Sequences but you thought we were a cult? Inconceivable! 

(/sarcasm)

Oddly, while I agree with much of this post (and strong upvoted), it reads to me as evidencing many of the problems it describes! Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology (tithes being t... (read more)

Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology

 

The hero worship is, I think, especially concerning, and is a striking way that implicit/"revealed" norms contradict explicit epistemic norms for some EAs.

Bad Omens in Current Community Building

In case anyone isn't aware of it, that's very much the demographic that CEEALAR (aka the EA hotel) is trying to support!

Bad Omens in Current Community Building

They are surprised that somebody interested in EA might be unhappy to discover that the committee members have been recording the details of their conversation in a CRM without asking.

Side note: morality aside, in Europe this is borderline illegal, so seems like a very bad idea.

3 · Ben_West · 2mo
Can you clarify why you think it's "borderline illegal"? I assume you are referring to GDPR, but I'm not aware of any reason why the normal "legitimate interest" legal basis wouldn't apply to group organizers.
Bad Omens in Current Community Building

I'm not sure the most impactful people need have high alignment. We've disagreed about Elon Musk in the past, but I still think he's a better candidate for the world's most counterfactually positive human than anyone else I can think of. Bill Gates is similarly important and similarly kinda-but-conspicuously-not-explicitly aligned.

Yes, if you rank all humans by counterfactual positive impact, most of them are not EAs, because most humans are not EAs.

This is even more true if you are mostly selecting on people who were around long before EA started, or if you go by ex post rather than ex ante counterfactual impact (how much credit should we give to Bill Gates' grandmother?)

(I'm probably just rehashing an old debate, but also Elon Musk is in the top 5-10 of contenders for "most likely to destroy the world," so that's at least some consideration against him specifically).

EA and the current funding situation

Sub-hypothesis: the people who find extravagant spending distasteful are disproportionately likely to be the people who object to the billionaires that enable it - and so the spending isn't what pisses them off so much as what draws their attention to the scenario they dislike.

EA and the current funding situation

But morally-motivated people, especially on college campuses, often find seemingly-extravagant spending distasteful.

 

As far as I can see, no-one else has raised this, but to me the optics of having large sums of money available and not spending it are as bad as or worse than spending too freely. Cf. Christopher Hitchens' criticism of Mother Teresa - and closer to home, Evan's criticisms a few years ago that EA Funds payouts were being granted too infrequently. For what it's worth, I find the latter a much bigger concern.

3 · Arepo · 2mo
Sub-hypothesis: the people who find extravagant spending distasteful are disproportionately likely to be the people who object to the billionaires that enable it (https://www.vox.com/future-perfect/2018/12/17/18141181/foundation-charity-deduction-democracy-rob-reich) - and so the spending isn't what pisses them off so much as what draws their attention to the scenario they dislike.
EA and the current funding situation

I regret that I have but one strong upvote to give this. Lack of feedback on why some of the projects I've been involved in didn't get funding has been incredibly frustrating.

One further benefit of getting it is that it can help across the ecosystem when you get turned down by Funder A and apply to Funder B - if you can pass on the feedback you got from Funder A (and how you've responded to it), that can save a lot of Funder B's time.

As a meta-point, the lack of feedback on why there's a lack of feedback also seems very counterproductive. 

A tale of 2.75 orthogonality theses

'By default' seems like another murky term. The orthogonality thesis asserts (something like) that it's not something you should place a bet at arbitrarily long odds on, but maybe it's nonetheless very likely to work out because, per Drexler, we just don't code AI as an unbounded optimiser - which you might still call 'by default'.

At the moment I have no idea what to think, tbh. But I lean towards focusing on GCRs that definitely need direct action in the short term, such as climate change, over ones that might be more destructive but where the relevant direct action is likely to be taken much further off.

2 · Greg_Colbourn · 2mo
So by 'by default' I mean without any concerted effort to address existential risk from AI, or just following "business as usual" with AI development. Yes, Drexler's CAIS would be an example of this. But I'd argue that "just don't code AI as an unbounded optimiser" is very likely to fail due to mesa-optimisers and convergent instrumental goals emerging in sufficiently powerful systems. Interesting you mention climate change, as I actually went from focusing on that pre-EA to now thinking that AGI is a much more severe, and more immediate, threat! (Although I also remain interested in other more "mundane" GCRs.)
EA coworking/lounge space on gather.town

I had a look at it, but my instinct was the reverse - it feels much more natural to me to walk an avatar through a virtual space than to drag a video feed of my face around.

But if there are a lot of EAs who prefer Spatial Chat, maybe there'd be enough demand to support both at some point. My instinct would be to avoid splitting the space any more just yet, but since these places can all link to each other, over time we could build a linked network of virtual spaces (we already have a 2-way link to an EA VR space, for example).

What would you desire in an EA dating site?
Answer by Arepo · May 02, 2022 · 10

I think the real struggle would be how to get anywhere near enough users to make the app usable - there are hundreds of copycat dating apps which don't place onerous restrictions on who can use them and still struggle to get traction, and you're talking about opening it to maybe 5,000-10,000 people in the world.

So my first thought would be 'make the category more general'. It's not like I'm only interested in dating other EAs - and I also doubt my profile of partners is particularly typical among EAs, or that there will even be that much commonality in who we prefer to d... (read more)

A tale of 2.75 orthogonality theses

Hi Steven,

To clarify, I make no claims about what experts think. I would be moderately surprised if more than a small minority of them pay any attention to the orthogonality thesis, presumably having their own nuanced views on how AI development might pan out. My concern is with the non-experts who make up the supermajority of the EA community - who frequently decide whether to donate their money to AI research vs other causes, who are prioritising deeper dives, who in some cases decide whether to make grants,  who are deciding whether to become experts,... (read more)

My concern is with the non-experts…

My perspective is “orthogonality thesis is one little ingredient of an argument that AGI safety is an important cause area”. One possible different perspective is “orthogonality thesis is the reason why AGI safety is an important cause area”. Your belief is that a lot of non-experts hold the latter perspective, right? If so, I’m skeptical.

I think I’m reasonably familiar with popular expositions of the case for AGI safety, and with what people inside and outside the field say about why or why not to work on AGI safety. And... (read more)

2 · Linch · 2mo
Not my field, but my understanding is that using the uniform prior is pretty normal/common for theoretical CS.
My bargain with the EA machine

This isn't a post about careers, it's about moral philosophy! I have been toying with a thought like this for years, but never had the wherewithal to coherently graph it. I'm glad and jealous that someone's finally done it!

No-one 'is a utilitarian' or similar; we're all just optimising for some function of at least two variables, at least one of which we can make a meaningful decision about. I genuinely think this sort of reasoning resolves a lot of problems posed by moral philosophers (e.g. the demandingness objection), not to mention helps map abstractions about moral philosophy to something a lot more like the real world.

1 · Emrik · 2mo
Oh, I like this. Seems good to have a word for it, because it's a set of constraints that a lot of us try to fit our morality into. We don't want it to have logical contradictions. Seems icky. Though it does make me wonder what exactly I mean by 'logical contradiction'.
What makes a statement a normative statement?

You could make a case that it is a normative statement - certainly not everyone would consider it not to be. It would have been clearer if I'd phrased my response as a question: 'would you consider that statement to be normative?'

My sense is that you have a pretty good idea of how philosophers use the word 'normative', and you're pursuing a level of clarity about it that's impossible to obtain. Since it (by definition) doesn't map to anything in the physical or mathematical worlds, and arguably even if it did, it just isn't possible to identify a class of ... (read more)

1 · Vynn · 2mo
Yup
What makes a statement a normative statement?

I'm not sure how to interpret 'real' there. If you mean 'real' as opposed to something like a hologram, I'd say the sentence is underdefined. If you mean it as synonymous with a proposition about physical state, such that 'there are two oranges in front of me' would be approximately equivalent to 'the two oranges in front of me are real', then I think you're asking about any proposition about physical state.

In which case I don't think there's much reason to call them 'normative' - no statement can be proven by physical observation, so that would make basically all parseable statements normative, which would make the term useless. Although I'm sympathetic to the idea that it is.

EA coworking/lounge space on gather.town

Can confirm Gathertown allows screensharing - I'm doing it as I type - and we've actually just been setting up some of the desk pods to allow communication with other people in the same pod (you can also cluster round the same desk, though that does feel a bit cramped for more than two).

Btw, I'm hoping that the Discord and Gather servers will have a positive sum effect where they link to each other and collaboratively increase the number of EAs who get into online coworking. We've placed a prominent link to the Discord server near the entrance to the Gather space :)

What makes a statement a normative statement?

Defining a normative statement as 'a statement with a normative "should"' has certain problems...

1 · Leo · 2mo
That's true, but that comment was only meant for you, who seemed confused about what kind of 'should' you should use in a normative sentence. I took for granted that you already knew 'normative', because you had posted a nice and useful answer to the original question.
What makes a statement a normative statement?

'If you add 1 to 1 you should get 2' is not a statement people would necessarily consider normative.

1 · Vynn · 2mo
Why is it not considered normative? It follows the rules of arithmetic. The operation should be carried out according to the "correct" procedure, and failure to do so results in something "wrong". So why does it not count as normative?
1 · Leo · 2mo
Aristotle would answer "'should' is said in many ways". I was of course thinking of the normative 'should', which I believe is the first that comes to mind when someone asks about normative sentences. But I'd be highly interested in a different kind of counterexample: a normative sentence without a 'should' stated or implied.
What makes a statement a normative statement?

I don't think there's a perfect answer, but as a heuristic I defer to the logical positivists - if you can't even in principle find direct evidence for or against the statement by observing the physical world and you can't mathematically prove it, and on top of that it sounds like a statement about behaviour or action, then you're probably in normland.

1 · Vynn · 2mo
Would ontological statements which can't be proven by observation also count as normative statements? E.g. 'I am real', 'the world is real', 'I am not real', 'the self is not real', etc.
EA Houses: Live or Stay with EAs Around The World

It's a lovely idea! Do you have an idea of how to keep it up to date, so that old, no-longer-available rows don't detract from the active ones?

Good question! We're planning on pinging listings on the sheets roughly every four months to see if they're still up to date. We also have a column that says when each listing was last updated.

EA Forum's interest in cause-areas over time and other statistics

Great post!

Couple of nitpicks: in the coloured charts some of the colours (eg global poverty/moral philosophy) are reeeally hard to tell apart.

I would also like to see absolute numbers of posts on, e.g., the popularity charts, since high votes for, e.g., 'career choice' could be explained by those posts being disproportionately likely to be important announcements from 80k or similar, where what's really getting upvoted is often 80k's work rather than the words on the screen. And high stats for criticism could be (though I suspect aren't) explained by much fewer critical posts leading to greater extremes.

EA coworking/lounge space on gather.town

Please do!

And I haven't yet - the most users online so far was 6, and the free plan allows up to 25 (unless by concurrent users they mean 'members'?), but I'm very happy to do so if it gets anywhere near becoming a limiting factor!

ETA: It's substantially more expensive than I thought to do this, so I wouldn't be able to self-fund it, but if we hit the point where we repeatedly need space for 25+ users, I'd expect we could get funding from a community group. Or in the worst-case scenario we can set up an adjacent space with a 2-way portal between the two.

1 · Emrik · 2mo
If cost is a problem, I could definitely contribute up to $200/month. But I expect that if we get 25 concurrent users, I won't be the only one interested in funding the project. Having an online EA hub like that could be extremely valuable.
Software Developers: Should you apply to work at CEA?

I've currently got a request in with LTFF so I could end up doing something totally unrelated to software development.

If that doesn't come through I would look at this again, though for the reasons I wrote about in the agencies sequence I'd want to learn more before rushing into it.

Also for the reasons I wrote about in that sequence I think it's probably better in the abstract for EA developers to work for a dedicated agency like Markus' if that becomes an option, though he's only in the early stages of proving the concept at the moment, so won't be hiring for a few months at least.
