The charity sector famously has lower salaries because the work is more intrinsically rewarding than regular corporate fare.
I thought it was because there's no profit to be made doing the work.
Nah, I am regularly wildly un-careful in my speech, so moving to Signal is a major benefit for me in particular.
Agree on UI though; when people text me for the first time I don't know who they are, and I have no photos for most of my contacts.
Happy to get behind this, I am always down to move to Signal. You can reach me there at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code). (Please identify yourself when you text me.)
Pretty sure non-zero people have tried, my guess is the question is "how competent of an attacker and how much effort do they put into it".
Heeheehee. Sounds like Anders poking fun at his friend live.
It's nice to see this again <3
I asked Parfit to give this talk at that EAGxOxford, a conference Jacob Lagerros and I were the lead organizers of [edit: I see James Aung posted this, who was on the team too!]. It was one of the last talks of his life. I remember writing him an email about what talk to give, and he wrote a very long word document back as an attachment. He was a very careful thinker.
Also I remember a pretty endearing interaction between him and Anders Sandberg, where Anders pretended to be a fan and got Parfit to sign a copy of his book. (It was a joke because Anders and Parfit were former roommates and good friends.)
I think chapter 4, The Kinetics of an Intelligence Explosion, has a lot of terms and arguments from EY's posts in the FOOM Debate. (I've been surprised by this in the past, thinking Bostrom invented the terms, then finding things like resource overhangs getting explicitly defined in the FOOM Debate.)
Yeah, well, I haven't thought about this case much, so maybe there's some good counterargument, but I think of personal attacks as "this person's hair looks ugly" or "this person isn't fun at parties", not "this person is not strong in an area of the job that I think is key". Professional criticism seems quite different from personal attacks, and I hold different norms around how appropriate it is to bring up in public contexts.
Sure, it's a challenge to someone to be professionally criticized, and can easily be unpleasant, but it's not irrelevant or off-topic and can easily be quite valuable and important.
Hi, can you give an example of a speculative personal attack in the post that you're referring to?
Feedback: the following page had about 1-2 letters' width of horizontal scroll when I loaded it on iPad.
Added: this page too:
Habryka left a lot of the relevant comments. My main positive is the separation of blogposts and research reports; I think that is likely pretty helpful when looking just for the high-effort research. My main negative was the decrease in information density on the grants page, a page I used to check regularly for a few years of my life. Comparing on iPad right now with the Wayback Machine, I used to see 8 grants on a page, but now I only see 2, so a 4x reduction.
Took me a while to find where you got your 2x+y from, I see it's visible if you highlight the cells in the sheet.
Here's a sheet with the score as sorted by the top 1k people, which is what I was interested in seeing: https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing
Feedback: I tried and failed on my phone to read the voting results by the ranking of how people voted. I don’t know what weighting is used in the spreadsheet so the ordering feels monkeyed-with.
(Someone told me this comment read as hostile to them; FYI I thought it was a funny series of thoughts that I had, no hostility meant at all!)
I saw this title and assumed someone was making a public criticism of CEA.
Then I saw it was written by a present CEA staff member.
And I thought "Wow, creative way to get changes made at your organization." :D
If I were Thomas Kwa right now I would be offering Eneasz $10,000 for 5% of his impact certificate for making the HPMOR podcast.
Ah, this is the true meta trap for EAs.
Woop, thank you for true but contrary datapoints.
I had three on my first day and then was emotionally done. I remember thinking "to all other people, I can either cry with joy at what you say, or cry in frustration, but no other responses are available right now".
It involved (for me) building up a ton of context and interest in one person, finding something critical to say to them, and then they were gone and it was happening again.
I mean, maybe we were all just being dumb and should handle it better. I also wonder if there's some natural way for event organizers to be like "there are set break periods where we stop 1-1s from being booked" or something, though probably that's a bad solution and there's a better one.
I'll just say from the other side that at EAGxOxford I had a lot of 1-1s and didn't find it stressful; I'm really extroverted and get a lot of energy from things like this. It's not that I never need a break or want to escape, but the burnout thing is less common for me.
After the mixup with CLR and CLTR, I can't believe there are also now two CHAIs that will sometimes be discussed on the EA Forum.
At least these ones involve very different cause areas, so should be obvious from context (as contrasted with two organisations that work on long-term risk where AI risk is a focus). Also, have some pity for the Partnership on AI and the Global Partnership on AI.
Yeah, pretty reasonable.
Well, you don’t have to be any more, because now it’s Jessica McCurdy’s reply.
To be clear, I think this instance is a fairly okay request to make as a post title, but I don't want the reasoning to imply anyone can do this for whatever reason they like.
Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.
Forgive the clickbait title, but EA is as prone to clickbait as anywhere else.
I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.
Candidly, I'm a bit dismayed that the top voted comment on this post is about clickbait.
I remember hearing that the money was just for the person and I felt alarmed, thinking that so many random people in my year at school would've worked their asses off to get $50k — it's more than my household earned in a year.
Sydney told me scholarships like this are much more common in the US, and then I updated that the money is only to be paid against college fees, which is way more reasonable. But I guess this is still kind of ambiguous? It does seem like these are two radically different products.
Thanks! The core thing I'm hearing you say is that the scholarships are the sort of thing you wouldn't fund on a cost-effectiveness metric and 80k is, but that on a time-effectiveness metric that changes it so that the scholarships are now competitive.
No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some), it's more about what metrics we hold ourselves to and how that leads us to prioritize.
I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advisi...
I didn't quite parse this paragraph:
For example, when we fund e.g. 80,000 Hours, we (amongst other activities) support their full-time advisors to advise interested people about how to have more impactful careers. With our scholarship programs, we’re also trying to cause people to spend more time on more impactful activities. But rather than do this via the 80k advisors, our scholarship programs use money “directly” (without much intermediating EA labor) to try to make impactful careers more accessible and attractive. In general, we think we get
Hm yeah, I can see how this was confusing, sorry!
I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of ded...
My personal reading of the post is that they think the scholarship decisions don't take up a lot of time, relative to 80k advisory stuff.
This is excellent branding.
Beaten to the punch by a big established player! Grr, I'll not forget this one, Waterstones. Someday I'll have my own publishing company and you'll rue the day you bought Blackwell's out from under me...
Your very own Swiss-army coal-mine! It can also be used as a hidden lair for secret planning, a well-heated winter home, and if you make a couple of changes to your strength training, a place to turn your personal exercise/workouts into valuable coal that you can sell for money.
Wait — what use do you have in mind for a coal-mine?
You can reduce carbon emissions by ceasing mining, in a nuclear war you could hide in it, and in a post-apocalyptic world it would provide a good source of energy.
Added a note just below the epistemic status.
Re hours: maybe? Personally I only imagine that being true for someone who's worked in this sort of retail before. If you haven't, and expect to do a good job, then I reckon you'll be scrambling to get oriented and execute for at least several months if not the first year. Especially so if it's a business in decline and you're working to pull it out of that decline.
+1. SSC argued that there was not enough money in politics, and I wonder to what extent the same argument applies to academic publishers. How much would it cost to buy top journals in every field? How much would it take to buy Nature, or Science?
SSC argued that there was not enough money in politics
To be clear, SSC argued that there was surprisingly little money in politics. The article explicitly says "I don’t want more money in politics".
Yeah, this is the most likely reason to not go ahead. Someone else suggested Blackwell's would have signed some legal agreement to not publish further, which would be a pretty severe obstacle.
I'm interested to understand why the publishing house is valued 10x the bookstore, I don't know why the book-publishers would make 10x-50x what the book-sellers do.
Both forms say "This form can only be viewed by users in the owner's organisation."
We've discussed the consultancies a fair bit in the team; I'd love to have consultants at the Bay Area Lightcone Office who can do high quality lit reviews or help make websites or whatever else there's demand for amongst the members.
I've not read the other post, sounds interesting.
Something I imagined while reading this was being part of a strangely massive (~1000 person) extended family whose goal was to increase the net wealth of the family. I think it would be natural to join one of the family businesses, it would be natural to make your own startup, and also it would be somewhat natural to provide services for the family that aren't directly about making the money yourself. Helping make connections, find housing, etc.
Yeah, I think you understand me better now.
And btw, I think if there are particular grants that seem not in scope for a fund, it seems totally reasonable to ask them for their reasoning and update pos/neg on them if the reasoning does/doesn't check out. And it's also generally good to question the reasoning of a grant that doesn't make sense to you.
Though it still does seem to me like those two grants are probably better fits for LTFF.
But this line is what I am disagreeing with. I'm saying there's a binary of "within scope" or not, and then otherwise it's up to the fund to fund what they think is best according to their judgment about EA Infrastructure or the Long-Term Future or whatever. Do you think that the EAIF should be able to tell the LTFF to fund a project because the EAIF thinks it's worthwhile for EA Infrastructure, instead of using the EAIF's money? Alternatively, if the EAIF thinks someth...
Yeah, that's a good point, that donors who don't look at the grants (or know the individuals on the team much) will be confused if the fund does things outside its purpose (e.g. donations to GiveDirectly, or a random science grant that just sounds cool). That sounds right. But I guess all of these grants seem to me fairly within the purview of EA Infrastructure?
The one-line description of the fund says:
The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their
The inclusion of things on this list that might be better suited to other funds (e.g. the LTFF) without an explanation of why they are being funded from the Infrastructure Fund makes me slightly less likely in future to give directly to the Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).
I think that different funders have different tastes, and if you endorse their tastes you should consider giving to them. I don't really see a case for splitting responsibilities...
I find this perspective (and its upvotes) pretty confusing, because:
Thanks for the thoughtful reply.
I do think I was overestimating how robustly you're treating your numbers and premises; it seems like you're holding them all much more lightly than I think I'd been envisioning.
FWIW I am more interested in engaging with some of what you wrote in your other comment than engaging on the specific probability you assign, for some of the reasons I wrote about here.
I think I have more I could say on the methodology, but alas, I'm pretty blocked up with other work atm. It'd be neat to spend more time reading the report and leave ...
Great answer, thanks.
I tried to look for writing like this. I think that people do multiple hypothesis testing, like Harry in chapter 86 of HPMOR. There Harry is trying to weigh some different hypotheses against each other to explain his observations. There isn't really a single train of conditional steps that constitutes the whole hypothesis.
My shoulder-Scott-Alexander is telling me (somewhat similar to my shoulder-Richard-Feynman) that there's a lot of ways to trick myself with numbers, and that I should only do very simple things with them. I looked through some of his post...
A few thoughts on this:
Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the tool box of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.) I guess the story why this works in geopolitical forecasting is folks tend to overestimate the chance 'something happens' and tend to be underdamped in increasing the likelihood of som...