I feel this. It would be cool if you could drop a post and put a Zoom link at the bottom to discuss it in like 24 or 48 hours. That way there can still be a discussion, but it maybe skirts around some of this obsessive forum-checking ego stuff.
Re the "EAs should not should" debate about whether we can use the word "should", which pops up occasionally, most recently on the "university groups need fixing" post:
My take is that you can use "should"/"ought" as long as your target audience has sufficiently grappled with meta-ethics and both parties are clear about which ethical system you are using.
"Should" (to an anti-realist) is shorthand for (the best action under X moral framework). I don't mind it being used in this context (though I agree with ozzies previous shortform on this that it seems u...
I don't know if they're doing the ideal thing here, but they are doing way better than I imagined from your comment.
Yep, after walking through it in my head plus re-reading the post, it doesn't seem egregious to me.
I think you might have replied on the wrong subthread, but a few things:
This is the post I was referring to. At the time of the extension, they claim they had ~3k applicants. They also infer that they had far fewer applicants (in quantity or quality) for the fish welfare and tobacco taxation projects, but I'm not sure exactly how to interpret their claim.
...Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of th
That post opens with
If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare
This is apparently a pattern:
In recent years we have had more charity ideas than we have been able to find founders for.
Seems pretty plausible they value a marginal new charity at $100k, or even $1M, given the amount of staff time and seed funding that goes into each participant.
I also suspect they're more limited by applicant quality than number of spaces.
That post further says
...it is true that we get a lot of appli
Hi Peter, thanks for the response - I am/was disappointed in myself too.
I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don't have kids or anything like that, and I can't really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants and I respect that.
What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner's ...
Two (barely) related thoughts that I’ve wanted to bring up. Sorry if it’s super off topic.
The Rethink Priorities application for a role I applied to two years ago told applicants it was a timed application and not to take more than two hours. However, there was no actual verification of this; it was simply a Google Form. In the first round I "cheated" and took about 4 hours. I made it to the second round. I felt really guilty about this, so I made sure not to go over on the second round. I didn't finish all the questions and did not get to the next round. I was left with t...
Hi Charlie,
Peter Wildeford from Rethink Priorities here. I think about this sort of thing a lot. I'm disappointed in your cheating but appreciate your honesty and feedback.
We've considered using a time verification system many times and even tried it once. But it was a pretty stressful experience for applicants, since the timer then required the entire task to be done in one sitting. The system we used also introduced some logistical difficulty on our end.
We'd like to try to make things as easy for our applicants as possible since it's already such a ...
It would be interesting to compare my likes on the EA Forum with other people's. I feel like what I up/downvote is way more honest than what I comment. If I could compare with someone the posts/comments where we had opposite reactions, i.e. they upvoted and I downvoted, I feel like it could start some honest and interesting discussions.
Fantastic post/series. The vocab words have been especially useful to me. A few mostly disjunctive thoughts, even though I overall agree.
3. If humans become grabby, their values are unlikely to differ significantly from the values of the civilization that would've controlled it instead.
I think this is phrased incorrectly. I think the correct phrasing is:
3. If humans become grabby, their values (in expectation) are ~ the mean values of a grabby civilization.
Not sure if it's what you meant, but let me explain the difference with an example. Let's say there are three societies:
[humans | zerg | Protoss]
For simplicity, let's say the winner takes all of the lightcone.
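To make the distinction concrete, here is a toy sketch with made-up numbers (the societies' "values" as points on a single axis, and hypothetical win probabilities). It shows how human values can equal the *mean* values of a grabby civilization in expectation while still differing significantly from any particular civilization that would've won instead:

```python
# Toy illustration; all numbers are made up for this sketch.
# Each society's "values" are a point on a single axis, and
# whichever society wins takes the whole lightcone.

values = {"humans": 0.5, "zerg": 0.0, "protoss": 1.0}
win_prob = {"zerg": 0.5, "protoss": 0.5}  # conditional on humans losing

# Expected values of the civilization that would've controlled
# the lightcone instead of humans:
counterfactual_mean = sum(win_prob[s] * values[s] for s in win_prob)

print(counterfactual_mean)                      # 0.5: matches humans in expectation
print(abs(values["humans"] - values["zerg"]))   # 0.5: big gap from any one winner
```

So phrasing 3' (values match in expectation) can hold even when phrasing 3 (values unlikely to differ significantly from the actual counterfactual winner) fails.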
Moreover, even in the face of strong selection pressure, systems don't seem to converge on similar equilibria in general.
I like this thought, but to push back a bit: nearly every species we know of is incredibly selfish, or at best only cares about its very close relatives. Sure, crabs are way different from lions, but OP is describing something much lower-dimensional, which seems more likely to generalize regardless of context.
If you asked me to predict what (animal) species live in the rainforest just by showing me a picture of the rainforest I wouldn't have a cha...
I stopped being vegetarian almost 2 years ago. One of the biggest reasons I'm not a vegetarian is that I stay up late pretty much every day and don't always feel like cooking or eating snacks, so I will go to whatever is open near me. During university, nothing really stayed open after 10 anyway because Evanston is a lame place. So I would often eat at or before 10, and if I was eating out there were still vegetarian options (stir fry with tofu, Chipotle, etc.) at this time.
Now I live in a predominantly eastern European and Mexican area of Chicago. There i...
Assume there are two societies that passed the great filter and are now grabby. Society EA and society NOEA.
Society EA, you could say, is quite similar to our own society. The majority of the dominant species is not concerned with passing the great filter, and most individuals are inadvertently increasing the chance of the species' extinction. However, a small contingent has become utilitarian rationalists and specced heavily into reducing x-risk. Since the group passed the great filter, you can assume this is in large part due to this contingent of EAs/g...
I think this would be good. One thing is that in many situations, if you can write p(success) in a meaningful way, then you should consider running a competition instead of grantmaking. It's not going to work in every situation, but I find this the most fair and transparent approach when possible.
I definitely have very little idea what I'm talking about, but I guess part of my confusion is that inner alignment seems like a capability of AI? Apologies if I'm just confused.
I don't remember the specifics, but he was looking at whether you could make certain claims about models acting a certain way on data outside the training data, based on the shape and characteristics of the training data. I know that's vague, sorry - I'll try to ask him and get a better summary.
It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.
Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models.
One question I'm always left with is: what is the boundary between being an AGI safety researcher and a capabilities researcher?
For instance, my friend is getting his PhD in machine le...
So I can choose then?
Yes, but I think to be very specific, we should call the problems A and B (for instance, the quiz is problem A and the exam is problem B), and a choice to work on problem A equates to spending your resource[1] on problem A in a certain time frame. We can represent this as a_{i,j}, where i is the period in which we chose a and j is the number of times we have picked a before. j is sorta irrelevant for problem A, since we can only use one resource max to study, but relevant for problem B...
What do you mean by
the rate at which they will grow or shrink over time.
Specifically, what mathematical quantity is "they"?
I don't fully comprehend why we can't include it. It seems like the ITN framework does not describe the future of the marginal utility per resource spent on the problem, but rather the MU/resource right now. If we want to generalize the ITN framework across time, which theoretically we need to do to choose a sequence of decisions, we need to incorporate the fact that tractability and scale are functions of time (and, even further, of the previous decisions we make).
All this is going to do is change the resulting answer from MU/$ to MU/$(t), where t is time. Everything still cancels out the same as before. In practice I don't know if this is actually useful.
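A minimal sketch of what I mean, with all functional forms and numbers made up purely for illustration: each ITN factor becomes a function of t, the units still cancel exactly as in the static framework, and the output is MU/$(t) rather than a single MU/$ figure.

```python
# Hypothetical time-dependent ITN factors (all numbers invented).

def importance(t):
    # utility gained per % of the problem solved; e.g. the problem shrinks
    return 100.0 * (0.95 ** t)

def tractability(t):
    # % of the problem solved per % increase in resources; e.g. it gets easier
    return 0.01 * (1 + 0.1 * t)

def neglectedness(t):
    # % increase in resources per extra dollar; e.g. the field gets more crowded
    return 1.0 / (1000 + 200 * t)

def mu_per_dollar(t):
    # Units cancel just as in the static framework;
    # the only change is that each factor now depends on t.
    return importance(t) * tractability(t) * neglectedness(t)

for t in range(3):
    print(t, mu_per_dollar(t))
```

Choosing a sequence of decisions would then mean comparing mu_per_dollar across problems at each t, rather than once at t = 0.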
The more I think about this the more confused I get... Going to formalize and answer your questions but it might not be done till tomorrow.
Ok so I'm trying to come up with an example where
I think an example that perfectly reflects this is hard to come up with, but there are many things that are close.
was this meant to be a response to my comment? I can't tell. If so I'll try to come up with some examples
I agree with your assessment that Vasco's comment is not really on topic.
I also feel like there is a lack of substantive discussion and just overall engagement on the forum (this post and comment section being an exception).
I'm not exactly sure why this is (maybe there just aren't enough EAs), but it seems related to users being worried that their comments might not add value, combined with the lack of anonymity and in-group dynamics. In general I find Hacker News and subreddits like r/neoliberal to be significantly more thought-provoking and en...
You can only press one button per year due to time/resource/etc. constraints. Moreover, you can only press each button once.
No I wasn’t
FYI I edited the comment slightly, but it doesn’t change anything. Can you explain how the urgency of the button presses relates to the scale?
Let’s say I can press button a, which will create 1 utility, or button b, which will create 2 utility.
Button a is only pressable for the next year, while button b is pressable for the next two years.
In this example I believe the scale has nothing to do with the urgency.
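The button setup above can be sketched directly (the deadlines and utilities are the ones stated; the enumeration over orderings is just my illustration). It shows that urgency changes the optimal *ordering* without changing either button's scale:

```python
# One press per year, each button at most once.
from itertools import permutations

buttons = {
    "a": {"utility": 1, "last_year": 1},  # pressable only this year
    "b": {"utility": 2, "last_year": 2},  # pressable this year or next
}

def total_utility(order):
    total = 0
    for year, name in enumerate(order, start=1):
        # A press only counts if the button hasn't expired yet.
        if year <= buttons[name]["last_year"]:
            total += buttons[name]["utility"]
    return total

for order in permutations(buttons):
    print(order, total_utility(order))
# ('a', 'b') -> 3, ('b', 'a') -> 2: pressing b first forfeits a.
```

Scale (1 vs 2 utility) is untouched by the deadlines; only the sequencing decision depends on urgency.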
I would say we are basically on the exact same page in terms of the overall vision. I'm also trying to get at these logical chains of information that we can travel backwards through to easily sanity check and also do data analysis.
Where I think we diverge is that if there is no underlying structure to these logical chains beyond a bunch of arrows pointing between links, it reduces our ability to automate and take away insights.
A few examples
A few things:
Theoretical idea that could be implemented into Metaculus
tl;dr: add an option to submit models of how to forecast a question, and also voting on the models.
To be more concrete: when someone submits a question, in addition to forecasting the question, you can submit a Squiggle -- or just plain mathematical -- model of your best current guess of how to approach the problem. You define each subcomponent that is important to the final forecast and also how these subcomponents combine into the final forecast. Each subcomponent automatically becomes another...
I love this!
Sort of an aside, but it would be really lovely if we could build a database with every prediction on Metaculus and also tons of estimates from other sources (academia, etc.) so that people could do Squiggle-type estimations that source all the constants.
Depending on what you mean by "mistake", I don’t think those implications are absurd at all.
The agricultural revolution wasn’t a decision humanity made; it’s game theory. More resources, more babies, and your ideas survive.
I’m not even saying that modernization was a mistake - and btw, we could be less happy and I would still not necessarily say it was a mistake (again, depending on what you mean by mistake). It’s just that I think you are anthropomorphizing cultural natural selection as a well-thought-out decision with the intention of maximizing current utility.
Are you claiming that this would be a necessary thing to prove, or that it is true but OP didn’t include it? I would certainly consider becoming a hunter-gatherer if I didn’t have an established life and friends and family.
I’m confused why you and everyone else in this thread are so quick to dismiss the idea that hunter-gatherers have more happiness/life satisfaction/well-being.
This is not at all obvious to me.
When you say transaction costs, I assume you are referring to more than just money. But it’s confusing to me whether this is actually cheaper (monetarily) in the short run, assuming the donors don’t just dole out money based on what felt right (or do output-based rather than outcome-based finance). They still need to pay evaluators to decide payouts, and potentially they have opened themselves up to more criticism, or even legal disputes, than if they had established clear guidelines. I agree the process as a whole is a lot smoother, though.
He was the Northwestern EA staff/professor sponsor for the duration of my time there (this doesn’t mean that much, though).
I think what Nuno is saying is true to an extent: more people would do argument mapping if they knew about it. I think another reason is that a lot of people are uncomfortable, from a technical standpoint, engaging with math/logic/proofs, so there is inherently more demand for prose, because pretty much everyone who could engage with logic could also engage with prose, but not the reverse.
It’s sorta like research papers vs. the articles summarizing them. Usually an article that summarizes the paper in a low-fidelity way has more demand (even ignoring the fact...
I'm not sure how I even feel about the price tag mattering, considering it is an investment we can sell later. But very quick research shows that there is a 13,000-square-foot hotel (12 rooms) in the heart of Chicago for $300,000 a room. So conservatively we could guess that a similar building in downtown Chicago would go for about $9 million. And that is pretty much the most expensive area in the city; if we are willing to go within an hour of the city center, I think you could get something of comparable quality for ~$5 million or maybe even less.
N...
You might want to check out https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ
I pretty much agree that it doesn't seem optimal to have people trying to drum up hype with a blog post when they think there is an opportunity for high impact. It would be nice to have a site with thousands of very modular forecasts/impact estimates that you can paste together, so that people can see the numbers clearly and quickly.
I think this is sorta trying to do that on a less ambitious level.
Yea, I agree that is the main crux of our disagreement. I guess a lot of it comes down to what it means for someone to have (de facto) control. Ultimately we are just setting some arbitrary threshold for what control means. I don't think it matters that much to iron out whether certain people have "control" or not, but it would probably be useful to think about it in more numerical terms relative to some sort of median EA.
Some metrics to use:
To be clear, I wasn't necessarily advocating for political organization or centralization, but I disagree that the lack of centralization is an excuse for the thought leaders when they could create centralization if they wanted to. It basically serves as a get-out-of-jail-free card for anything they do, since they have de facto control but can always lean back on not having official leadership positions. For the most part, the other comments better explain what I meant.
If I want EA to become less decentralized and have some sort of internal political system, what can I do?
I have zero power, status, or ability to influence people outside of persuasive argumentation. On the other hand, MacAskill and co. have a huge ability to do so.
The idea that we can't blame the high-status people in this community because they aren't de jure leaders, when it's incredibly likely they are the only people who could facilitate a system in which there are de jure leaders, seems misguided. I'm not especially interested in assigning blame, but...
EA is not a monolith. There is no book that has the moral framework of EA written in stone. Some people in this community most certainly are utilitarians, others aren't.
If you want to argue about what a decentralized movement is, you need to define who gets included, and then a system for weighting each agent's values as a part of the whole.
For instance, we might say, EA is composed of any agent who has attended an EAG. Then we might specify that what EA is "based in" is the weighted sum of each agent's values, where the weighting system is how many resources an agent controls.
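The weighting scheme I'm describing can be sketched in a few lines (the agents, their resources, and their value-framework weights below are all invented for illustration):

```python
# Toy sketch: movement "values" as a resource-weighted mixture of
# member values. Every agent and number here is made up.

agents = [
    {"name": "A", "resources": 1_000_000, "values": {"utilitarian": 1.0}},
    {"name": "B", "resources": 100_000, "values": {"utilitarian": 0.2, "deontological": 0.8}},
    {"name": "C", "resources": 10_000, "values": {"deontological": 1.0}},
]

total_resources = sum(a["resources"] for a in agents)

movement_values = {}
for a in agents:
    weight = a["resources"] / total_resources
    for framework, strength in a["values"].items():
        movement_values[framework] = movement_values.get(framework, 0.0) + weight * strength

print(movement_values)
```

Under this definition, what EA is "based in" is the mixture, not any single agent's framework, and the answer shifts as resources shift between agents.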
Probably many people know these, and I wouldn't say any of them are extremely aligned, but since there are no comments:
The various ARPA orgs
Congressional Budget Office
Institute for Progress
Market Shaping Accelerator
Ethical Humanist Society