All of Jess_Riedel's Comments + Replies

I listed this example in my comment, it was incorrect by an order of magnitude, and it was a retrodiction.  "I didn't look up the data on Google beforehand" does not make it a prediction.

4
Lukas Finnveden
11mo
Yeah sorry, I didn't mean to say this directly contradicted anything you said. It just felt like a good reference that might be helpful to you or other people reading the thread. (In retrospect, I should have said that and/or linked it in response to the mention in your top-level comment instead.)

(Also, personally, I do care about how much effort and selection is required to find good retrodictions like this, so in my book "I didn't look up the data on Google beforehand" is relevant info. But it would have been way more impressive if someone had been able to pull that off in 1890, and I agree this shouldn't be confused for that.)

Re "it was incorrect by an order of magnitude": that seems fine to me. If we could get that sort of precision for predicting TAI, that would be awesome and outperform any other prediction method I know about.

I'm also a little surprised you think that modeling when we will have systems using similar compute as the human brain is very helpful for modeling when economic growth rates will change.  (Like, for sure someone should be doing it, but I'm surprised you're concentrating on it much.) As you note, the history of automation is one of smooth adoption. And, as I think Eliezer said (roughly), there don't seem to be many cases where new tech was predicted based on when some low-level metric would exceed the analogous metric in a biological system. The key t... (read more)

2
Matthew_Barnett
10mo
In this post, when I mentioned human brain FLOP, it was mainly used as a quick estimate of AGI inference costs. However, different methodologies produce similar results (generally within 2 OOMs). A standard formula to estimate compute costs is 6*N FLOP per forward pass, where N is the number of parameters. Currently the largest language models are estimated to have between 100 billion and 1 trillion parameters, which works out to 6e11 to 6e12 FLOP/forward pass.

The Chinchilla scaling law suggests that inference costs will grow at about half the rate of training compute costs. If we take the estimate of 10^32 training FLOP for TAI (in 2023 algorithms) that I gave in the post, which was itself partly based on the Direct Approach, then we'd expect inference costs to grow to something like 1e15-1e16 FLOP per forward pass, although I expect subsequent algorithmic progress will bring this figure down, depending on how much algorithmic progress translates into data efficiency vs. parameter efficiency.

A remaining uncertainty here is how a single forward pass for a TAI model will compare to one second of inference for humans, although I'm inclined to think that they'll be fairly similar.
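To make that arithmetic concrete, here is a minimal sketch. The 6*N convention, the parameter counts, and the 1e32 training-FLOP target are from the comment above; the ~1e25 FLOP figure for current frontier training runs is an illustrative assumption, not a claim from the post.

```python
# Rough, illustrative arithmetic only. The 6*N-per-forward-pass convention,
# the 1e11-1e12 parameter counts, and the 1e32 training-FLOP estimate are
# taken from the comment above; the ~1e25 FLOP figure for current frontier
# training runs is an assumption made for illustration.

def inference_flop(n_params: float) -> float:
    """FLOP per forward pass under the 6*N convention."""
    return 6 * n_params

for n in (1e11, 1e12):
    print(f"N = {n:.0e} params -> ~{inference_flop(n):.0e} FLOP/forward pass")

# Chinchilla-style scaling: inference cost grows at about half the rate
# (in OOMs) of training compute, i.e. proportional to sqrt(training FLOP).
current_training, tai_training = 1e25, 1e32   # assumed current vs. TAI training FLOP
scale = (tai_training / current_training) ** 0.5
for flop_now in (6e11, 6e12):
    print(f"projected TAI inference: ~{flop_now * scale:.0e} FLOP/forward pass")
```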
2
Lukas Finnveden
11mo
From Birds, Brains, Planes, and AI:

I like this post a lot but I will disobey Rapoport's rules and dive straight into criticism.

Historically, many AI researchers believed that creating general AI would be more about coming up with the right theories of intelligence, but over and over again, researchers eventually found that impressive results only came after the price of computing fell far enough that simple, "blind" techniques began working (Sutton 2019).

I think this is a poor way to describe a reasonable underlying point.  Heavier-than-air flying machines were pursued for centuries, b... (read more)

2
Matthew_Barnett
10mo
I agree. A better phrasing could have emphasized that, although both theory and compute are required, in practice the compute part seems to be the crucial bottleneck. The 'theories' that drive deep learning are famously pretty shallow, and most progress seems to come from tinkering, scaling, and writing code to be more efficient. I'm not aware of any major algorithmic contribution that would not have been possible without some fundamental analysis from deep learning theory (though perhaps these happen all the time and I'm just not sufficiently familiar to know).

I think the alternative theory of a common cause is somewhat plausible, but I don't see any particular reason to believe in it. If there were a common factor that caused progress in computer hardware and algorithms to proceed at a similar rate, why wouldn't other technologies that shared that cause grow at similar rates? Hardware progress has been incredibly fast over the last 70 years -- indeed, many people say that the speed of computers is by far the most salient difference between the world in 1973 and 2023. And yet algorithmic progress has apparently been similarly rapid, which seems hard to square with a theory of a general factor that causes innovation to proceed at similar rates. Surely there are such bottlenecks that slow down progress in both places, but the question is what explains the coincidence in rates.

I expect innovation in AI in the future will take a different form than innovation in the past. When innovating in the past, people generally found a narrow tool or method that improved efficiency in one narrow domain, without being able to substitute for human labor across a wide variety of domains. Occasionally, people stumbled upon general purpose technologies that were unusually useful across a variety of situations, although by and large these technologies are quite narrow compared to human resourcefulness. By contrast, I think it's far more plausible that ML foundation mo
1
Jess_Riedel
11mo
I'm also a little surprised you think that modeling when we will have systems using similar compute as the human brain is very helpful for modeling when economic growth rates will change.  (Like, for sure someone should be doing it, but I'm surprised you're concentrating on it much.) As you note, the history of automation is one of smooth adoption. And, as I think Eliezer said (roughly), there don't seem to be many cases where new tech was predicted based on when some low-level metric would exceed the analogous metric in a biological system. The key threshold for recursive feedback loops (*especially* compute-driven ones) is how well they perform on the relevant tasks, not all tasks. And the way in which machines perform tasks usually looks very different than how biological systems do it (bird vs. airplanes, etc.). If you think that compute is the key bottleneck/driver, then I would expect you to be strongly interested in what the automation of the semiconductor industry would look like.

Thanks: https://ibkr.com/referral/charles6837

I agree it's important to keep the weaker fraud protection on debit cards in mind. However, for the use I mentioned above, you can just lock the debit card and only unlock it when you have a cash flow problem. (Btw, if you have an IB debit card that you aren't using, you should lock it.) Debit card liability is capped at $50 and $500 if you report fraudulent transactions within 2 days and 60 days, respectively.

 

That said, I have most of my net worth elsewhere, so I'm less worried about tail risks than you reasonably would be if you were mostly invested through IB.

0
MichaelDickens
1y
That's good, I didn't know that!

If you have non-qualified investments and just keep money in a savings account in case of unexpected large expenses or interruptions to your income, it may be better to instead move the money in the savings account to Interactive Brokers and invest it. Crucially, you can get a debit card from Interactive Brokers that allows you to spend on margin (borrow) at a low rate (~5%, much less than credit cards) using your investments there as collateral. That way you keep essentially all your money invested (presumably earning more than the savings account) while still having access to liquidity when you need it.
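As a toy illustration of why this can come out ahead, here is a sketch; every rate, amount, and duration below is an assumption for the sake of the example, not IB's actual terms.

```python
# Toy comparison; every number here is an assumption, not IB's actual terms.
emergency_fund  = 10_000   # cash that would otherwise sit in a savings account
savings_rate    = 0.04     # assumed savings-account APY
expected_return = 0.07     # assumed expected annual return if invested instead
margin_rate     = 0.05     # assumed margin borrowing rate
borrowed        = 5_000    # size of a surprise expense covered by borrowing
months_on_loan  = 2        # how long the margin loan stays outstanding

# Option A: keep the fund in savings and pay the expense in cash.
gain_savings = emergency_fund * savings_rate

# Option B: invest the fund and briefly borrow on margin for the expense.
gain_invested = (emergency_fund * expected_return
                 - borrowed * margin_rate * months_on_loan / 12)

print(f"expected 1-year gain, savings + cash:   ${gain_savings:,.0f}")
print(f"expected 1-year gain, invest + margin:  ${gain_invested:,.0f}")
```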

1
Steve Trambert
1y
I've been thinking of opening an account at Interactive Brokers.  If you want to share your referral link, I'd be happy to use it.
7
MichaelDickens
1y
I use Interactive Brokers, but I don't use their debit card because I expect their fraud protections are not as good as a credit card, and I don't want to expose ~all my net worth to an easy fraud vector. I use a checking account and keep enough money for ~2 months of expenses, and keep the rest in my IB account. I don't have a savings account.
2
Pat Myron
1y
Agree moving more savings to investments has higher expected value. Different people have different volatility/risk tolerances and experience investing. Wanted to offer a couple of quick, large, tractable gains without any tradeoffs or behavior changes, and better savings accounts can still be combined with investing more.

Just to be clear: we mostly don’t argue for the desirability or likelihood of lock-in, just its technological feasibility. Am I correctly interpreting your comment to be cautionary, questioning the desirability of lock-in given the apparent difficulty of doing so while maintaining sufficient flexibility to handle unforeseen philosophical arguments?

6
Wei Dai
1y
To take a step back, I'm not sure it makes sense to talk about "technological feasibility" of lock-in, as opposed to say its expected cost, because suppose the only feasible method of lock-in causes you to lose 99% of the potential value of the universe, that seems like a more important piece of information than "it's technologically feasible". (On second thought, maybe I'm being unfair in this criticism, because feasibility of lock-in is already pretty clear to me, at least if one is willing to assume extreme costs, so I'm more interested in the question of "but can it be done at more acceptable costs", but perhaps this isn't true of others.)

That aside, I guess I'm trying to understand what you're envisioning when you say "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." What kind of mechanism do you have in mind for doing this?

Also, you distinguish between stopping philosophical progress vs stopping technological progress, but since technological progress often requires solving philosophical questions (e.g., related to how to safely use the new technology), do you really see much distinction between the two?

If the Federal government is just buying, on the open market, an amount of coal comparable to how much would have been sold without government action, then it's going to drive up the price of coal and increase the total amount of coal extracted.  How much extra coal gets extracted depends on the supply and demand curves, and the amount of coal actually burned will almost certainly be less than in the world where the government didn't act, but it does mean the environmental benefits of this plan will be significantly muted.
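A toy linear supply-and-demand model makes both effects visible; the slopes and intercepts below are arbitrary illustrative assumptions, and the sketch assumes the government never burns the coal it buys.

```python
# Toy linear model, illustrative numbers only: supply Qs = a + b*P,
# private demand Qd = c - d*P, plus perfectly inelastic government purchases G.
def coal_equilibrium(gov_purchases: float):
    a, b, c, d = 0.0, 10.0, 100.0, 10.0          # assumed supply/demand parameters
    price = (c - a + gov_purchases) / (b + d)    # clears a + b*P = c - d*P + G
    extracted = a + b * price                    # total coal extracted
    burned = c - d * price                       # private consumption (gov't coal never burned)
    return price, extracted, burned

for g in (0.0, 20.0):
    p, q, burned = coal_equilibrium(g)
    print(f"gov buys {g:>4.0f}: price {p:.1f}, extracted {q:.1f}, burned {burned:.1f}")
# Extraction rises (muting the environmental benefit), but the coal actually
# burned still falls relative to no government action.
```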

Paul Graham writes that Noora Health is doing something like this.

https://twitter.com/Jess_Riedel/status/1389599895502278659

https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/96773753706640817147890456629920587151705670001482122310561805592519359070209

Regarding your 4 criteria, I think they don't really delineate how to make the sort of judgment calls we're discussing here, so it really seems like it should be about a 5th criterion that does delineate that.

Sorry I was unclear.  Those were just 4 desiderata that the criteria need to satisfy; the desiderata weren't intended to fully specify the criteria.

If a small group of researchers at MIRI were trying to do work on verification but not getting much traction in the academic community, my intuition is that their papers would reliably meet your crite

... (read more)

Sure, sure, we tried doing both of these. But they were just taking way too long in terms of new papers surfaced per hour worked. (Hence me asking for things that are more efficient than looking at reference lists from review articles and emailing the orgs.) Following the correct (promising) citation trail also relies more heavily on technical expertise, which neither Angelica nor I have.

I would love to have some collaborators with expertise in the field to assist on the next version. As mentioned, I think it would make a good side project for a grad student, so feel free to nudge yours to contact us!

for instance if you think Wong and Cohen should be dropped then about half of the DeepMind papers should be too since they're on almost identical topics and some are even follow-ups to the Wong paper).

Yea, I'm saying I would drop most of those too.

I think focusing on motivation rather than results can also lead to problems, and perhaps contributes to organization bias (by relying on branding to assess motivation).

I agree this can contribute to organizational bias.

I do agree that counterfactual impact is a good metric, i.e. you should be less excite

... (read more)
0
jsteinhardt
3y
Thanks, that's helpful. If you're saying that the stricter criterion would also apply to DM/CHAI/etc. papers then I'm not as worried about bias against younger researchers.

Regarding your 4 criteria, I think they don't really delineate how to make the sort of judgment calls we're discussing here, so it really seems like it should be about a 5th criterion that does delineate that. I'm not sure yet how to formulate one that is time-efficient, so I'm going to bracket that for now (recognizing that might be less useful for you), since I think we actually disagree about in principle what papers are building towards TAI safety.

To elaborate, let's take verification as an example (since it's relevant to the Wong & Kolter paper). Lots of people think verification is helpful for TAI safety--MIRI has talked about it in the past, and very long-termist people like Paul Christiano are excited about it as a current direction afaik. If a small group of researchers at MIRI were trying to do work on verification but not getting much traction in the academic community, my intuition is that their papers would reliably meet your criteria. Now the reality is that verification does have lots of traction in the academic community, but why is that? It's because Wong & Kolter and Raghunathan et al. wrote two early papers that provided promising paths forward on neural net verification, which many other people are now trying to expand on. This seems strictly better to me than the MIRI example, so it seems like either:

- The hypothetical MIRI work shouldn't have made the cut
- There's actually two types of verification work (call them VerA and VerB), such that hypothetical MIRI was working on VerA that was relevant, while the above papers are VerB which is not relevant.
- Papers should make the cut on factors other than actual impact, e.g. perhaps the MIRI papers should be included because they're from MIRI, or you should want to highlight them more because they didn't get traction.
- Som

Thanks Jacob.  That last link is broken for me, but I think you mean this?

 You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc.

Well, it's biased toward safety organizations, not large organizations. (Indeed, it seems to be biased toward small safety organizations over large ones since they tend to reply to our emails!) We get good coverage of small orgs like Ought, but you're right we don't have a way to easily track individual unaffiliate... (read more)

7
jsteinhardt
3y
Yeah, good point. I agree it's more about organizations (although I do think that DeepMind is benefiting a lot here, e.g. you're including a fairly comprehensive list of their adversarial robustness work while explicitly ignoring that work at large--it's not super-clear on what grounds, for instance if you think Wong and Cohen should be dropped then about half of the DeepMind papers should be too since they're on almost identical topics and some are even follow-ups to the Wong paper).

That seems wrong to me, but maybe that's a longer conversation. (I agree that similar papers would probably have come out within the next 3 years, but asking for that level of counterfactual irreplaceability seems kind of unreasonable imo.) I also think that the majority of the CHAI and DeepMind papers included wouldn't pass that test (tbc I think they're great papers! I just don't really see what basis you're using to separate them).

I think focusing on motivation rather than results can also lead to problems, and perhaps contributes to organization bias (by relying on branding to assess motivation). I do agree that counterfactual impact is a good metric, i.e. you should be less excited about a paper that was likely to soon happen anyways; maybe that's what you're saying? But that doesn't have much to do with motivation.

Also let me be clear that I'm very glad this database exists, and please interpret this as constructive feedback rather than a complaint.

Jaime gave a great thorough explanation. My catch-phrase version: This is not a holistic Bayesian prediction. The confidence intervals come from bootstrapping (re-sampling) a fixed dataset, not summing over all possible future trajectories for reality.
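A minimal sketch of the distinction, with a made-up dataset and statistic standing in for the real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the fixed historical dataset and the quantity being estimated
# (here just a sample mean); purely illustrative, not the actual pipeline.
data = rng.normal(loc=1.0, scale=0.5, size=40)

boot_estimates = [
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)
]
lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")

# The interval only reflects re-sampling noise in this one fixed dataset;
# it does not integrate over possible future trajectories of reality,
# which is what a holistic Bayesian prediction would do.
```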

I was curious about the origins of this concept in the EA community since I think it's correct and insightful, and I personally had first noticed it in conversation among people at Open Phil. On Twitter, @alter_ego_42 pointed out the existence of the Credal Resilience page in the "EA concepts" section of this website. That page cites

Skyrms, Brian. 1977. Resiliency, propensities, and causal necessity. The Journal of Philosophy 74(11): 704-713. [PDF]

which is the earliest thorough academic reference to this idea that I know of. With apologies t... (read more)

5
MichaelA
4y
(In a similar spirit of posting things somewhat related to this general topic while apologising to Greg for doing so...) A few months ago, I collected on LessWrong a variety of terms I'd found for describing something like the “trustworthiness” of probabilities, along with quotes and commentary about those terms. Specifically, the terms included:

* Epistemic credentials
* Resilience (of credences)
* Evidential weight (balance vs weight of evidence)
* Probability distributions (and confidence intervals)
* Precision, sharpness, vagueness
* Haziness
* Hyperpriors, credal sets, and other things I haven't really learned about

It's possible that some readers of this post would find that collection interesting/useful.

Were there a lot of new unknown or underappreciated facts in this book? From the summary, it sounds mostly like a reinterpretation of the standard history, which hinges on questions of historical determinism.

Consider changing the visual format a bit to better distinguish this forum from LW. They are almost indistinguishable right now, especially once you scroll down just a bit and the logo disappears.

1
Aaron Gertler
5y
Thanks for the feedback, Jess! I'll make sure our tech team sees it.

Could you explain your first sentence? What risks are you talking about?

Also, how does one lottery up further if all the block sizes are $100k? Dividing it up into multiple blocks doesn't really work.

1
SamDeere
6y
An alternative model for variable pot sizes is to have a much larger guarantor (or a pool of guarantors), and then run rolling lotteries. Rather than playing against the pool, you're just playing against the guarantor, and you could set the pot size you wanted to draw up to (e.g. your $1000 donation could give you a 10% shot at a $10k pot, or a 1% shot at a $100k pot). The pot size should probably be capped (say, at $150k), both for the reasons Paul/Carl outlined re diminishing returns, and to avoid pathological cases (e.g. a donor taking a $100 bet on a billion dollars etc).

Because you don't have to coordinate with other donors, the lottery is always open, and you could draw the lottery as soon as your payment cleared. Rather than getting the guarantor to allocate a losing donation, you could also 'reinvest' the donations into the overall lottery pool, so eventually the system is self-sustaining and doesn't require a third-party guarantor. [update: this model may not be legally possible, so possibly such a scheme would require an ongoing guarantor]

This is more administratively complex (if only because we can't batch the manual parts of the process to defined times), but there's a more automated version of this which could be cool to run. At this stage I want to validate the process of running the simpler version, and then if it's something there's demand for (and we have enough guarantor funds to make it feasible) we can look into running the rolling version sometime next year.
0
Owen Cotton-Barratt
6y
A simple variation on the current system would allow people to opt into lottery-ing up further (to the scale of the total donor lottery pot): Ask people what scale they would like to lottery to. If $100k, allocate them a range of tickets in one block as in the current system. If (say) $300k, split their tickets between three blocks, giving them the same range in each block. If their preferred scale exceeds the total pot, just give them correlated tickets on all available blocks. If there's a conflict of preference between people wanting small and large lotteries so they're not simultaneously satisfiable (I think this is somewhat unlikely in practice unless someone comes in with $90k hoping to lottery up to $100k), first satisfy those who want smaller totals, then divide the rest as fairly as possible.
0
Paul_Christiano
6y
You have diminishing returns to money, i.e. your utility vs. money curve is curved down. So a gamble with mean 0 has some cost to you, approximately (curvature) * (variance), which is what I was referring to as the cost-via-risk. This cost is approximately linear in the variance, and hence quadratic in the block size.
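In symbols: for wealth $w$ and a mean-zero gamble $X$, a second-order Taylor expansion gives

$$\mathbb{E}[u(w+X)] \;\approx\; u(w) + u'(w)\,\mathbb{E}[X] + \tfrac{1}{2}u''(w)\,\mathbb{E}[X^2] \;=\; u(w) + \tfrac{1}{2}u''(w)\,\operatorname{Var}(X),$$

so the expected-utility cost of the gamble is roughly $\tfrac{1}{2}\,|u''(w)| \cdot \operatorname{Var}(X)$: curvature times variance, as above.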
0
CarlShulman
6y
Probably the risks of moving down the diminishing returns curve. E.g. if Good Ventures put its entire endowment into a donor lottery (e.g. run by BMGF) for a 1/5 chance of 5x the endowment, diminishing returns would mean that returns to charitable dollars would be substantially higher in the worlds where they lost than when they won. If they put 1% of their endowment into such a lottery this effect would be almost imperceptibly small but nonzero. Similar issues arise for the guarantor. With pots that are small relative to the overall field or the guarantor's budget (or the field of donors the guarantor considers good substitutes) these costs are tiny, but for very big pots they become less negligible.

Take your $100k and ask Paul (or CEA, to get in touch with another backstopping donor) for a personalized lottery. If very large, it might involve some haircut for Paul. A donor with more resources could backstop a larger amount without a haircut. If there is recurrent demand for this (probably after donor lotteries become more popular) then standardized arrangements for that would likely be set up (I would try to do so, at least).

I'm curious about why blocks were chosen rather than just a single-lottery scheme, i.e., having all donors contribute to the same lottery, with a $100k backstop but no upper limit. The justification on the webpage is

Multiple blocks ensure that there is no cap on the number of donors who may enter the lottery, while ensuring that the guarantor's liability is capped at the block size.

But of course we could satisfy this requirement with the single-lottery scheme. The single-lottery scheme also has the benefits that (1) the guarantor has significantly le... (read more)

6
Paul_Christiano
6y
A $200k lottery has about 4x as much cost-via-risk as a $100k lottery. Realistically I think that smaller sizes (with the option to lottery up further) are significantly better than bigger pots. As the pot gets bigger you need to do more and more thinking to verify that the risk isn't an issue. If you were OK with variable pot sizes, I think the thing to do would be:

* The lottery will be divided up into blocks.
* Each block will have the same size, which will be something between $75k and $150k.
* We provide a backstop only if the total donation is < $75k. Otherwise, we just divide the total up into chunks between $75k and $150k, aiming to be about $100k.
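A sketch of that splitting rule; the exact rounding behavior is a guess at one reasonable implementation of the constraints stated above, not a spec.

```python
def split_into_blocks(total: float,
                      min_block: float = 75_000,
                      max_block: float = 150_000,
                      target: float = 100_000) -> list[float]:
    """Divide `total` into equal chunks between min_block and max_block,
    aiming for chunks near `target`. Totals below min_block stay as a
    single (backstopped) block. The rounding rule is a guess, not a spec."""
    if total < min_block:
        return [total]                     # guarantor backstops this block
    n = max(1, round(total / target))      # aim for roughly target-sized chunks
    while total / n > max_block:
        n += 1
    while n > 1 and total / n < min_block:
        n -= 1
    return [total / n] * n

print(split_into_blocks(60_000))   # [60000]          (backstopped)
print(split_into_blocks(230_000))  # two blocks of 115000.0
print(split_into_blocks(500_000))  # five blocks of 100000.0
```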
5
CarlShulman
6y
The point of a donor lottery is to help donors move to an efficient scale to research their donations or cut transaction costs. But there are important diminishing returns to donations if those donations are large relative to total funding for a cause or organization. So it is possible to have a pot that is inefficiently large, so that small donors risk not plucking low-hanging fruit. If the odds and payouts were determined by the unknown level of participation, then a surge of interest could result in an inefficiently large pot (worse, one that is set after people have entered). $100,000 is small enough relative to total EA giving, and most particular causes in EA, not to worry much about that, but large enough to support increased research while reducing the expected costs thereof. If a lottery winner, after some further consideration, wants to try to lottery up to a still larger scale they can request that. However, overly large pots cannot be retroactively shrunk after winning them. One of the most common mistakes people have on hearing about donor lotteries is worrying about donors with different priorities. So making it crystal clear that you don't affect the likelihood of payouts for donors to other causes (and thus the benefits of additional research and reduced transaction costs for others) is important.

EAs seem pretty open to the idea of being big-tent with respect to key normative differences (animals, future people, etc.). But total indifference to cause area seems too lax. What if I just want to improve my local neighborhood or family? Or my country club? At some point, it becomes silly.

It might be worth considering parallels with the Catholic Church and the Jesuits. The broader church is "high level", but the requirements for membership are far from trivial.

2
Kaj_Sotala
7y
"Total indifference to cause area" isn't quite how I'd describe my proposal - after all, we would still be talking about high-level EA, a lot of people would still be focused on high-level EA and doing that, etc. The general recommendation would still be to go into high-impact causes if you had no strong preference.

The list of donation recipients from Nick's DAF is here: https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit#gid=0

I don't believe there have been any write-ups or dollar amounts, except that the above list is ordered by donation size.

I am on the whole positive about this idea. Obviously, specialization is good, and creating dedicated fund managers to make donation decisions can be very beneficial. And it makes sense that the boundaries between these funds arise from normative differences between donors, while putting fund managers in charge of sorting out empirical questions about efficiency. This is just the natural extension of the original GiveWell concept to account for normative differences, and also to utilize some of the extra trust that some EAs will have for other people ... (read more)

3
Kerry_Vaughan
7y
Part of the reason that CEA staff themselves are not fund managers is to help with this kind of conflict. I think that regardless of who we choose as fund managers, there is potential for recipients to develop personal connections with the fund managers and use that to their advantage. This seems true in almost any funding scheme where evaluating the people in charge is part of the selection process. Do you think EA Funds will make this worse somehow?

We will definitely require some level of reporting from fund managers, although we haven't yet determined how much and in what level of detail. As I mentioned in a different comment, I'd be interested in learning more about what people would like to see.

Having Nick as a fund manager is a good test case since there's a conflict given that he's a CEA trustee. Our plan so far has been to make sure that we make the presence of this conflict well known. Do you think this is a good long term plan or would you prefer something else?

Update: the Good Judgment Project has just launched Good Judgment Open. https://www.gjopen.com/

I mostly agree with this. No need to reinvent the wheel, and armchair theorizing is so tempting, while sorting through the literature can be painful. But I will say your reason #1 (the typical sociological research is of very poor quality) leads to a second effect: scouring the literature for the useful bits (of which I am sure there are plenty) is very difficult and time-consuming.

If we were talking about ending global poverty, we would not be postulating new models of economic development. Why should we demand any less empirical/academic rigor in the

... (read more)

I am mildly worried that connecting strangers to make honor-system donation trades could lead to a dispute. There are going to be more and more new faces around if the various EA growth strategies this year pan out. The fact that donation trading has been going on smoothly until now means folks might get overly relaxed, and it only takes one publicized dispute to really do damage to the culture. Even if no one is outright dishonest, miscommunication could lead to someone thinking they have been wronged to the tune of thousands of dollars.

I don't think... (read more)

3
Jonas V
9y
From the discussion I gather that we're facing the following challenges:

* Trust
* Handling amounts that can't be traded
* Maybe some technical challenges – once the number of trades, charities and countries increases, overview and coordination might become more difficult
* Also, the charities will want to know who the actual donors are, and thank them

These challenges could be resolved by a global network of EA organisations who offer donation trading (and, if possible in their legislation, donation regranting). Trust and professional communications and management seems easier to achieve with organisations who stick around longer-term than with individuals. At GBS Switzerland, we already have some of the technical and legal components needed for this (we're tax-deductible in several countries, can regrant donations, have a significant amount of donors who don't pay taxes, and have some nice spreadsheets). Making progress in this direction is not a top priority for us at the moment, but if you're interested in one of the things I've mentioned, please get in touch with me (and also Tom Ash, as he mentions below).
0
Tom_Ash
9y
Brian, would you (or someone you find) be happy to be the arbiter?
0
RyanCarey
9y
It seems good to have an arbiter...

Howdy, I'm trying to make a donation to CEA of about $4,000 this month from Canada. Would be very glad to swap with you for AMF. If you're still up for this, please shoot me an email.

Worth noting that it can still be worth posting to your personal blog if only to increase how many people see it.

Very reasonable. Thanks Ryan.

Hey, I'm a postdoc in q. info (although more on the decoherence and q. foundations side of things). I'm interested to know more about where you're at and how you found out about LessWrong. Shoot me an email if you want. My address is my username without the underscore @gmail.com .

I lean against creating multiple fora. Even if it was a good idea in the long run, I think that it's better to start with one forum so that it's easier to achieve a critical mass. It's no exaggeration to say that LW's Main/Discussion distinction was one of the most hated features of the site. I also think that fragmenting an online community and decreasing its usability are two of the most damaging things you can do to a budding community website.

This was interesting to me.

Here's one more idea to throw out there: Divide the posts into "major"... (read more)

0
Tom_Ash
10y
I like the major/minor idea, and tag filters generally. (Side note: I wasn't sure whether upvoting Jess' comment would be sufficient to express this.)
1
RyanCarey
10y
This is a fair suggestion. I guess my take on this is that most people care about reading good quality material a lot more than they care about length. For example, lots of Wei Dai's early posts on LessWrong were short but incisive, so they got upvoted.

Even a checkbox with tags has disadvantages: even if posts are categorised in stealth, if half of the posts are hidden a lot of the time, this complicates the experience of a new user. It's hard to get users to add tags and boring to have to tag things myself. This is all to benefit some fraction of users, maybe a quarter, who can then hide short posts.

On balance, I lean towards simplicity. So if people have great links or questions, I think they should just post them to the front page. If they get 10+ karma and 10+ comments there, then it's an appropriate place for them.

Thanks for info Ryan. A couple of points:

(1) I don't think minor posts like "Here's an interesting article. Anyone have thoughts?" fit very well in the open thread. The open threads are kind of unruly, and it's hard to find anything in there. In particular, it's not clear when something new has been added.

One possibility is to create a second tier of posts which do not appear on the main page unless you specifically select it. Call it "minor posts" or "status updates" or whatever. (Didn't LessWrong have something like th... (read more)

2
RyanCarey
10y
I agree that the links might not fit well in an open thread. An alternative might be to bundle up a bunch of links into a "links for November" type thread like Slate Star Codex. Then, people can put more links in the comments if appropriate. However, I lean against improving discussion by subdividing discussion fora. The main/discussion distinction was one of LessWrong's most unpopular features. In the effective altruism community, we already have a subreddit, many Facebook groups, many personal blogs, many Twitters, many Tumblrs, LessWrong, here and many other online locations. Moreover, given limited programmer resources, we're not currently looking for new features. Having said that, I'll look into the feasibility of highlighting new comments because that seems like it would be useful.

A private Facebook group is best for this. There's no straightforward way to prevent public pages from being indexed by sites like archive.today.

I'm still fuzzy on the relationship between the EA Facebook group and the EA forum. Are we supposed to move most or all of the discussion that was going on in the FB group here? Will the FB group be shut down, and if not, what will it be used for?

I think the format of the forum will present a higher barrier to low-key discussion than the FB group, e.g. I'd guess people are much less likely to post a new EA-related article if they don't have too much to add to it. This is primarily because the forum looks like a blog. Is FB-style posting encouraged?

If this has a... (read more)

1
[anonymous]
10y
I feel like a lot of potential is lost if we don't encourage asking questions and making smaller contributions (like on fb and the open thread) on the forum. I do understand that these kinds of posts don't fit into the main section of the forum. But what's the reasoning behind not having any subforums? I often think of issues I would post in subforums of this site, which I wouldn't bring up on facebook (because 100s or 1000s will read it) and that don't fit into Main. An open thread is a nice step in the right direction. It does have significant disadvantages relative to subforum(s), though, in my estimation:

* No headlines for posts, so it's not scannable
* You have to see the full post rather than the headline only
* It's not that visible, and the headline "open thread" doesn't really intrigue me as much as other posts.

Also, I feel like topic-specific subforums would generally lower the barrier for people to post something. I guess I have this intuition because the posts won't be seen by (as many) people who are not interested in your post's topic. By now I've read Ryan's comment on subforums (https://www.facebook.com/groups/effective.altruists/permalink/743662675690092/?comment_id=744027525653607&offset=0&total_comments=14). In my estimation the lost potential outweighs the costs, so consider this a vote for subforums (or at least main/discussion). I'm happy to be convinced otherwise though.
4
RyanCarey
10y
Hey Jess. Good questions. Obviously, the relationship between these is mostly decided by the community, rather than by one individual, and will emerge gradually over some number of weeks. That said, I think it's good for most substantive discussion to move here. Here should also have some blog-length posts that are lighter and fun to read.

Since most people are using the same names on Facebook as here, there are some advantages to keeping it open. It's a kind of bridge between internet and real world. It helps people to put faces to the names of people they're interacting with, which should increase willingness to meet or collaborate. As for what goes there, I think the stuff that goes there will include:

* some links (e.g. Elon Musk made a bunch more dough off this NASA deal)
* practical real-world stuff (e.g. "I'm going to X city, does anyone have a room to offer there")
* specific topics (similar to the Open Threads. There should be enough minor EA discussion to go around)

I'm kicking around a rough guideline in my head. Something like "post it to the forum if it's at least three of 'fun to read', 'substantial', 'relevant' and 'reasoned'". If it's two of those things, then an open thread or facebook is more suitable. If it's only one of those things, then it's no good.

Tom and I are thinking of ways to tie in with the Hub. I think that the Hub could use the Forum to run a survey, whereas the Forum could use the Hub's map to identify people who might want to attend a meetup. Feedback helps, especially on the FB/Forum border. Anyway, I'll bundle these thoughts into my next update post.
9
Tom_Ash
10y
I've chatted to Ryan about this, and the idea is that the forum is the place for people's writings on and discussion of EA, whereas the projects on http://effectivealtruismhub.com/ are for other things. For instance the EA Profiles are the place for information about people - e.g. showing more about who the people writing here are, and (we plan) linking to those writings. So in that sense they should be nicely complementary.
6
Peter Wildeford
10y
I thought the EA Facebook group was going to play "LW Discussion" to the EA Forum's "LW Main". Though the open thread does blur that line. There's also an EA Reddit for posting articles.

Facebook is a terrible medium for discussion, so I hope everyone, or at least all the cool people, come over here and we have an active community. I don't know if this will happen. I think this forum would be a good place for links with discussion and not just blog posts.

My impression is more that FHI is at the startup stage and CSER is simply an idea people have been kicking around. Whether or not you support CSER would depend on whether or not you think it's actually going to be instantiated. Am I confused?

0
Niel_Bowerman
10y
I'm not sure of the exact numbers, but my impression is that FHI has perhaps half a dozen full-time staff members, and CSER has one part-time person who is based in FHI and has been working on grant applications, but I am unclear about the long-term financial viability of having this person working on applying for grants.

I think the claim, which I do not necessarily support, would be this: Many people give to multiple orgs as a way of selfishly benefiting themselves (by looking good and affiliating with many good causes), whereas a "good" EAer might spread their donation to multiple orgs as a way to (a) persuade the rest of the world to accomplish more good or (b) coordinate better with other EAs, a la the argument you link with Julia. (Whether or not there's a morally important distinction between the laymen and the EAer as things actually take place in the real world is a bit dubious. EA arguments might just be a way to show off how well you can abstractly justify your actions.)

> They needn't be strangers. This has already happened in the UK EA community amongst EAs who met through 80,000 Hours and supported each other financially in the early training and internship stages of their earning to give careers.

Agreed, but if the funds are effectively restricted to people you know and can sort of trust, then the public registry loses most of its use. Just let it be known among your trusted circle that you have money that you'd be willing share for EA activities. This has the added benefit of not putting you in the awkward position of having to turn down less-trusted folks who request money.

0
Niel_Bowerman
10y
Yes, unless you were able to meet with people and create time to develop the necessary trust. Also, like any grant-making foundation, I wouldn't expect people in the registry to fund all or even most of the opportunities that came along, though the registry would lose some of its value if it appears to be unlikely to give out donations to good projects.

Will, are you saying that this fund would basically just be a registry? (As opposed to an actual central collection of money with some sort of manager.)

Do you really think people would just send money to 1st-world strangers (ii) on the promise that the recipient was training to earn to give? I have similar misgivings about (iv).

0
Owen Cotton-Barratt
10y
I don't know about the appropriate legal hurdles, but if you wanted to scale this, you would set it up as a loan with a reasonable interest rate rather than a gift. That way the individual needs to trust the central body which is making the loan (that it will use the money raised for good ends), rather than the central body trusting the individual. This is a much lower bar to cross.
0
Niel_Bowerman
10y
In addition to Carl's comments on why the registry would be easier, it has the added benefit of people being able to control their own funds and thus being more willing to contribute to the 'fund'. "Do you really think people would just send money to 1st-world strangers (ii) on the promise that the recipient was training to earn to give?" They needn't be strangers. This has already happened in the UK EA community amongst EAs who met through 80,000 Hours and supported each other financially in the early training and internship stages of their earning to give careers.