New & upvoted

Shortform

Brief reflections on the Conjecture post and its reception (written by the non-technical primary author):

* Reception was a lot more critical than I expected. As last time, many good points were raised that pointed out areas where we weren't clear.
* We shared it with reviewers (especially ones we would expect to disagree with us) hoping to pre-empt these criticisms. They gave useful feedback.
  * However, what we didn't realize was that the people engaging with our post in the comments were quite different from our reviewers and didn't share the background knowledge that our reviewers did.
* We included our end-line views (based on previous feedback that we didn't do this enough), and I think it's those views that felt very strong to people.
* It's really, really hard to share the right level of detail and provide adequate context. I think this post managed to be both too short and too long.
  * Short: because we didn't make as many explicit comparisons benchmarking research.
  * Long: we felt we needed to add context on several points that weren't obvious to low-context readers.
* When editing a post, it's pretty challenging to figure out what assumptions you can make and what your reader won't know, because there's a broad range of knowledge. I think nested thoughts could be helpful for keeping posts a reasonable length.
* We initially didn't give as much detail in some areas because the other (technical) author is time-limited and didn't think it was critical. The editing process is extremely long for a post of this size and gravity, so we had to make decisions about when to stop iterating.
* Overall, I think the post still generated some interesting and valuable discussion, and I hope it at the very least causes people to think more critically about where they end up working.
* I am sad that Conjecture didn't engage with the post as much as we would have liked.
* I think it's difficult to
You don't have to be an asshole just because you value honesty

In Kirsten's recent EA Lifestyles advice column [https://ealifestyles.substack.com/p/ea-lifestyles-advice-column] (NB, paywalled), an anonymous EA woman reported being bothered by men in the community whose "radical honesty" leads them to make inappropriate or hurtful comments. An implication is that these guys may have viewed the discomfort of their women interlocutors as a (maybe regretful) cost of upholding the important value of honesty. I've encountered similar attitudes elsewhere in EA - i.e., people being kinda disagreeable/assholeish/mean under the excuse of 'just being honest'.

I want to say: I don't think a high commitment to honesty inevitably entails being disagreeable, acting unempathetically, or ruffling feathers. Why? Because I don't think it's dishonest not to say everything that springs to mind. If that were the case, I'd be continually narrating my internal monologue to my loved ones, and it would be very annoying for them, I'd imagine.

If you're attracted to someone, and they ask "are you attracted to me?", and you say "no" - ok, that's dishonest. I don't think anyone should blame people for honestly answering a direct question. But if you're chatting with someone and you think "hmm, I'm really into them", and then you say that - I don't think honesty compels that choice, any more than it compels you to say "hold up, I was just thinking about whether I'd have soup or a burger for dinner".

I don't know much about the Radical Honesty movement, but from this article [https://www.esquire.com/news-politics/a26792/honesty0707/], it seems like they really prize just blurting out whatever you think. I do understand the urge to do this: I really value self-expression. For example, I'd struggle to be in a situation where I felt like I couldn't express my thoughts online and had to self-censor a lot. But I want to make the case that self-expression (how much of what comes to min
In a recent post on the EA Forum (Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley [https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female]), I couldn't help but notice that comments from famous and/or well-known people got lots more upvotes than comments by less well-known people, even though the content of the comments was largely similar. I'm wondering to what extent this serves as one small data point in support of the "too much hero worship/celebrity idolization in EA" hypothesis, and (if so) to what extent we should do something about it. I feel kind of conflicted, because in a very real sense reputation can be a result of hard work over time,[1] and it seems unreasonable to say that people shouldn't benefit from that. But it also seems antithetical to the pursuit of truth, philosophy, and doing good to weight the messenger so heavily over the message. I'm mulling this over, but it is a complex and interconnected enough issue that I doubt I will come up with any novel ideas from some casual thought. Perhaps changing the upvote buttons to something more like "this content nurtures a discussion space that lines up with the principles of EA" would help? I'm not confident that would change much.

1. ^ Although not always. Sometimes a person is just in the right place at the right time. Big issues of genetic lottery and class matter. But as a very simplistic example, my highest-ranking post on the EA Forum is not one of the posts that I spent hours and hours thinking about and writing, but instead one where I simply linked to an article about EA in the popular press and basically said "hey guys, look how cool this is!"
SOME ALFRED WORKFLOWS (PRODUCTIVITY TOOLS)

Alfred [https://www.alfredapp.com/] is a pretty powerful Mac app which lets you set hotkeys for a lot of things. Here are some workflows I use very frequently that I would recommend others try out. If you have favorite Alfred workflows, I'd love to hear about them in the comments! For the record, my Alfred app thinks that I have used a hotkey or Alfred expansion on average 9.3 times per day since March 2022.

KILL

I frequently get distracted or overwhelmed by having too many windows / tabs open, and losing track of what I was supposed to be doing. Whenever I notice that I am getting sidetracked, it's extremely useful for me to have a 'kill switch', which just closes down all of my tabs and lets me start over. I now have a 'kill' Alfred hotkey set up to forcibly quit Chrome, Slack, Microsoft suite products, various programming apps, etc., so that I can start again from a blank slate. I use this hotkey multiple times a day on average while working. In theory, if I need to find a Chrome tab again, I can always go into my history / 'recently closed tabs' section - but I don't ever recall needing to do this since installing this hotkey over a year ago.

I used to be the person with 1,000 Chrome tabs open at all times. I now think this is extremely damaging to my attention span and would lightly recommend other people who rely heavily on tabs in their workflow to try out a tab-limiting policy to see if it helps.

TZ

I work on a distributed team and have family around the world, so knowing the current time in a few key timezones is often extremely helpful for coordination purposes. The tz workflow [https://github.com/jaroslawhartman/TimeZones-Alfred#readme] is really nice for this. I added a 'tzdetail' hotkey, which opens up a URL to an external website [https://www.timeanddate.com/worldclock/meetingtime.html?iso=20230613&p1=224&p2=43&p3=136&p4=102&p5=240], displaying the time across the day in major cities in all the tim
I just finished reading Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. I think the book is worth reading for anyone interested in truth and figuring out what is real, but I especially liked the aspirational Mertonian norms [https://en.wikipedia.org/wiki/Mertonian_norms], a concept I had never encountered before, and which served as a theme throughout the book. I'll quote directly from the book to explain, but I'll alter the formatting a bit to make it easier to read: Although there are lots of differences between the goals of EA and the goals of science, in the areas of similarity I think there might be benefit in more awareness of these norms and more establishment of them as standards. Much of this seems to line up with broad ideas of scout mindset and epistemic rationality. My vague impression is that the EA community generally holds up fairly well when measured against these norms. I suspect there is some struggle with organized skepticism (ideas from high-status people often get accepted at face value) and there are a lot of difficulties with disinterestedness (people need resources to survive and to pursue their goals, and most of us have a desire for social desirability), but overall I think we are doing decently well.

Recent discussion

I have heard people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think that EA community building is the right choice for a significant number of people, and wanted to lay out why I believe this.

AI Safety Community building seems important

I’m excited to see AI Safety-specific community building and I hope it continues to grow. This piece is not intended to claim that no-one should be working on AIS community building. However, CEA’s groups team sits within an EA organisation, not an AI-safety organisation. I hope we can collaborate with AI Safety groups, as:

  • It would likely benefit both parties to sync on issues like data collection
  • I think there are lessons learned from
...

You say community building, but the specifics you describe seem more like recruiting and outreach. All three of those are good things, but I think conflating them is unhelpful. I think this is especially true because EA is already very aggressive at recruiting and mediocre at post-recruitment support.

Chris Leong · 11h
I agree that EA community building could be a good option for some subset of people who want to ensure that AI goes well. There are some people who are well-positioned to do EA community building, but lack the skills to contribute towards AI governance or technical community building. Actually, I would go further and say that a much broader set of people would be suited for EA community building than for anything AI safety-specific. That said, there are some other options you should consider too. If AI safety is what you care about and you don’t have sufficient AI safety or governance knowledge to work on it directly, you may want to consider doing either x-risk or longtermist community building to narrow the focus. On the other hand, it’s also very important to consider the interests of those in your area. Additionally, you may want to consider whether you could have a greater impact by volunteering to provide ops support to someone working on AI safety movement building. That said, this requires you to be highly motivated and reliable - something that is much harder than it seems - otherwise your impact might be minimal.
Brad West · 6h
Being a highly motivated, reliable, and intelligent volunteer seems pretty underestimated as a source of potential impact. When taking a salaried position, your impact is something close to your superiority over the other person who might have held your position. It's easy to imagine competent employees having a negative counterfactual impact by displacing someone better. On the other hand, if you are a reliable, motivated, intelligent volunteer, you are simply providing excellent resources. Volunteering for promising projects that fall in the funding cracks could be quite high-EV for those without the financial means to help important projects otherwise. But I would not recommend such volunteering unless you are serious: it's very easy to have negative value to an organization as a volunteer if you take up its time and resources and leave shortly after, or stick around without actually doing much to help.

TLDR: I've assembled a mindmap of all EA (-related) entities I could find. You can access it at tinyurl.com/eamindmap. If I missed or misrepresented anything: leave a note (in the mindmap) and I'll add it or correct it. I've also added other existing lists of orgs to this post. 

Context

Starting my position at EA Netherlands as a co-director, I was advised by EA Pathfinder (now Successif) to make an overview of the EA space. In this way it would be easier to onboard, support community members and follow conversations. 

I made this mindmap of EA(-related) entities and it quickly got out of hand. I asked around whether such overviews already existed. There were a few: 
 

Overview of EA organizations[1]

  • EA Org list (EA Opportunity Board) -  From this list, I use
...

There are lots more AI Safety orgs and initiatives. Not sure if it would be practical to add them all.

See here for many of them: aisafety.world

I wish it were possible for agree votes to be disabled on comments that aren't making any claim or proposal. When I write a comment saying "thank you" or "this has given me a lot to think about" and people agree-vote (or disagree-vote!), it feels odd: there isn't even anything to agree or disagree with there!

Joseph Lemien · 1h
I'm glad that you mentioned this. This makes sense to me, and I think it weakens the idea of this particular circumstance as an example of "celebrity idolization." If the EA Forum had little emoji reactions for "this made me change my mind" or "this made me update a bit", I would use them here. 😁
Joseph Lemien · 1h
Could I bother you to rephrase "$P$ accidentally enforces $\not P$"? I don't know what you mean by using these symbols.

Principles-based regulation

A useful 1D projection for regulatory frameworks is principles-based vs. rules-based regulation.

Principles are high-level, overarching objectives that the regulatory framework seeks to achieve. They are usually broad and relatively abstract, which means they can be applied to a wide range of specific situations. For example, we can take a look at the UK Financial Conduct Authority (FCA)’s “Principles for Businesses”, which contains principles such as:

A firm must conduct its business with due skill, care, and diligence

A firm must pay due regard to the interests of its customers and treat them fairly

A firm must manage conflicts of interest fairly, both between itself and its customers and between a customer and another client.

On the other hand, rules-based regulation attempts to set up numerous concrete rules that cover as...

Online discussion. Newcomers welcome!

Please register here to attend.

Are there concepts in Effective Altruism you want to explore more fully? Or do you have particular comments or insights about certain topics? Join our foundational topics group and discuss a different theme each month!

This June, we will discuss the concept of Crucial Considerations. We will explore how this concept has been used, review the benefits and critiques, dive into complicating factors with the idea, and more!

****

EA NYC:
Please find all EA NYC event information - including our Code of Conduct, food policy, covid policy, and information on past and future events - here on our website: https://www.effectivealtruism.nyc/events

You can also find us on Facebook, Meetup, and Eventbrite!
http://facebook.com/groups/eanyc/events
https://www.meetup.com/effective-altruism-nyc/events/
https://www.eventbrite.com/o/effective-altruism-nyc-55938838923

For those new to effective altruism, here are a couple of good introductions. In short, EA is about using evidence to carefully analyze how, given limited resources, we can help others the *most*.
https://www.youtube.com/watch?v=48VAQtGmfWY

Dov · 2h

This event was great, thank you Rocky and Alex for setting it up!

Crossposted to LessWrong.

This is the second post in this sequence and covers Conjecture. 

Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. 

We shared a draft of this document with Conjecture for feedback prior to publication (and include their response below). We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post. We'd like to invite others to share their thoughts in the comments, or anonymously via this form.

Key Takeaways

For those...

Does this really make you feel safe? This reads to me as a possible reason for optimism, but hardly reassures me that the worst won’t happen or that this author isn’t just failing to imagine what could lead to strong instrumental convergence (including different training regimes becoming popular).

Rebecca · 5h
I don’t think the issue is that they have an opinion, rather that they have the same opinion - like, ‘all the researchers have the same p(doom), even the non-researchers too’ is exactly the sort of thing I’d imagine hearing about a cultish org
Linch · 5h
I also consider Conjecture's official reply to be rather defensive, but I guess it could just be cultural differences.

I think running criticism past the people whose work is being criticized often helps make the criticism more productive, but it can be difficult. To make it easier, I'm sharing a step-by-step guide ⬇️ you can use. 

Please don’t feel like you have to read this whole guide or be super thorough if you’re thinking of running a draft past people. Don’t let perfect be the enemy of the good.

Outline of my suggestions on how to run criticism past people

  1. Make some key decisions about how you’ll share the criticism.
  2. Find an email address or reach out via the Forum.
    1. You can do this anonymously.
  3. Include relevant information — like a description of your timeline and boundaries on what you will and won’t be doing (like “I probably won’t have the capacity to respond to private replies before
...

In this post the criticizer gave the criticizee an opportunity to reply in-line in the published post—in effect, the criticizee was offered the last word. I thought that was super classy, and I’m proud to have stolen that idea on two occasions (1,2).

If anyone’s interested, the relevant part of my email was:

You can leave google docs margin comments if you want, and:

  • If I’m just straight-up wrong about something, or putting words in your mouth, then I’ll just correct the text before publication.
  • If you leave a google docs comment that’s more like a counte
...
Chris Leong · 11h
“Are you up for modifying your draft based on private responses from the people whose work you're criticizing?” If you’ve solicited feedback, I would suggest that your obligation to modify your draft depends on how strong their response is. If it’s clear that your draft contains significant factual inaccuracies, then publishing it unchanged reflects poorly on you (though you shouldn’t automatically feel a need to respond to every attempt to dispute a fact, as often they may be pointing out something minor, or you may have reason to doubt their account). If the average reader, after reading both your post and their response, would come away with the impression that your critique was ideological or one-sided, then you should probably edit your draft, though there might be exceptions (say, if someone is traumatised to the point where they lack the ability to be objective, so the alternative would be not publishing at all). To be clear, I’m not saying that you need to edit just because they’ve made some good or reasonable points. I’m more suggesting that you shouldn’t ignore points if doing so would make a good-faith reader feel like your article was pushing them away from the truth.
Lizka · 11h
Appendix 3. Other assumptions for the post

Following up on this note [https://forum.effectivealtruism.org/posts/kjcMZEzksusCHfHiF/productive-criticism-running-a-draft-past-the-people-you-re#Note__criticism_can_be_incredibly_useful]: below is a non-exhaustive list of other potentially-relevant assumptions that I won’t bother discussing but that I might rely on (to different degrees) in this post:

  1. Criticism of someone’s work is more likely than other kinds of critical writing (like disagreement with someone’s written arguments) to be wrong or misleading because of an information asymmetry. It’s pretty common for criticism to be at least somewhat misleading (even if it still has significant and useful points).
    1. When you’re writing about work that isn’t an entirely public output (communication/research that everyone can access), you’re more likely to simply lack context or be wrong about what you’re writing about. You can’t just reference specific parts of the work; the person or people who’ve done the work know things that you don’t.
  2. It’s useful to help people orient towards criticism of their work in healthy and positive ways, which can mean trying to make the process less stressful for them. This isn’t against the critical spirit [https://forum.effectivealtruism.org/posts/CkikpvdkkLLJHhLXL/supportive-scepticism-in-practice] or the like.
  3. Wrong or somewhat misleading criticism of people’s work can be pretty harmful, especially if a response from those criticized doesn’t come right away.
    1. Readers come away with incorrect (and extremely negative) opinions of the work being criticized.
      1. It can be harder for readers to tell for themselves whose side or claims they should believe than if this were a disagreement about public content (similar to the dynamics outlined in Assumption 1); they, too, lack information and will be potential

I’m Luke Freeman, and I currently serve as the executive director of Giving What We Can (GWWC). You’re welcome to ask me anything! I’ll start answering questions on Thursday June 15th.

Logistics/practical instructions: 

  • Please post your questions as comments on this post. The earlier you share your questions, the easier it will be for me to get to them.
  • Please upvote questions you'd most like answered.
  • I’ll start answering questions on June 15th. Questions posted after that are less likely to get answers.
  • I’m excited about this, but can’t commit to answering all the questions. If you want to share many questions, consider sharing and/or upvoting which ones you’re particularly interested in.
  • (This is an “AMA” — you can explore others here.)
     

Some context: 

  • I’ve been leading the team at Giving What We Can since 2020.
  • I’ve been
...

I want to donate as much as I can, but how much is too much / ultimately counterproductive?

For example, is it worth settling for a noticeably worse (but still adequate) phone service provider for the sake of donating an extra $6 (i.e. 3 bed nets) a month?

Or is sacrificing at that scale too extreme, in your opinion?

Vaidehi Agarwalla · 9h
Do you think "effective giving" is a good name for the giving side of EA? What other names have you considered (if any)?

How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:

  1. The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.
  2. The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.
  3. AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons. 

While none of these ideas are new, my goal is to provide a single article...

I'm also a little surprised you think that modeling when we will have systems using similar compute as the human brain is very helpful for modeling when economic growth rates will change.

In this post, when I mentioned human-brain FLOP, it was mainly used as a quick estimate of AGI inference costs. However, different methodologies produce similar results (generally within 2 OOMs). A standard formula to estimate compute costs is 6*N FLOP per forward pass, where N is the number of parameters. Currently, the largest language models are estimated to have between 1...
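The comparison sketched in this thread can be reproduced with a few lines of arithmetic. This is a sketch under stated assumptions: the 6*N FLOP-per-forward-pass figure is taken from the comment above, while the 1-trillion-parameter count and 10 tokens/s generation rate are illustrative values chosen for this example, not figures from the post.

```python
import math

# Rough inference-cost comparison, using the 6*N-FLOP-per-forward-pass
# figure from the comment above. Parameter count and tokens/sec below
# are illustrative assumptions.
def inference_flops_per_second(n_params: float, tokens_per_sec: float) -> float:
    """FLOP/s for running a dense model at ~6*N FLOP per token generated."""
    return 6 * n_params * tokens_per_sec

BRAIN_FLOPS = 1e15   # central brain estimate discussed in the thread, FLOP/s
n_params = 1e12      # hypothetical 1-trillion-parameter model (assumption)

model_flops = inference_flops_per_second(n_params, tokens_per_sec=10)
oom_gap = math.log10(BRAIN_FLOPS / model_flops)
print(f"model: {model_flops:.1e} FLOP/s, gap to brain estimate: {oom_gap:.1f} OOMs")
```

Under these assumptions the model runs at 6e13 FLOP/s, a bit over one order of magnitude below the 10^15 FLOP/s brain estimate, which is consistent with the "within 2 OOMs" claim above.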

Matthew_Barnett · 6h
I agree. A better phrasing could have emphasized that, although both theory and compute are required, in practice the compute part seems to be the crucial bottleneck. The 'theories' that drive deep learning are famously pretty shallow, and most progress seems to come from tinkering, scaling, and writing more efficient code. I'm not aware of any major algorithmic contribution that depended on fundamental analysis from deep learning theory [https://deeplearningtheory.com/] (though perhaps these happen all the time and I'm just not sufficiently familiar to know). I think the alternative theory of a common cause is somewhat plausible, but I don't see any particular reason to believe it. If there were a common factor that caused progress in computer hardware and algorithms to proceed at a similar rate, why wouldn't other technologies that shared that cause grow at similar rates? Hardware progress has been incredibly fast over the last 70 years -- indeed, many people say that the speed of computers is by far the most salient difference between the world in 1973 and 2023. And yet algorithmic progress has apparently been similarly rapid, which seems hard to square with a theory of a general factor that causes innovation to proceed at similar rates. Surely there are bottlenecks that slow down progress in both places, but the question is what explains the coincidence in rates. I expect innovation in AI in the future to take a different form than innovation in the past. When innovating in the past, people generally found a narrow tool or method that improved efficiency in one narrow domain, without being able to substitute for human labor across a wide variety of domains. Occasionally, people stumbled upon general-purpose technologies [https://en.wikipedia.org/wiki/General-purpose_technology] that were unusually useful across a variety of situations, although by and large these technologies are quite narrow c
Matthew_Barnett · 7h
Thanks for the comment. I think you're right that my post neglected to discuss these considerations. On the other hand, my bottom-line probability distribution at the end of the post deliberately has a long tail to take into account delays such as high cost, regulation, fine-tuning, safety evaluation, and so on. For these reasons, I don't think I'm being too aggressive. Regarding the point about high cost in particular: it seems unlikely to me that TAI will have a prohibitively high inference cost. As you know, Joseph Carlsmith estimated [https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/] brain FLOP/s, with a central estimate of 10^15. This is orders of magnitude higher than the cost of LLMs today, and it would still be comparable to prevailing human wages at current hardware prices. In addition, there are more considerations that push me towards TAI being cheap:

1. A large fraction of our economy can be automated without physical robots. The relevant brain anchor for intellectual tasks is arguably the cerebral cortex rather than the full human brain. And according to Wikipedia [https://en.wikipedia.org/wiki/Cerebral_cortex], "There are between 14 and 16 billion neurons in the human cerebral cortex." It's not clear to me how many synapses there are in the cerebral cortex, but if the synapse-to-neuron ratio is consistent throughout the brain, then the inference cost of the cerebral cortex is plausibly about 1/5th the inference cost of the whole brain.

2. The human brain is plausibly undertrained relative to its size, due to evolutionary constraints that push hard against delaying maturity in animals. As a consequence, ML models with brain-level efficiency can probably match human performance at much lower size (and thus, inference cost). I currently expect this consideration to mean that the human brain is 2-10x larger than "necessary".

3. The chin
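The 1/5th figure in point 1 of the comment above can be checked with quick arithmetic. A hedged sketch: the ~86 billion total-neuron count is a commonly cited estimate that the comment itself doesn't state, and the calculation assumes (as the comment does) that inference cost scales with neuron count at a uniform synapse-to-neuron ratio.

```python
# Back-of-envelope check of the "cortex ≈ 1/5 of whole-brain inference
# cost" claim. whole_brain_neurons is an assumed figure (a commonly cited
# estimate), not taken from the comment.
cortex_neurons = 16e9        # upper end of the 14-16 billion range quoted
whole_brain_neurons = 86e9   # assumption: commonly cited total neuron count

# If synapses per neuron are uniform, inference cost scales with neurons.
fraction = cortex_neurons / whole_brain_neurons
print(f"cortex fraction of whole-brain inference cost: ~{fraction:.2f}")
```

This gives roughly 0.19, i.e. about 1/5, matching the comment's estimate.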