New & upvoted


Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they included a clause capping investors' returns at 100x their investment:

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely that they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in themselves.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don't blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
* I assume it's really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count as one); better work on Step 5 seems clearly valuable.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a "decisive strategic advantage" in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
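For anyone who wants the cap mechanics above in concrete terms, here is a minimal sketch in Python, using entirely hypothetical numbers (it does not reflect OpenAI LP's actual legal terms), of how a 100x return cap would split a payout between investors and the nonprofit:

```python
# Illustrative sketch only (hypothetical numbers, not OpenAI LP's actual terms):
# how a 100x return cap might split a payout between investors and a nonprofit.

def split_payout(investment: float, total_payout: float, cap_multiple: float = 100.0):
    """Return (investor_share, nonprofit_share) under a simple return cap."""
    capped_return = investment * cap_multiple          # the most investors can ever receive
    investor_share = min(total_payout, capped_return)  # investors are paid first, up to the cap
    nonprofit_share = total_payout - investor_share    # everything above the cap goes to the nonprofit
    return investor_share, nonprofit_share

# Hypothetical example: a $10M investment against a $10B payout.
investors, nonprofit = split_payout(10e6, 10e9)
print(f"Investors: ${investors:,.0f}")   # Investors: $1,000,000,000  (hits the 100x cap)
print(f"Nonprofit: ${nonprofit:,.0f}")   # Nonprofit: $9,000,000,000  (the excess)
```

The point is simply that investors' upside is bounded while the nonprofit's is not; in a genuinely AGI-scale payout, nearly all of the value would flow past the cap.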
[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Popular comments

Recent discussion

This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...


Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]


I'm not sure I understand this claim, and I can't see that it's supported by the cited paper. 

Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect. 

4
David Thorstad
14h
Here's the talk version for anyone who finds it easier to listen to videos: 
Tiresias commented on Killing the moths 11m ago
173
10

This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.

Content warning: description of animal death.

I live in a ...


This post was moving, thank you for writing it. I have dealt with a similar situation, and found it impossible. I've dealt with that impossibility by trying to justify what I've done, and absolve myself. Your post is forthright: you killed the moths. We can move on from it, but we don't need to rationalize it.

2
RedStateBlueState
22m
This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.

Please note that my previous post took the following positions:

1. That SBF did terrible acts that harmed people.

2. That it was necessary that he be punished. To the extent that it wasn't implied by the previous comment, I clarify that what he did was illegal (EDIT: which would involve a finding of culpable mental states that would imply that his wrongdoing was no innocent or negligent mistake).

3. The post doesn't even take a position as to whether the 25 years is an appropriate sentence.

All of the preceding is consistent with the proposition that he also a...

6
Ben Millwood
29m
While I see what you're saying here, I prefer evil to be done inconsistently rather than consistently, and every time someone merely gets what they deserve, instead of what some unhinged penal system (whether in the US or elsewhere) thinks they deserve, that seems like a good thing to me. (I don't personally have an opinion on what SBF actually deserves.)

Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!

We’d like to help people in EA find impactful work, so we’ve set up this thread, and another called Who's hiring? (we did this last in 2022[2]).

Consider...


TLDR: I write meta-analyses on a contract basis, e.g. here, here, and here. If you want to commission a meta-analysis, and get a co-authored paper to boot, I'd love to hear from you. 

Skills & background: I am a nonresident fellow at the Kahneman-Treisman Center at Princeton and an affiliate at the Humane and Sustainable Food Lab at Stanford. Previously I worked at Glo Foundation, Riskified, and Code Ocean.

Location/remote: Brooklyn.

Resume/CV/LinkedIn: see here.

Email/contact: setgree at gmail dot com

Other notes: I'm reasonably subject-agnostic, thou...

Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...


Thanks for writing and sharing this Alexander – I thought it was an unusually helpful and transparent post.

26
NickLaing
11h
I really appreciated this report; it seemed like one of the most honest and open communications to come out of Open Philanthropy, and it helped me connect with your priorities and vision. A couple of specific things I liked:

I appreciated the comment about the Wytham Abbey purchase, recognising the flow-on effects Open Phil decisions can have on the wider community, and even just acknowledging a mistake - something which is both difficult and uncommon in leadership.

"But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building."

I also liked the admission of slow movement on lead exposure. I had wondered why I hadn't been hearing more on that front, given the huge opportunities there and the potential for something like the equivalent of a disease "elimination" with a huge effect on future generations. From what I've seen, my instinct is that it had the potential to be a clearer, more urgent, and more cost-effective focus than other Open Phil areas like air quality.

All the best for this year!

Hi folks,

I have a business idea that I'm excited about. I'd like to meet potential co-founders or collaborators. 

I'm an EA, and will donate a very large chunk of any money I make. 

Is there a platform, Facebook group, Whatsapp group, organisation, etc, where I ...


Thanks, Seth!

Applications are now open (here)! Deadline: 20th October 2024 (11:59 PM EDT).

EA Global brings together a wide network of people who have made helping others a core part of their lives. Speakers and attendees share new thinking and research in the field of effective altruism and coordinate on global projects.

Application details

  • Application deadline: 20th October 2024 (11:59 PM EDT)
  • Default ticket price: £400 GBP
  • Discounts are available — you can select from a range of ticket price options during checkout
  • All applications for this event will receive a response by 23rd October 2024 at the latest. Most applications receive a response within two weeks.

Travel expenses

We are prepared to reimburse travel expenses for some attendees. You may apply for travel support in the application form if you are unable to attend without support. Please check our travel support policy for more details.

Should you ...


I am following the advice of Aaron Gertler and writing a post about my job. 80,000 Hours has independent career path pages dedicated to getting an economics PhD and doing academic research, but the specifics of my personal experience may be of interest. Plus, it was fun ...

3
Kevin Kuruc
3h
Hi Vasco, thanks for reading. And thanks for your dedication to animals :) I've seen a few of your posts on this topic.

If you think you'll be interested in economics PhD programs, I would encourage you to aim to apply for the next cycle (Dec '24/Jan '25). There's a lot of randomness in the process, and your grades will matter more than RA experience, so I'd say go for it as soon as you can, given how long these programs are. If you don't get in anywhere, you can be applying for RA-ships in the meantime, and take one if that's your best option before trying again the following cycle. You should be able to determine within the next 10 months whether you're interested in the material enough to set off on the PhD, and I wouldn't waste any time before applying if you decide you are.

However, I might recommend engaging with economics research directly, rather than Marginal Revolution courses. That will give you a better flavor of what you'll do in a PhD. Even if you don't understand everything in cutting-edge research articles, you'll be able to get a sense of how problems are discussed and debated, which will be a clue as to whether or not it's a field you're excited by. (Maybe start here, or here.)

Good luck!

Thanks for the advice, Kevin!

Cross-posted on LessWrong.

This post is part of a series by Convergence Analysis’ AI Clarity team.

Justin Bullock and Elliot Mckernon have recently motivated AI Clarity’s focus on the notion of transformative AI (TAI). In an earlier post, Corin Katzke introduced...


Hi Jack, thanks for your comment! I think you've raised some really interesting points here. 

I agree that it would be valuable to consider the effect of social and political feedback loops on timelines. This isn't something I have spent much time thinking about yet - indeed, when discussing forecast models within this article, I focused far more on E1 than I did on E2. But I think that (a) some closer examination of E2 and (b) exploration of the effect of social/political factors on AI scenarios and their underlying strategic parameters - including th...

1
Zershaaneh Qureshi
2h
Thank you for this feedback - these are good points! Glad you liked the article.

The way I approached collecting personal-judgement-based predictions was roughly as follows:

1. I came up with an initial list of people who are well known in this space.
2. I did some digging on each person on that list to see if any of them had made a prediction in the last few years about the timeline to TAI or similar (some had, but many of them hadn't).
3. I reviewed the list of results for any obvious gaps (in terms of either demographics or leanings on the issue) and then iterated on this process.

It was in step 3 that I ended up seeking out Robin Hanson's views. Basically, from my initial list, I ended up with a sample that seemed to be leaning pretty heavily in one direction. I suspected that my process had skewed me towards people with shorter timelines - as someone who is very new to the AI safety community, the people who I have become aware of most quickly have been those who are especially worried about x-risks from AI emerging in the near future.

I wanted to consciously make up for that by deliberately seeking out a few predictions from people who are known to be sceptical about shorter timelines. Robin Hanson may not be as renowned as some of the other researchers included, but his arguments did receive some attention in the literature and seemed worth noting. I thought his perspective ought to be reflected, to provide an example of the Other Position. And as you point out - many sceptics aren't in the business of providing numerical predictions. The fact that Hanson had put some rough numbers to things made his prediction especially useful for loose comparison purposes.

I agree with what you say about personal predictions needing to be taken with a grain of salt, and the direction they might skew things in, etc. Something I should have perhaps made clearer in this article: I don't view each source mentioned here as a piece of evidence with equal weig
1
Zershaaneh Qureshi
2h
Thank you for this helpful response, Pablo! This is a really interesting result to note. I was not aware of this when writing my post, but I'll plan to include Samotsvety forecast results in future work on this subject, and when I come to write up my findings more formally.