You probably know much more about U.S. politics than I do, so I can't engage deeply on whether these things are really happening or how unusual they might be.
However, I suspect that much of what you're attributing to the Democratic Party is actually due to a broader trend of U.S. elites becoming more left-leaning and Democrat-voting. Even if I agreed that this shift was bad for democracy, I'm not sure how voting for Trump would fix it in the long run. A Trump presidency would likely push elites even further toward left-leaning politics.
Regarding point 1.
You're framing the situation as a choice between 'Trump, who is willing to subvert democracy' and 'the Democratic Party, who is willing to subvert democracy'. This framing implicitly acknowledges that Harris is not (especially) willing to subvert democracy.
It's very plausible to believe that both the Democratic Party and the Republican Party are roughly equally willing to subvert democracy, especially given the significant influence Trump has on the Republican Party.
It then becomes a choice between:
Trump and the Republican Party, who are both willing to subvert democracy
vs.
The Democratic Party, who are willing to subvert democracy, and Harris, who is not.
In this comparison, Harris's apparent commitment to democratic norms becomes the deciding factor when you evaluate how democratic each option is overall.
Pivotal Research is looking for an Operations & Community Manager for our 2024 Research Fellowship.
Employment Period: As soon as possible (subject to availability) to September 2024
Location: London (preferred in-person)
Employment Workload: 0.7–1 FTE
Salary Range: GBP 4,000 - 5,000 per month for 1 FTE (depending on background and experience)
Deadline: April 17th, 23:59 (CET+1), Apply Here
For the 2024 Research Fellowship, Pivotal Research (previously known as CHERI) is looking for a dedicated Operations & Community Manager to join the team. This role presents a unique opportunity to make a substantial impact in the global catastrophic risk (GCR) field by providing crucial support to the fellowship. The Operations & Community Manager will enjoy significant autonomy and decision-making authority, enabling them to play a key role in ensuring the success of the research fellowship. This position is ideal for individuals passionate about operational excellence and community engagement, aiming to contribute meaningfully to the advancement of GCR research.

We also encourage you to share this opportunity with others who may be a good fit. If we accept a fellow we contacted based on your recommendation, you'll receive $100 for each accepted candidate. The recommendation form is here.
Hi Oscar, thanks for the question! To clarify, only the fellowship has moved to the UK, not our entire organisation.
We've thought a lot about the pros and cons of moving from Switzerland and largely agree with your points.[1] The main driver for our decision was Switzerland's comparatively small GCR network.
We see the fellowship as an opportunity to immerse fellows in a rich intellectual environment, which London’s – and especially LISA’s – GCR ecosystem offers. Our experience of running fellowships outside of established hubs suggests that fellowships alone are not a great vehicle to build a new GCR hub due to their seasonal nature and limited ability to retain people long-term. Nevertheless, we do see significant value in diversification and are considering future projects outside established GCR hubs for this reason.
Hope this explains our thinking; happy to answer more questions.
Mentor access isn't a huge concern for us, since we expect most mentor-mentee interactions to happen virtually either way.
"Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025)."
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn't find any other good source that supports the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th, 2023, although it only talks about OpenAI "proposing" the rule change.
If true, this would make the profit cap less meaningful, especially for longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at roughly 1,540x by 2040, since the 100x cap would compound at 20% per year over the 15 years from 2025 (100 × 1.2^15 ≈ 1,541).
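To make the compounding explicit, here's a minimal sketch of that arithmetic in Python, assuming the rule works as the quote describes (a 100x base cap rising 20% per year starting in 2025); the function name and parameter defaults are mine for illustration, not anything from OpenAI's actual terms:

```python
# Minimal sketch, assuming the quoted rule: a 100x base profit cap
# that compounds at 20% per year starting in 2025. Function name and
# defaults are illustrative, not OpenAI's actual terms.
def profit_cap_multiple(year, base_cap=100.0, growth=0.20, start_year=2025):
    """Return the profit-cap multiple that would apply in a given year."""
    years_of_growth = max(0, year - start_year)
    return base_cap * (1 + growth) ** years_of_growth

print(round(profit_cap_multiple(2040)))  # 1541, i.e. the ~1540x figure above
```

At 20% a year the cap grows roughly tenfold every 12–13 years, which is why it stops being a meaningful constraint under long timelines.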
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn't become public knowledge through FOIA requests?
As far as I understand, the plan is for it to be a (sort of?) national/governmental institute.[1] The UK government has quite a few scientific institutes. It might be the first of its kind in the world.
In this article from early October, the phrasing implies that it would be tied to the UK government:
Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s plans.
The body would assist governments in evaluating national security risks associated with frontier models, which are the most advanced forms of the technology.
The idea is that the institute could emerge from what is now the United Kingdom government’s Frontier AI Taskforce[...].
Since Longtermism as a concept doesn't seem widely appealing, I wonder how other time-focused ethical frameworks fare, such as Shorttermism (focusing on immediate consequences), Mediumtermism (focusing on the foreseeable future), or Atemporalism (ignoring time horizons in ethical considerations altogether).
I'd guess these concepts would also be unpopular, perhaps because ethical considerations centered on timeframes feel confusing, too abstract, or even uncomfortable to many people.
If true, it could mean that any theory framed in opposition, such as a critique of Shorttermism or Longtermism, might be more appealing than the time-focused theory itself. Criticizing short-term thinking is an applause light in many circles.