
emre kaplan🔸

1877 karma
Interests: Forecasting

Comments (175)

The UK offers international participants better access as a conference location than the US or the EU.

As a Turkish citizen, I get invited to conferences in different parts of the world, and visa processes for the US and the EU have become much more difficult lately. I'm unable to even get a visa appointment for several European countries, and my appointment for the US visa was scheduled 16 months out. I believe the situation is similar for visa applicants from other countries. The UK currently offers the smoothest process, with timelines of only a few weeks. Conference organizers that seek applicants from all over the world could choose the UK over other options.

I wonder what can be done to make people more comfortable praising powerful people in EA without feeling like sycophants.

A while ago I saw Dustin Moskovitz commenting on the EA Forum. I thought about expressing my positive impressions of his presence and how incredible it was that he even engaged. I didn't do that because it felt like sycophancy. The next day he deleted his account. I don't think my comment would have changed anything in that instance, but I still regretted not commenting.

In general, writing criticism feels more virtuous than writing praise. I used to avoid praising people who had power over me, but now that attitude seems misguided to me. While I'm glad that EA provided an environment where I could feel comfortable criticising the leadership, I'm unhappy about ending up in a situation where occupying leadership positions in EA feels like a curse to potential candidates.

Many community members agree that there is a leadership vacuum in EA. That should lead us to believe that people in leadership positions should be rewarded more than they currently are. Part of that reward could be encouragement, and I'm personally committing to commenting on things I like about EA more often.

I'm confused about how the following scenario is consistent with meeting the resolution criteria, which imply at least a 50% decrease in AI company revenues:

Remmelt thinks there will likely be a crash by 2029, since AI companies are burning too much cash on data centers to run products undergoing commodification. He thinks it’s most plausible though that the crash happens on the investment side, and that model subscription revenues could end up being mostly maintained.
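To make the tension concrete, here is a minimal sketch in Python. The 50% threshold comes from the resolution criteria above; the "mostly maintained" revenue share is a hypothetical number I chose purely for illustration:

```python
# Illustrative check: the question requires at least a 50% decrease in
# AI company revenues, but revenues that are "mostly maintained" fall
# well short of that threshold.
required_drop = 0.50      # from the resolution criteria: >= 50% revenue decrease
maintained_share = 0.80   # hypothetical: "mostly maintained" means ~80% of revenue kept
actual_drop = 1.0 - maintained_share

print(f"Implied drop: {actual_drop:.0%}, meets criterion: {actual_drop >= required_drop}")
# Implied drop: 20%, meets criterion: False
```

On these assumptions, an investment-side crash with maintained subscription revenues would not resolve the question positively.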

Not inspiring but fun and relevant:

Don't do love, don't do friends
I'm only after success
Don't need a relationship
I'll never soften my grip

Don't want cash, don't want card
Want it fast, want it hard
Don't need money, don't need fame
I just want to make a change

I just wanna change, I just wanna change
I just wanna change, I just wanna change
I just wanna change

I know exactly what I want and who I want to be
I know exactly why I walk and talk like a machine
I'm now becoming my own self-fulfilled prophecy
Oh! Oh no! Oh no! Oh no! Oh!

I don't think the current systems are able to pass the Turing test yet. Quoting from the Metaculus admins:

"Given evidence from previous Loebner prize transcripts – specifically that the chatbots were asked Winograd schema questions – we interpret the Loebner silver criteria to be an adversarial test conducted by reasonably well informed judges, as opposed to one featuring judges with no or very little domain knowledge."

Thank you for the detailed reply. I'm personally not satisfied by moral theories that attribute intrinsic moral significance to species membership, but I won't be available for further discussion.

Utilitarianism is one of the more "moderate" views in the field because, at the very least, it admits that individual insects have less welfare capacity than typical humans. Unitarian rights-based theories claim that the right to life is equally strong for all sentient beings, which makes insects an even bigger priority. What is your view on moral patienthood?

Setting aside general arguments about companies' conflicts of interest regarding AI projections, I want to note that these companies' revenue projections do not merely extrapolate straight lines from current trends.

Different sources suggest OpenAI does not expect to be profitable until 2029, and its revenue projection for 2029 is around $100-120 billion. Similarly, Anthropic expects $34.5 billion in revenue in 2027. These are very significant numbers, but for comparison, Microsoft has an annual revenue of $250 billion. When I see headlines like "AGI by 2027", I expect something far scarier than $34.5 billion in annual revenue. Of course one can argue that business deployment of AI takes time, that companies can't capture all the value they produce, and so on. Nonetheless, I think these numbers are helpful for keeping things in perspective.
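As a back-of-the-envelope illustration, here is a minimal Python sketch using only the figures quoted above; the labels and the framing as shares of Microsoft's current revenue are my own, not the companies':

```python
# Projected AI company revenues (from the figures cited in the comment above),
# compared against Microsoft's ~$250B current annual revenue. Purely illustrative.
projections = {
    "OpenAI 2029 (low end)": 100e9,    # ~$100B projected
    "OpenAI 2029 (high end)": 120e9,   # ~$120B projected
    "Anthropic 2027": 34.5e9,          # $34.5B expected
}
microsoft_annual = 250e9               # ~$250B per year today

for label, revenue in projections.items():
    share = revenue / microsoft_annual
    print(f"{label}: ${revenue / 1e9:.1f}B, {share:.0%} of Microsoft's current revenue")
```

Even the most optimistic of these projections amounts to roughly half of what one incumbent software company earns today, which is the perspective the comparison is meant to convey.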

Thanks a lot for your comments, Alex. I really appreciate it, as I want to develop my thinking on this topic. Thanks a lot for the suggestions as well.
