All posts


Thursday, 30 May 2024

Quick takes

EAG and covid [edit: solved]

I have many meetings planned for EAG London, which starts tomorrow, but I'm currently testing very faintly positive for covid. I feel totally fine, and I'm looking for a bit of advice on what to do. I only care about doing what's best for altruistic impact. Some of my meetings are important for my current project, and trying to reschedule them online would delay and complicate some things a little. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests have been very faintly positive. No symptoms. My options are roughly:

1. Attend the conference as normal, wearing a mask when it's not inconvenient and when I'm around many people.
2. Only go to 1-1s, wearing a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult).
3. Don't go inside and hold all of my 1-1s outside. Looking at Google Maps, there don't seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and use my laptop on the floor, and I don't know how I'd charge it. Perhaps it's better not to go if I'd have to do that.
4. Don't go at all. I don't mind doing that if it's the best thing altruistically.

In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid, and I can attend only on days when I test negative that morning. This would be the third EAG London in a row where I cancel all my meetings at the last minute because I might be contagious with covid, even though I'm probably not and I feel totally fine. This makes me a bit frustrated and biased, which is partly why I'm asking for advice here. The thing is, I think very few people are still this careful and still test, but perhaps they should be; I don't know. There are vulnerable people, and long covid can be really bad. So if I'm going to take precautions, I'd like others reading this to test and do the same, at least if you have a reason to believe you might have covid.

EDIT: I've cancelled my meetings on Friday and will cancel Saturday's if I test positive on Friday. I won't go inside unless I've been testing negative for at least 24 hours, and if I do go in, I'll try to wear a mask as much as possible. I won't attend any workshops or talks. Thank you for the comments; they made this much easier.
In late June, the Forum will (almost definitely) be holding a debate week on the topic of digital minds. As with the AI pause debate week, I'll encourage specific authors who have thoughts on this issue to post, but all interested Forum users are encouraged to take part. We will also have an interactive banner to track Forum users' opinions and how they change throughout the week.

I'm still formulating the exact debate statement, so I'm very open to input here. I'd like to see people discuss: whether digital minds should be an EA cause area, how bad putting too much or too little effort into digital minds could be, and whether there are any promising avenues for further work in the domain. I'd like a statement that is fairly clear, so that the majority of the debate doesn't end up being semantic.

The debate statement will be a value statement of the form 'X is the case' rather than a prediction of the form 'X will happen before Y'. For example, we could discuss how much we agree with the statement 'Digital minds should be a top 5 EA cause area', but this specific suggestion is uncomfortably vague. Do you have any suggestions for alternative statements? I'm also open to feedback on the general topic. Feel free to DM rather than comment if you prefer.

Wednesday, 29 May 2024

Quick takes

In the absence of a poll feature, please use the agree/disagree function and the "changed my mind" emoji on this quick take to help me get a sense of EA's views on the statement: "Working on capabilities within a leading AI lab makes someone a bad person."

* Agree = strongly agree or somewhat agree
* Disagree = strongly disagree or somewhat disagree
* ▲ reaction emoji = unsure / neither agree nor disagree
* Downvote = ~ this is a bad and divisive question
* Upvote = ~ this is a good question to be asking

Tuesday, 28 May 2024

Quick takes

The Animal Welfare Department at Rethink Priorities is recruiting volunteer researchers to support a high-impact project! We're conducting a review of interventions to reduce meat consumption, and we're seeking help checking whether academic studies meet our eligibility criteria. This will involve reviewing the full text of studies, especially methodology sections. We're interested in volunteers who have some experience reading empirical academic literature, especially postgraduates. The role is an unpaid volunteer opportunity.

We expect this to be a ten-week project requiring approximately five hours per week, but your time commitment can be flexible depending on your availability. This is an exciting opportunity for graduate students and early-career researchers to gain research experience, learn about an interesting topic, and directly participate in an impactful project. The Animal Welfare Department will provide support and, if desired, letters of experience for volunteers.

If you are interested in volunteering with us, contact Ben Stevenson at bstevenson@rethinkpriorities.org. Please share either your CV or a short statement (~4 sentences) about your experience engaging with empirical academic literature. Candidates will be invited to complete a skills assessment. We are accepting applications on a rolling basis and will update this listing when we are no longer accepting applications. Please reach out to Ben if you have any questions. If you know anybody who might be interested, please forward this opportunity to them!
Very quick thoughts on setting time aside for strategy, planning, and implementation, since I'm in my 4th week of strategy development and experiencing intrusive thoughts about needing to hurry up on implementation:
* I have a 52-week LTFF grant to do movement building in Australia (AI safety).
* I have set aside 4.5 weeks for research (interviews + landscape review + maybe a survey) and strategy development (segmentation, targeting, positioning).
* Then 1.5 weeks for planning (content, events, educational programs), during which I will get feedback from others on the plan and then iterate it.
* This leaves me with 46/52 weeks to implement ruthlessly.
In conclusion, 6 weeks on strategy and planning seems about right. 2 weeks would have been too short, 10 weeks would have been too long; this porridge is juuuussttt rightttt. Keen for feedback from people in similar positions.

Monday, 27 May 2024

Quick takes

EA organizations frequently ask people to run criticism by them ahead of time. I've been wary of the push for this norm. My big concerns were that orgs wouldn't comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data. I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn't quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14, 10 had replied by the start of the next day, and more than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive. It's hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn't worked as hard at fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I'd realized ahead of time that I was going to cut an example. Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k's mention was negative.

I didn't keep as close track of changes, but at a minimum the replies led to 2 examples being removed entirely, 2 clarifications, and some additional information that made the post better. So overall I'm very glad I solicited comments, and I found the process easier than expected.
I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone. I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.
If evolutionary biology metaphors for social epistemology are your cup of tea, you may find this discussion I had with ChatGPT interesting. 🍵 (Also, sorry for not optimizing this; I rarely find time to write anything publishable, so I thought sharing as-is was better than not sharing at all. I recommend the footnotes btw!)

Glossary/metaphors
* Howea palm trees ↦ EA community
* Wind-pollination ↦ "panmictic communication"
* Sympatric speciation ↦ horizontal segmentation
* Ecological niches ↦ "epistemic niches"
* Inbreeding depression ↦ echo chambers
* Outbreeding depression (and Baker's law) ↦ "Zollman-like effects"
  * At least sorta. There's a host of mechanisms mostly sharing the same domain and effects with the more precisely-defined Zollman effect, and I'm saying "Zollman-like" to refer to the group of them. Probably I should find a better word.

Background

Once upon a time, the common ancestor of the palm trees Howea forsteriana and Howea belmoreana on Lord Howe Island would pollinate each other more or less uniformly during each flowering cycle. This was "panmictic" because everybody was equally likely to mix with anybody else. Then, on a beautifwl sunny morning smack in the middle between New Zealand and Australia, the counterfactual descendants had had enough. Due to varying soil profiles on the island, they all had to compromise between fitness for each soil type—or purely specialize in one and accept the loss of all seeds which landed on the wrong soil.

"This seems inefficient," one of them observed. A few of them nodded in agreement and conspired to gradually desynchronize their flowering intervals from their conspecifics, so that they would primarily pollinate each other rather than having to uniformly mix with everybody. They had created a cline. And a cline, once established, permits the gene pools of the assortatively-pollinating palms to further specialize toward different mesa-niches within their original meta-niche. Given that a crossbreed between palms adapted for different soil types is going to be less adaptive for either niche,[1] you have a positive feedback cycle where they increasingly desynchronize (to minimize crossbreeding) and increasingly specialize. Solve for the general equilibrium and you get sympatric speciation.[2]

Notice that their freedom to specialize toward their respective mesa-niches is proportional to their reproductive isolation (or inversely proportional to the gene flow between them). The more panmictic they are, the more selection-pressure there is on them to retain 1) genetic performance across the population-weighted distribution of all the mesa-niches in the environment, and 2) cross-compatibility with the entire population (since you can't choose your mates if you're a wind-pollinating palm tree).[3]

From evo bio to socioepistemology

> I love this as a metaphor for social epistemology, and the potential detrimental effects of "panmictic communication". Sorta related to the Zollman effect, but more general. If you have an epistemic community that is trying to grow knowledge about a range of different "epistemic niches", then widespread pollination (communication) is obviously good because it protects against e.g. inbreeding depression of local subgroups (e.g.
echo chambers, groupthink, etc.), and because researchers can coordinate to avoid redundant work, and because ideas tend to inspire other ideas; but it can also be detrimental because researchers who try to keep up with the ideas and technical jargon being developed across the community (especially related to everything that becomes a "hot topic") will have less time and relative curiosity to specialize in their focus area ("outbreeding depression").
>
> A particularly good example of this is the effective altruism community. Given that they aspire to prioritize between all the world's problems, and due to the very high-dimensional search space generalized altruism implies, and due to how tight-knit the community's discussion fora are (the EA Forum, LessWrong, EAGs, etc.), they tend to learn an extremely wide range of topics. I think this is awesome, and usually produces better results than narrow academic fields, but nonetheless there's a tradeoff here.
>
> The rather untargeted gene-flow implied by wind-pollination is a good match for the mostly-online meme-flow of the EA community. You might think that EAs will adequately speciate and evolve toward subniches due to the intractability of keeping up with everything, and indeed there are many subcommunities that branch into different focus areas. But if you take cognitive biases into account, and the constant desire people have to be *relevant* to the largest audience they can find (preferential attachment wrt hot topics), plus fear-of-missing-out, and fear of being "caught unaware" of some newly-developed jargon (causing people to spend time learning everything that risks being mentioned in live conversations[4]), it's unlikely that they couldn't benefit from smarter and more fractal ways to specialize their niches. Part of that may involve more "horizontally segmented" communication.

Tagging @Holly_Elmore because evobio metaphors are definitely your cup of tea, and a lot of it is inspired by stuff I first learned from you. Thanks! : )

1. ^ Think of it like... if you're programming something based on the assumption that it will run on Linux xor Windows, it's gonna be much easier to reach a given level of quality compared to if you require it to be cross-compatible.
2. ^ Sympatric speciation is rare because the pressure to be compatible with your conspecifics is usually quite high (Allee effects ↦ network effects). But it is still possible once selection-pressures from "disruptive selection" exceed the "heritage threshold" relative to each mesa-niche.[5]
3. ^ This homogenization of evolutionary selection-pressures is akin to markets converging to an equilibrium price. It too depends on panmixia of customers and sellers for a given product. If customers are able to buy from anybody anywhere, differential pricing (i.e. trying to sell your product at above or below equilibrium price for a subgroup of customers) becomes impossible.
4. ^ This is also known (by me and at least one other person...) as the "jabber loop":
> This highlights the utter absurdity of being afraid of having our ignorance exposed, and going 'round judging each other for what we don't know. If we all worry overmuch about what we don't know, we'll all get stuck reading and talking about stuff in the Jabber loop. The more of our collective time we give to the Jabber loop, the more unusual it will be to be ignorant of what's in there, which means the social punishments for Jabber-ignorance will get even harsher.
5.
^ To take this up a notch: sympatric speciation occurs when a cline in the population extends across a separatrix (red) in the dynamic landscape, and the attractors (blue) on each side overpower the cohering forces from Allee effects (orange). This is the doodle I drew on a post-it note to illustrate that pattern in a different context: I dub him the mascot of bullshit-math. Isn't he pretty?
How useful are pre-university students' collations of research papers in biorisk? I've been working on some papers (for fun) collating research in the biosafety field, but I obviously have no experience/degrees and it is secondary analysis. How useful would posting these 'rough' papers be? They mainly focus on antibiotic resistance, biosafety, and pandemic risk from gain-of-function research.

Saturday, 25 May 2024

Quick takes

Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?
Hi there! I would like to share my thoughts on what money represents. My thesis is that, other than the obvious function of giving us a sense of security here and now, or inflating our ego by having it, its functionality in the present is quite limited. What I mean by that is:
* I can check my account balance on my computer. In this scenario its function is to "turn on specific pixels on the monitor".
* If I were extremely rich, I could build something from a physical pile of it.
* I can also manipulate a banknote by putting it in my wallet or by paying with it.
And it is this last activity, the payment, which triggers the true function of money: the representation of future possibilities. In other words, money becomes a manifestation of future possibilities in the present only when we transfer it and receive something in return. What is even more interesting is that this function lies deeper than the psychological functions mentioned before: if money didn't represent future possibilities, we wouldn't feel safe by having it, and money wouldn't have much of an influence on our ego, other than maybe giving us the joy of building a castle from a pile of it. I found this insight interesting, hence I decided to post it here, and as always I'm interested in your thoughts on it. Cheers!

Friday, 24 May 2024

Quick takes

"UNICEF delivered over 43,000 doses of the R21/Matrix-M malaria vaccine by air to Bangui, Central African Republic, today, with more than 120,000 doses to follow in the next days. " - Link Pretty rookie numbers, need to scale. To be seen how this translates to actual distribution and acceptance. But sure did feel good to read the news, so thought I'd share! No takes yet, feel free to add. Also, "Around 4.33 million doses of RTS,S have been delivered to 8 countries so far – Benin, Burkina Faso, Cameroon, Ghana, Kenya, Liberia, Malawi, and Sierra Leone". 
I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research in interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet. My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.
Would you agree that the general fear of AI is a type of repressed and transferred inherent fear of God? EDIT (to add more context): My thinking was as follows.
Premises:
A) We have an innate fear of things that we do not understand and that are beyond us, because it gave us an evolutionary advantage.
B) We created God to personify this fear and thus have some control over it.
C) We live in times when many people have intellectually denied belief in the supernatural, but the fear remains.
D) A vision of AGI has emerged that has many features in common with the image of God.
Conclusion: The general fear of Artificial Intelligence is largely a repressed and transferred fear of God.

Thursday, 23 May 2024

Quick takes

I'll post some extracts from the commitments made at the Seoul Summit. I can't promise that this will be a particularly good summary (I was originally just writing this for myself), but maybe it's helpful until someone publishes something more polished:

Frontier AI Safety Commitments, AI Seoul Summit 2024

The major AI companies have agreed to the Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks: "internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sharing; to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; to incentivize third-party discovery and reporting of issues and vulnerabilities; to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated; to publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use; to prioritize research on societal risks posed by frontier AI models and systems; and to develop and deploy frontier AI models and systems to help address the world's greatest challenges"

"Risk assessments should consider model capabilities and the context in which they are developed and deployed" - I'd argue that the context in which a model is deployed should take into account whether it is open or closed source/weights, as open-source/weights models can be subsequently modified.

"They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk." - always great to make policy concrete.

"In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds." - very important that when this is applied, the ability to iterate on open-source/weight models is taken into account.

https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024

Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders' Session

Signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America.

"We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations" - guess we should now go full-throttle and push for the creation of national AI safety institutes.

"We recognise the importance of interoperability between AI governance frameworks" - useful for arguing we should copy things that have been implemented overseas.

"We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments." - important, as frontier AI needs to be treated as different from regular AI.
https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-declaration-for-safe-innovative-and-inclusive-ai-by-participants-attending-the-leaders-session-ai-seoul-summit-21-may-2024

Seoul Statement of Intent toward International Cooperation on AI Safety Science

Signed by the same countries.

"We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems" - similar to what we listed above, but more specifically focused on AI Safety Institutes, which is great.

"We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety" - really good! We don't just want AIS Institutes to run current evaluation techniques on a bunch of models, but to be actively contributing to the development of AI safety as a science.

"We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety" - very important for them to share research among each other.

https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex

Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity

Signed by: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union.

"It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future" - considering future risks is a very basic, but core principle.

"Interpretability and explainability" - happy to see interpretability explicitly listed.

"Identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations" - important work, but could backfire if done poorly.

"Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors."
- sensible, we need to ensure that the risks of open-sourcing and open-weight models are considered in terms of the 'deployment context' and 'foreseeable uses and misuses' "Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks," - very pleased to see a focus beyond just deployment "We further recognise that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organisations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight" - this is massive. There was a real risk that these issues were going to be ignored, but this is now seeming less likely. "We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security." - "Unique role", this is even better! "We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France" - even better than above b/c it commits to a specific action and timeline https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024
A life saved in a rich country is generally considered more valuable than one saved in a poor country because the value of a statistical life (VSL) rises with wealth. However, transferring a dollar to a rich country is less beneficial than transferring a dollar to a poor country because marginal utility decreases as wealth increases. So, using [$ / lives saved] is the wrong approach. We should use [$ / (lives saved * VSL)] instead. This means GiveDirectly might be undervalued compared to other programs that save lives. Can someone confirm if this makes sense?
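As a rough illustration of the proposed metric, here's a minimal sketch; every number in it (program names, costs, lives saved, VSL figures) is a made-up assumption chosen only to show how the two metrics can rank the same programs differently, not a real estimate:

```python
# Illustrative sketch only: all figures below are hypothetical, not real estimates.

programs = {
    # name: (cost in $, lives saved, local VSL in $)
    "life-saving program, low-income country": (1_000_000, 200, 100_000),
    "life-saving program, high-income country": (1_000_000, 2, 10_000_000),
}

for name, (cost, lives, vsl) in programs.items():
    cost_per_life = cost / lives                  # standard $ / lives saved
    cost_per_vsl_weighted = cost / (lives * vsl)  # proposed $ / (lives saved * VSL)
    print(name)
    print(f"  $ per life saved:         {cost_per_life:>12,.0f}")
    print(f"  $ per (life saved * VSL): {cost_per_vsl_weighted:>12.4f}")
```

With these made-up numbers the low-income program looks ~100x better on plain $ per life saved, while the two come out equal once lives are weighted by local VSL; whether that second weighting is the right one to use is exactly the question being asked here.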
I published a short piece on Yann LeCun posting about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability - High Impact events and Zero Probability - High Impact events. https://www.insideaiwarfare.com/yann-versus/

Wednesday, 22 May 2024

Quick takes

Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel more connected and concerned about the wellbeing of others. I feel as though my heart has literally grown. I wanted to share this as I expect there are many others who are questioning whether to have children -- perhaps due to concerns about it limiting their positive impact, among many others. But I'm just here to say it's been beautiful, and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.  
I was reading the Charity Commission report on EV and came across this paragraph:

> During the inquiry the charity took the decision to reach a settlement agreement in relation to the repayment of funds it received from FTX in 2022. The charity made this decision following independent legal advice they had received. The charity then notified the Commission once this course of action had been taken. The charity returned $4,246,503.16 USD (stated as £3,340,021 in its Annual Report for financial year ending 30 June 2023). The Commission had no involvement in relation to the discussions and ultimate settlement agreement to repay the funds.

This seems directly in conflict with the settlement agreement between EV and FTX, which Zachary Robinson summarized as:

> First, we’re pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I’ll collectively refer to as “EV”) have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I’ll collectively refer to as “FTX”) in 2022.

These two amounts differ hugely. My guess is that this is because most of the FTX funds were received by EV US, which wasn't covered by the Charity Commission report? But I'm curious whether I am missing something.
Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone.
1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact on AI. Most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example, by having them write emails to politicians.
2. Partnerships Manager - the president of the Voice Actors guild reached out to me recently. We had a surprising amount of crossover in concerns and potential solutions. Voice actors are the canary in the coal mine. More unions (etc.) will follow very shortly. I imagine within a year there will be a formalised group of these different orgs advocating together.
