All posts


Saturday, 25 May 2024

Frontpage Posts

Personal Blogposts

Quick takes

Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?
Hi there! I would like to share my thoughts on what I think money represents. My thesis is that, other than the obvious functions of giving us a sense of security here and now, or inflating our ego by our having it, its functionality in the present is quite limited. What I mean by that is:
* I can check my account balance on my computer. In this scenario its function is to "turn on specific pixels on the monitor".
* If I were extremely rich, I could build something from a physical pile of it.
* I can also manipulate a banknote by putting it in my wallet or by paying with it.
And it is this last activity, the payment, which triggers the true function of money: the representation of future possibilities. In other words, money becomes a manifestation of future possibilities in the present only when we transfer it and receive something in return. What is even more interesting is that this function lies deeper than the psychological functions mentioned before: if money didn't represent future possibilities, we wouldn't feel safe by having it, and money wouldn't have much influence on our ego, other than maybe giving us the joy of building a castle from a pile of it. I found this insight interesting, hence decided to post it here, and as always I'm interested in your thoughts. Cheers!

Friday, 24 May 2024

Frontpage Posts

Personal Blogposts

Quick takes

"UNICEF delivered over 43,000 doses of the R21/Matrix-M malaria vaccine by air to Bangui, Central African Republic, today, with more than 120,000 doses to follow in the next days. " - Link Pretty rookie numbers, need to scale. To be seen how this translates to actual distribution and acceptance. But sure did feel good to read the news, so thought I'd share! No takes yet, feel free to add. Also, "Around 4.33 million doses of RTS,S have been delivered to 8 countries so far – Benin, Burkina Faso, Cameroon, Ghana, Kenya, Liberia, Malawi, and Sierra Leone". 
I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research in interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet. My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.
Would you agree that the general fear of AI is a type of repressed and transferred inherent fear of God? EDIT (to add more context): My thinking was as follows. Premises:
A) We have an innate fear of things that we do not understand and that are beyond us, because it gave us an evolutionary advantage.
B) We created God to personify this fear and thus have some control over it.
C) We live in times when many people have intellectually denied belief in the supernatural, but the fear remains.
D) A vision of AGI has emerged that has many features in common with the image of God.
Conclusion: The general fear of Artificial Intelligence is largely a repressed and transferred fear of God.

Thursday, 23 May 2024

Frontpage Posts

Personal Blogposts


Quick takes

I'll post some extracts from the commitments made at the Seoul Summit. I can't promise that this will be a particularly good summary; I was originally just writing this for myself, but maybe it's helpful until someone publishes something more polished:

Frontier AI Safety Commitments, AI Seoul Summit 2024

The major AI companies have agreed to the Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks:

> "internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sharing; to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; to incentivize third-party discovery and reporting of issues and vulnerabilities; to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated; to publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use; to prioritize research on societal risks posed by frontier AI models and systems; and to develop and deploy frontier AI models and systems to help address the world’s greatest challenges"

> "Risk assessments should consider model capabilities and the context in which they are developed and deployed"

I'd argue that the context in which a model is deployed should take into account whether it is open or closed source/weights, as open-source/weights models can be subsequently modified.

> "They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk."

Always great to make policy concrete.

> "In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds."

Very important that when this is applied, the ability to iterate on open-source/weight models is taken into account.

https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024

Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders' Session

Signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America.

> "We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations"

Guess we should now go full-throttle and push for the creation of national AI Safety Institutes.

> "We recognise the importance of interoperability between AI governance frameworks"

Useful for arguing we should copy things that have been implemented overseas.

> "We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments."

Important, as frontier AI needs to be treated as different from regular AI.

https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-declaration-for-safe-innovative-and-inclusive-ai-by-participants-attending-the-leaders-session-ai-seoul-summit-21-may-2024

Seoul Statement of Intent toward International Cooperation on AI Safety Science

Signed by the same countries.

> "We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems"

Similar to the above, but more specifically focused on AI Safety Institutes, which is great.

> "We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety"

Really good! We don't just want AI Safety Institutes to run current evaluation techniques on a bunch of models, but to be actively contributing to the development of AI safety as a science.

> "We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety"

Very important for them to share research with each other.

https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex

Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity

Signed by: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union.

> "It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future"

Considering future risks is a very basic, but core, principle.

> "Interpretability and explainability"

Happy to see interpretability explicitly listed.

> "Identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations"

Important work, but could backfire if done poorly.

> "Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors."

Sensible; we need to ensure that the risks of open-sourcing and open-weight models are considered in terms of the 'deployment context' and 'foreseeable uses and misuses'.

> "Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks,"

Very pleased to see a focus beyond just deployment.

> "We further recognise that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organisations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight"

This is massive. There was a real risk that these issues were going to be ignored, but that is now seeming less likely.

> "We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security."

"Unique role": this is even better!

> "We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France"

Even better than the above, because it commits to a specific action and timeline.

https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024
A life saved in a rich country is generally considered more valuable than one saved in a poor country because the value of a statistical life (VSL) rises with wealth. However, transferring a dollar to a rich country is less beneficial than transferring a dollar to a poor country because marginal utility decreases as wealth increases. So, using [$ / lives saved] is the wrong approach. We should use [$ / (lives saved * VSL)] instead. This means GiveDirectly might be undervalued compared to other programs that save lives. Can someone confirm if this makes sense?
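A minimal sketch of the comparison proposed above, using entirely made-up numbers for cost, lives saved, and VSL (the program names and figures are illustrative placeholders, not real charity data); it shows how the two metrics can rank the same programs differently:

```python
# Illustrative only: hypothetical programs and made-up VSL figures.
programs = {
    # name: (cost in $, lives saved, assumed VSL in $ for that population)
    "Program A (low-income country)": (1_000_000, 200, 5_000),
    "Program B (high-income country)": (1_000_000, 50, 100_000),
}

for name, (cost, lives, vsl) in programs.items():
    cost_per_life = cost / lives                 # the usual [$ / lives saved] metric
    cost_per_vsl_dollar = cost / (lives * vsl)   # the proposed [$ / (lives saved * VSL)] metric
    print(f"{name}: ${cost_per_life:,.0f} per life saved, "
          f"{cost_per_vsl_dollar:.2f} dollars of cost per dollar of VSL-weighted value")
```

Under these made-up numbers, Program A looks four times better on $/life saved, while Program B looks five times better on the VSL-weighted metric, which is the kind of re-ranking the question is asking about.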
I published a short piece on Yann LeCun posting about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability - High Impact events and Zero Probability - High Impact events. https://www.insideaiwarfare.com/yann-versus/

Topic Page Edits and Discussion

Wednesday, 22 May 2024

Frontpage Posts

Quick takes

Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel more connected to and concerned about the wellbeing of others. I feel as though my heart has grown. I wanted to share this as I expect there are many others who are questioning whether to have children -- perhaps due to concerns about it limiting their positive impact, among other reasons. But I'm just here to say it's been beautiful and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.
I was reading the Charity Commission report on EV and came across this paragraph:

> During the inquiry the charity took the decision to reach a settlement agreement in relation to the repayment of funds it received from FTX in 2022. The charity made this decision following independent legal advice they had received. The charity then notified the Commission once this course of action had been taken. The charity returned $4,246,503.16 USD (stated as £3,340,021 in its Annual Report for financial year ending 30 June 2023). The Commission had no involvement in relation to the discussions and ultimate settlement agreement to repay the funds.

This seems directly in conflict with the settlement agreement between EV and FTX, which Zachary Robinson summarized as:

> First, we’re pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I’ll collectively refer to as “EV”) have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I’ll collectively refer to as “FTX”) in 2022.

These two amounts differ hugely. My guess is that this is because most of the FTX funds were received by EV US, which isn't covered by the Charity Commission report? But curious whether I am missing something.
Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone.
1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact on AI. Most won't know how to. A volunteer coordinator will help capture and direct their efforts positively, for example, by having them write emails to politicians.
2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently. We had a surprising amount of crossover in concerns and potential solutions. Voice actors are the canary in the coal mine. More unions (etc.) will follow very shortly. I imagine within 1 year there will be a formalised group of these different orgs advocating together.

Topic Page Edits and Discussion

Tuesday, 21 May 2024

Frontpage Posts

Quick takes

I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.
* How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead weighing investor concerns against safety concerns is the root of the problem, we should be worried about whether peer organizations are going to be pushed down the same path sooner or later.
* Are there any concerns we have with OpenAI that we should be taking this opportunity to put to its peers as well? For example, have peers been publicly asked if they use non-disparagement agreements? I can imagine a situation where another org has really just never thought to use them, and we can use this occasion to encourage them to turn that into a public commitment.
I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:
1. Non-EAs
2. Organisers
3. Existing members of the community
Per target group, I'd say it has the following main activities:
* Targeting non-EAs, it does comms and education (the VP programme).
* Targeting organisers, you have the work of the groups team.
* Targeting existing members, you have the events team, the forum team, and community health.
Per target group, these activities are aiming for the following short-term outcomes:
* Targeting non-EAs, it doesn't aim to raise awareness of EA, but instead aims to ensure people have an accurate understanding of what EA is.
* Targeting organisers, it aims to improve their ability to organise.
* Targeting existing members, it aims to improve information flow (through EAG(x) events, the forum, newsletters, etc.) and maintain a healthy culture (through community health work).
If you're interested, you can see EA Netherlands' theory of change here.
In food ingredient labeling, some food items are exempt from bearing a list of ingredients. E.g., Article 19 of the relevant EU regulation:
> 1. The following foods shall not be required to bear a list of ingredients:
> 1. fresh fruit and vegetables, including potatoes, which have not been peeled, cut or similarly treated;
> 2. carbonated water, the description of which indicates that it has been carbonated;
> 3. fermentation vinegars derived exclusively from a single basic product, provided that no other ingredient has been added;
> 4. cheese, butter, fermented milk and cream, to which no ingredient has been added other than lactic products, food enzymes and micro-organism cultures essential to manufacture, or in the case of cheese other than fresh cheese and processed cheese the salt needed for its manufacture;
> 5. foods consisting of a single ingredient, where:
> 1. the name of the food is identical to the ingredient name; or
> 2. the name of the food enables the nature of the ingredient to be clearly identified.
An interesting regulatory intervention to promote replacement of animal products could be to either require expansion of the details on these animal products (seems unlikely, but may be possible to push from a health perspective) or to also similarly exempt key alt proteins. fyi: @vicky_cox
Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation. Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance. Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less that week, 25-75% of the benefit rate if you worked 11-30 hours, and 0% if you worked more than 30 hours.[1] New York's definition of work is really broad and includes "any activity that brings in or may bring in income at any time must be reported as work... even if you were not paid". Specifically, "A working interview, where a prospective employer asks you to work - with or without pay - to demonstrate that you can do the job" is considered work.[1] Depending on the details of the work test, it may or may not count as work under your state's rules, meaning that if it is unpaid, you are losing money by doing it. If so, consider asking for remuneration for the time you spend on the work test to offset the unemployment money you'd be giving up by doing it. Note, however, that getting paid may also reduce the amount of unemployment benefits you are eligible for (though not necessarily dollar for dollar). 1. ^ Unemployment Insurance Claimant Handbook. NYS Department of Labor, pp. 20-21.
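For concreteness, here's a minimal sketch of the kind of hours-based benefit reduction described above. The tier boundaries follow the New York example in the take (full benefit at 10 hours or less, a partial rate from 11 to 30 hours, nothing above 30), but the specific partial-rate steps are hypothetical placeholders, not the actual NYS schedule:

```python
def weekly_benefit(full_rate: float, hours_worked: float) -> float:
    """Illustrative hours-based reduction of an unemployment benefit.

    Tier boundaries follow the New York example in the text; the partial-rate
    steps inside the 11-30 hour band are hypothetical, not official figures.
    """
    if hours_worked <= 10:
        return full_rate                 # full benefit
    elif hours_worked <= 20:
        return 0.75 * full_rate          # hypothetical partial rate
    elif hours_worked <= 30:
        return 0.25 * full_rate          # hypothetical partial rate
    else:
        return 0.0                       # more than 30 hours: no benefit

# Example: a 6-hour unpaid work test on top of 8 hours of other work
# pushes you from the <=10-hour tier into a partial-rate tier.
print(weekly_benefit(500.0, 8))        # 500.0
print(weekly_benefit(500.0, 8 + 6))    # 375.0 under these hypothetical steps
```

The point is just that an unpaid work test can move you across a tier boundary; the actual dollar impact depends on your state's real schedule.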

Monday, 20 May 2024

Frontpage Posts


Quick takes

Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else. I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future. To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly sane and reasonable in at least a limited form as a way to prevent leaking trade secrets, and non-disparagement agreements, which prevent you from saying bad things about past employers. The latter seems clearly bad to have for anybody in a position to affect policy. Doubly so if the existence of the non-disparagement agreement is itself secret.
Draft guidelines for new topic tags (feedback welcome) Topics (AKA wiki pages[1] or tags[2]) are used to organise Forum posts into useful groupings. They can be used to give readers context on a debate that happens only intermittently (see Time of Perils), collect news and events which might interest people in a certain region (see Greater New York City Area), collect the posts by an organisation, or, perhaps most importantly, collect all the posts on a particular subject (see Prediction Markets).  Any user can submit and begin using a topic. They can do this most easily by clicking “Add topic” on the topic line at the top of any post. However, before being permanently added to our list of topics, all topics are vetted by the Forum facilitation team. This quick take outlines some requirements and suggestions for new topics to make this more transparent. Similar, more polished, advice will soon be available on the 'add topic' page. Please give feedback if you disagree with any of these requirements.  When you add a new topic, ensure that: 1. The topic, or a very similar topic, does not already exist. If a very similar topic already exists, consider adding detail to that topic wiki page rather than creating a new topic.  2. You have used your topic to tag at least three posts by different authors (not including yourself). You will have to do this after creating the topic. The topic must describe a central theme in each post. If you cannot yet tag three relevant posts, the Forum probably doesn’t need this topic yet.  3. You’ve added at least a couple of sentences to define the term and explain how the topic tag should be used.    Not fulfilling these requirements is the most likely cause of a topic rejection. In particular, many topics are written with the aim of establishing a new term or idea, rather than collecting terms and ideas which already exist on the Forum. Other examples of rejected topics include: * Topic pages created for an individual. In certain cases, we permit these tags, for example, if the person is associated with a philosophy or set of ideas that is often discussed (see Peter Singer) and which can be clearly picked out by their name. However, in most cases, we don’t want tags for individuals because there would be far too many, and posts about individuals can generally be found through search without using tags. * Topics which are applicable to posts on the EA Forum, but which aren’t used by Forum users. For example, many posts could technically be described as “Risk Management”. However, EA forum users use other terms to refer to risk management content. 1. ^ Technically there can be a wiki page without a topic tag, i.e. a wiki page that cannot be applied to a post. However we don’t really use these, so in practice the terms are interchangeable. 2. ^ This term is used more informally. It is easier to say “I’m tagging this post” than “I’m topic-ing this post”
I spent way too much time organizing my thoughts on AI loss-of-control ("x-risk") debates without any feedback today, so I'm publishing perhaps one of my favorite snippets/threads:

A lot of debates seem to boil down to under-acknowledged and poorly framed disagreements about questions like "who bears the burden of proof." For example, some skeptics say "extraordinary claims require extraordinary evidence" when dismissing claims that the risk is merely "above 1%", whereas safetyists argue that having >99% confidence that things won't go wrong is the "extraordinary claim that requires extraordinary evidence."

I think that talking about "burdens" might be unproductive. Instead, it may be better to frame the question more like "what should we assume by default, in the absence of definitive 'evidence' or arguments, and why?" "Burden" language is super fuzzy (and seems a bit morally charged), whereas this framing at least forces people to acknowledge that some default assumptions are being made and to consider why.

To address that framing, I think it's better to ask and answer questions like "What reference class does 'building AGI' belong to, and what are the base rates of danger for that reference class?" This framing at least pushes people to make explicit claims about what reference class building AGI belongs to, which should make it clearer that it doesn't belong in an "all technologies ever" reference class.

In my view, the "default" estimate should not be "roughly zero until proven otherwise," especially given that there isn't consensus among experts, and given the overarching narrative that "intelligence proved really powerful in humans, misalignment even among humans is quite common (and is already often observed in existing models), and we often don't get technologies right on the first few tries."
Working questions A mental technique I’ve been starting to use recently: “working questions.” When tackling a fuzzy concept, I’ve heard of people using “working definitions” and “working hypotheses.” Those terms help you move forward on understanding a problem without locking yourself into a frame, allowing you to focus on other parts of your investigation. Often, it seems to me, I know I want to investigate a problem without being quite clear on what exactly I want to investigate. And the exact question I want to answer is quite important! And instead of needing to be precise about the question from the beginning, I’ve found it helpful to think about a “working question” that I’ll then refine into a more precise question as I move forward. An example: “something about the EA Forum’s brand/reputation” -> “What do potential writers think about the costs and benefits of posting on the Forum?” -> “Do writers think they will reach a substantial fraction of the people they want to reach, if they post on the EA Forum?”
I find it encouraging that EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies (c.f. Why Not Slow AI Progress?). Previously, I worried that social/professional entanglements and image concerns would lead EAs to align with AI companies even after receiving clear signals that AI companies are not interested in safety. I'm glad to have been wrong about that. Caveat: we've only seen this kind of scrutiny applied to OpenAI and it remains to be seen whether Anthropic and DeepMind will get the same scrutiny.

Saturday, 18 May 2024

Personal Blogposts

Quick takes

I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22. Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.
Most possible goals for AI systems are concerned with process as well as outcomes. People talking about possible AI goals sometimes seem to assume something like "most goals are basically about outcomes, not how you get there". I'm not entirely sure where this idea comes from, and I think it's wrong. The space of goals which are allowed to be concerned with process is much higher-dimensional than the space of goals which are just about outcomes, so I'd expect that on most reasonable senses of "most", process can have a look-in.

What's the interaction with instrumental convergence? (I'm asking because vibe-wise it seems like instrumental convergence is associated with an assumption that goals won't be concerned with process.)
* Process-concerned goals could undermine instrumental convergence (since some process-concerned goals could be fundamentally opposed to some of the things that would otherwise get converged-to), but many process-concerned goals won't.
* Since instrumental convergence is basically about power-seeking, there's an evolutionary argument that you should expect the systems which end up with the most power to have power-seeking behaviours.
  * I actually think there are a couple of ways for this argument to fail:
    1. If at some point you get a singleton, there's now no evolutionary pressure on its goals (beyond some minimum required to stay a singleton).
    2. A social environment can punish power-seeking, so that power-seeking behaviour is not the most effective way to arrive at power.
      * (There are some complications to this I won't get into here.)
  * But even if it doesn't fail, it pushes towards things which have Omohundro's basic AI drives (and so pushes away from process-concerned goals which could preclude those), but it doesn't push all the way to purely outcome-concerned goals.

In general I strongly expect humans to try to instil goals that are concerned with process as well as outcomes. Even if that goes wrong, I mostly expect them to end up with something which has incorrect preferences about process, not something that doesn't care about process.

How could you get to purely outcome-concerned goals? I basically think this should be expected just if someone makes a deliberate choice to aim for that (though that might be possible via self-modification; the set of goals that would choose to self-modify to be purely outcome-concerned may be significantly bigger than the set of purely outcome-concerned goals).

Overall I think purely outcome-concerned goals (or almost purely outcome-concerned goals) are a concern and worth further consideration, but I really don't think they should be treated as a default.
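To make the dimensionality claim in the first paragraph concrete, here's a small toy count (my own illustrative framing, not the author's): model a goal as a real-valued utility over either final states only, or over whole trajectories. With a hypothetical number of states and a hypothetical horizon, outcome-only goals live in a space whose dimension is the number of states, while trajectory-sensitive (process-allowing) goals live in a space whose dimension is that number raised to the horizon:

```python
# Toy illustration: dimension of "outcome-only" vs "process-allowed" goal spaces.
# A goal is modeled as a utility function; these numbers are purely illustrative.
num_states = 10   # hypothetical number of distinct world states
horizon = 5       # hypothetical trajectory length

outcome_only_dim = num_states                 # utility depends only on the final state
process_allowed_dim = num_states ** horizon   # utility may depend on the whole trajectory

print(outcome_only_dim)      # 10
print(process_allowed_dim)   # 100000
```

Even in this tiny toy setting, goals that are allowed to care about how you got there vastly outnumber (in dimension) goals that only care about where you end up, which is the sense in which "most" goals can be process-concerned.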
Are there currently any safety-conscious people on the OpenAI Board?
In the past few weeks, I spoke with several people interested in EA and wondered: What do others recommend in this situation in terms of media to consume first (books, blog posts, podcasts)? Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data? Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldn’t this tailored approach make promoting EA among friends and acquaintances more effective and engaging?
Swapcard tips:
1. The mobile browser is more reliable than the app. You can use Firefox/Safari/Chrome etc. on your phone, go to swapcard.com and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code that you need when signing in when you first get to the venue and pick up your badge.
2. Only what you put in the 'Biography' section of the 'About Me' part of your profile is searchable in Swapcard. The other fields, like 'How can I help others' and 'How can others help me', appear when you view someone's profile, but are not used by Swapcard search. This is another reason to use the Swapcard Attendee Google sheet that is linked to in Swapcard to search.
3. You can use a (local!) LLM to find people to connect with. People might not want their data uploaded to a commercial large language model, but if you can run an open-source LLM locally, you can load the Attendee Google sheet and use it to help you find useful contacts (see the sketch below).
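A minimal sketch of tip 3, assuming you've exported the attendee sheet as a CSV and are running a local model behind an Ollama server at its default address; the file name, model name, column names, and the query text are all hypothetical placeholders:

```python
import csv
import requests  # pip install requests

# Hypothetical export of the Swapcard attendee Google sheet.
with open("attendees.csv", newline="", encoding="utf-8") as f:
    attendees = list(csv.DictReader(f))

# Keep the prompt small: only a few fields per attendee (column names are assumed).
profiles = "\n".join(
    f"- {row.get('Name', '')}: {row.get('Biography', '')}" for row in attendees[:200]
)

prompt = (
    "I work on AI policy and want to meet people doing technical evaluations.\n"
    "From the attendee list below, suggest 10 people I should reach out to and why.\n\n"
    + profiles
)

# Ollama's local generate endpoint; nothing leaves your machine.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```

Any local setup works; the key point from the take is that the attendee data stays on your machine rather than going to a hosted model.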
