All posts


Saturday, 24 February 2024


Quick takes

In some circles that I frequent, I've gotten the impression that a decent fraction of existing rhetoric around AI has become pretty emotionally charged. And I'm worried about the presence of what I perceive as demagoguery regarding the merits of AI capabilities and AI safety. Out of a desire to avoid calling out specific people or statements, I'll just discuss a hypothetical example for now.

Suppose an EA says, "I'm against OpenAI's strategy for straightforward reasons: OpenAI is selfishly gambling everyone's life in a dark gamble to make themselves immortal." Would this be a true, non-misleading statement? Would it likely convey the speaker's genuine beliefs about why they think OpenAI's strategy is bad for the world?

To begin to answer these questions, we can consider the following observations:

1. AI powerful enough to end the world would presumably also be powerful enough to do lots of incredibly positive things, such as reducing global mortality and curing diseases. By delaying AI, we are therefore equally "gambling everyone's life" by forcing people to face ordinary mortality.
2. Selfish motives can be, and frequently are, aligned with the public interest. For example, Jeff Bezos was very likely motivated by selfish desires in his accumulation of wealth, but building Amazon nonetheless benefitted millions of people in the process. Such win-win situations are common in business, especially when developing technologies.

Because AI has the potential to pose great risks and to deliver great benefits, there are plenty of plausible pro-social arguments one can give for favoring OpenAI's strategy of pushing forward with AI. Therefore, it seems pretty misleading to me to frame their mission as a dark and selfish gamble, at least on a first impression.

Here's my point: depending on the speaker, I frequently think their actual reason for being against OpenAI's strategy is not that they think OpenAI is undertaking a dark, selfish gamble. Instead, it's often just standard strong longtermism. A less misleading statement of their view would go something like this: "I'm against OpenAI's strategy because I think potential future generations matter more than the current generation of people, and OpenAI is endangering future generations in their gamble to improve the lives of people who currently exist."

I claim this statement would—at least in many cases—be less misleading than the first one because it captures a major genuine crux of the disagreement: whether you think potential future generations matter more than currently existing people. It also omits the "selfish" accusation, which I think is often just a red herring designed to mislead: we don't normally accuse someone of being selfish when they do a good thing, even if the accusation is literally true.

(There can, of course, be further cruxes, such as your p(doom), your timelines, your beliefs about the normative value of unaligned AIs, and so on. But at the very least, a longtermist preference for future generations over currently existing people seems like a huge, actual crux for many people in this debate, once they work through these things carefully.)

Here's why I care about discussing this. I admit that I care a substantial amount—not overwhelmingly, but hardly insignificantly—about currently existing people. I want to see people around me live long, healthy, and prosperous lives, and I don't want to see them die.
And indeed, I think advancing AI could greatly help currently existing people. As a result, I find it pretty frustrating to see people use what I perceive to be essentially demagogic tactics designed to sway people against AI, rather than plainly stating the cruxes behind why they actually favor the policies they do.

These allegedly demagogic tactics include:

1. Highlighting the risks of AI to argue against development while systematically omitting the potential benefits, thereby obscuring a more comprehensive assessment of one's preferred policies.
2. Highlighting random, extraneous drawbacks of AI development that you wouldn't ordinarily care much about in other contexts when discussing innovation, such as the potential for job losses from automation. A lot of the time, this type of rhetoric looks to me like deceptively searching for whatever arguments will persuade, rather than honestly explaining one's perspective.
3. Conflating, or at least strongly associating, the selfish motives of people who work at AI firms with those firms' allegedly harmful effects. This rhetoric plays on public prejudices by appealing to a widespread but false belief that selfish motives are usually suspicious, or can't translate into pro-social results. In fact, there is no contradiction in thinking that most people at OpenAI are in it for the money, status, and fame, and also that what they're doing is good for the world, and that they genuinely believe it is.

I'm against these tactics for a variety of reasons, but one of the biggest is that they can, in some cases, indicate a degree of dishonesty, depending on the context. And I'd really prefer EAs to focus on being almost-maximally truth-seeking in both their beliefs and their words.

Speaking more generally—to drive one of my points home a little more—I think there are roughly three possible views you could have about pushing for AI capabilities relative to pushing for pausing or more caution:

1. Full-steam-ahead view: We should accelerate AI at any and all costs. We should oppose any regulations that might impede AI capabilities, and embark on a massive spending spree to accelerate AI capabilities.
2. Full-safety view: We should try as hard as possible to shut down AI right now, and thwart any attempt to develop AI capabilities further, while simultaneously embarking on a massive spending spree to accelerate AI safety.
3. Balanced view: We should support a substantial mix of both safety and acceleration efforts, attempting to carefully balance the risks and rewards of AI development so that we can seize the benefits of AI without bearing intolerably high costs.

I tend to think most informed people, when pushed, advocate the third view, albeit with wide disagreement about the right mix of support for safety and acceleration. Yet on a superficial level—the level of rhetoric—I find that the first and second views are surprisingly common. On this level, I tend to find e/accs in the first camp and a large fraction of EAs in the second. But if your actual beliefs are something like the third view, I think that's an important fact to emphasize in honest discussions about what we should do with AI. If your rhetoric is consistently aligned with (1) or (2) but your actual beliefs are aligned with (3), I think that can often be misleading.
And it can be especially misleading if you're trying to publicly paint other people in the same camp—the third one—as somehow having bad motives merely because they advocate a moderately higher mix of acceleration over safety efforts than you do, or vice versa.
Some people seem to think the risk from AI comes from AIs gaining dangerous capabilities, like situational awareness. I don't really agree. I view the main risk as simply arising from the fact that AIs will be increasingly integrated into our world, diminishing human control. Under my view, the most important thing is whether AIs will be capable of automating economically valuable tasks, since this will prompt people to adopt AIs widely to automate labor. If AIs have situational awareness but aren't economically important, that's not as concerning. The risk is not so much that AIs will suddenly and unexpectedly take control of the world. It's that we will voluntarily hand over control to them anyway, and we want to make sure this handoff is managed responsibly. An untimely coup, while possible, is not necessary for things to go badly.

Thursday, 22 February 2024


Quick takes

Pandemic Prevention: All Nations Should Build Emergency Medical Stockpiles

All nations should have stockpiles of medical resources, e.g., masks, PPE, multi-purpose medicines and therapeutics, and various vaccines (smallpox, H1N1, etc.). At the slightest hint of danger, these resources should be distributed to every part of the country. There should be enough stock to protect the people for as long as is required to get resupplied.

The Australians have a national medical stockpile and started distributing masks from it in January 2020 in response to the Covid-19 outbreak. The French used to have a national stockpile of masks, but they decided it would be more 'efficient' to get rid of it, reasoning that if there was an emergency, they could just buy masks from China. Sacre bleu!

Stockpiling is a general-purpose risk management technique which also works for other emergencies such as terrorism, fires, nuclear fallout, and war. If you want to survive in the long run, you need to build stockpiles!
A potential failure mode of 80k recommending EAs work at AI labs:

1. 80k promotes a safety-related job within a leading AI lab.
2. 80k's audience (purposefully) skews toward high-prospect candidates (HPCs): smarter, richer, and better connected than average.
3. An HPC applies for and gets a safety role within the AI lab.
4. The HPC stays at the lab but moves roles.
5. Now we have a smart, rich, well-connected person no longer working in safety but in capabilities.

I think this is sufficiently important/likely that 80k should consider tracking these people over time to see whether this is a real issue.
Ambition is like fire. Too little and you go cold. But unmanaged, it leaves you burnt.


Wednesday, 21 February 2024


Quick takes

Two sources of human misalignment that may resist a long reflection: malevolence and ideological fanaticism (Alternative title: Some bad human values may resist idealization[1])

The values of some humans, even if idealized (e.g., during some form of long reflection), may be incompatible with an excellent future. Thus, solving AI alignment will not necessarily lead to utopia. Others have raised similar concerns before.[2] Joe Carlsmith puts it especially well in the post "An even deeper atheism":

> "And now, of course, the question arises: how different, exactly, are human hearts from each other? And in particular: are they sufficiently different that, when they foom, and even "on reflection," they don't end up pointing in exactly the same direction? After all, Yudkowsky said, above, that in order for the future to be non-trivially "of worth," human hearts have to be in the driver's seat. But even setting aside the insult, here, to the dolphins, bonobos, nearest grabby aliens, and so on – still, that's only to specify a necessary condition. Presumably, though, it's not a sufficient condition? Presumably some human hearts would be bad drivers, too? Like, I dunno, Stalin?"

What makes human hearts bad?

What, exactly, makes some human hearts bad drivers? If we better understood what makes hearts go bad, perhaps we could figure out how to make bad hearts good, or at least learn how to prevent hearts from going bad. It would also allow us to better spot potentially bad hearts and coordinate our efforts to prevent them from taking the driver's seat. As of now, I'm most worried about malevolent personality traits and fanatical ideologies.[3]

Malevolence: dangerous personality traits

Some human hearts may be corrupted due to elevated malevolent traits like psychopathy, sadism, narcissism, Machiavellianism, or spitefulness.

Ideological fanaticism: dangerous belief systems

There are many suitable definitions of "ideological fanaticism". Whatever definition we use, it should describe ideologies that have caused immense harm historically, such as fascism (Germany under Hitler, Italy under Mussolini), (extreme) communism (the Soviet Union under Stalin, China under Mao), religious fundamentalism (ISIS, the Inquisition), and most cults. See this footnote[4] for a preliminary list of defining characteristics.

Malevolence and fanaticism seem especially dangerous

Of course, there are other factors that could corrupt our hearts or driving ability. For example: cognitive biases, limited cognitive ability, philosophical confusions, or plain old selfishness.[5] I'm most concerned about malevolence and ideological fanaticism for two reasons.

Deliberately resisting reflection and idealization

First, malevolence—if reflectively endorsed[6]—and fanatical ideologies deliberately resist being changed and would thus plausibly resist idealization even during a long reflection. The most central characteristic of fanatical ideologies is arguably that they explicitly forbid criticism, questioning, and belief change, and view doubters and disagreement as evil.

Putting positive value on creating harm

Second, malevolence and ideological fanaticism would not only result in the future not being as good as it possibly could be—they might actively steer the future in bad directions and, for instance, result in astronomical amounts of suffering. The preferences of malevolent humans (e.g., sadists) may be such that they intrinsically enjoy inflicting suffering on others.
Similarly, many fanatical ideologies sympathize with excessive retributivism and often demonize the outgroup. Enabled by future technology, preferences for inflicting suffering on the outgroup may result in enormous disvalue—cf. concentration camps, the Gulag, or hell[7]. In the future, I hope to write more about all of this, especially long-term risks from ideological fanaticism.

Thanks to Pablo and Ruairi for comments and valuable discussions.

1. ^ "Human misalignment" is arguably a confusing (and perhaps confused) term. But it sounds more sophisticated than "bad human values".
2. ^ For example, Matthew Barnett in "AI alignment shouldn't be conflated with AI moral achievement", Geoffrey Miller in "AI alignment with humans... but with which humans?", and lc in "Aligned AI is dual use technology". Pablo Stafforini has called this the "third alignment problem". And of course, Yudkowsky's concept of CEV is meant to address these issues.
3. ^ These factors may not be clearly separable. Some humans may be more attracted to fanatical ideologies due to their psychological traits, and malevolent humans often lead fanatical ideologies. Also, believing and following a fanatical ideology may not be good for your heart.
4. ^ Below are some typical characteristics (I'm no expert in this area):
   - Unquestioning belief, absolute certainty, and rigid adherence. The principles and beliefs of the ideology are seen as absolute truth, and questioning or critical examination is forbidden.
   - Inflexibility and refusal to compromise.
   - Intolerance and hostility towards dissent. Anyone who disagrees with or challenges the ideology is seen as evil; as an enemy, traitor, or heretic.
   - Ingroup superiority and outgroup demonization. The in-group is viewed as superior, chosen, or enlightened. The out-group is often demonized and blamed for the world's problems.
   - Authoritarianism. Fanatical ideologies often endorse (or even require) a strong, centralized authority to enforce their principles and suppress opposition, potentially culminating in dictatorship or totalitarianism.
   - Militancy and willingness to use violence.
   - Utopian vision. Many fanatical ideologies are driven by a vision of a perfect future or afterlife which can only be achieved through strict adherence to the ideology. This utopian vision often justifies extreme measures in the present.
   - Use of propaganda and censorship.
5. ^ For example, Barnett argues that future technology will be primarily used to satisfy economic consumption (aka selfish desires). That seems plausible even to me; however, I'm not that concerned about this causing huge amounts of future suffering (at least compared to other s-risks). It seems to me that most humans place non-trivial value on the welfare of (neutral) others such as animals. Right now, this preference (for most people) isn't strong enough to outweigh the selfish benefits of eating meat. However, I'm relatively hopeful that future technology would make such tradeoffs much less costly.
6. ^ Some people (how many?) with elevated malevolent traits don't reflectively endorse their malevolent urges and would change them if they could. However, some of them do reflectively endorse their malevolent preferences and view empathy as weakness.
7. ^ Some quotes from famous Christian theologians:
   Thomas Aquinas: "the blessed will rejoice in the punishment of the wicked." "In order that the happiness of the saints may be more delightful to them and that they may render more copious thanks to God for it, they are allowed to see perfectly the sufferings of the damned."
   Samuel Hopkins: "Should the fire of this eternal punishment cease, it would in a great measure obscure the light of heaven, and put an end to a great part of the happiness and glory of the blessed."
   Jonathan Edwards: "The sight of hell torments will exalt the happiness of the saints forever."
I have written 7 emails to 7 politicians aiming to meet them to discuss AI safety, and already have 2 meetings. Normally I'd put this kind of post on Twitter, but I'm not on Twitter, so it is here instead. I just want people to know that if you're worried about AI safety, believe more government engagement is a good thing, and can hold a decent conversation (i.e., you understand the issue and are a good verbal/written communicator), then this could be an underrated path to high impact. Another great thing about it is that you can choose how many emails to send and how many meetings to have, so it can be done on the side of a "day job".


Tuesday, 20 February 2024

Quick takes

Mini EA Forum Update

We've updated our new user onboarding flow! You can see more details in GitHub here. In addition to making it way prettier, we're trying out adding some optional steps, including:

1. You can select topics you're interested in, to make your frontpage more relevant to you.
   - You can also click the "Customize feed" button on the frontpage - see details here.
2. You can choose some authors to subscribe to. You will be notified when an author you are subscribed to publishes a post.
   - You can also subscribe from any user's profile page.
3. You're prompted to fill in some profile information to give other users context on who you are.
   - You can also edit your profile here.

I hope that these additional optional steps help new users get more out of the Forum. We will continue to iterate on this flow based on usage and feedback - feel free to reply to this quick take with your thoughts!
Stand-up comedian in San Francisco spars with ChatGPT AI developers in the audience https://youtu.be/MJ3E-2tmC60


Monday, 19 February 2024


Quick takes

Linch:
My default story is one where government actors eventually take an increasing (and likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.
2. Takeoff speeds (from the perspective of the State) are relatively slow.
3. Timelines are moderate to long (after 2030, say).

If what I say is broadly correct, I think this has some underrated downstream implications. For example, we may currently be overestimating the role of the values or institutional processes of labs, or the value of getting governments to intervene (since the default outcome is that they'd intervene anyway). Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good). More speculatively, we may also be underestimating the value of making sure assumptions 2 and 3 hold (if you share my belief that government actors will broadly be more responsible than the existing corporate actors). Happy to elaborate if this is interesting.
Linch:
One perspective that I (and I think many other people in the AI safety space) have is that AI safety people's "main job", so to speak, is to safely hand off the reins to our value-aligned, weakly superintelligent AI successors. This involves: a) making sure the transition itself goes smoothly, and b) making sure that the first few generations of our superhuman AI successors are value-aligned with goals that we broadly endorse.

Importantly, this likely means that the details of the first superhuman AIs we make are critically important. We may not be able to, or need to, solve technical alignment or strategy in the general case. What matters most is that our first* successors are broadly aligned with our goals (as well as other desiderata).

At least for me, an implicit assumption of this model is that humans will have to hand off the reins anyway, whether by choice or by force. Without fast takeoffs, it's hard to imagine that the transition to vastly superhuman AI will primarily be brought about, or even overseen, by humans, as opposed to nearly-vastly-superhuman AI successors. Unless we don't build AGI, of course.

*In reality, this may take several generations. I imagine the first iterations of weakly superhuman AIs will make decisions alongside humans, and we may still wish to maintain some level of human-in-the-loop oversight for a while longer, even after AIs are objectively smarter than us in every way.
Frances' quick take here made me think about what skills are particularly important in my own line of work, communications. 80,000 Hours has a skill profile for communicating ideas that covers some crucial considerations when assessing fit for communications work; these are additional skills or aptitudes that I often think about when considering fit for communications work in the EA ecosystem in particular:

1. Translating between groups: Especially in an EA context, communications work can entail translating complex, nuanced ideas from one group of people into something more legible for a different group or audience. Being able to speak the language of different niche groups—like researchers or philosophers—and then translate that into a different kind of language or format proves useful, especially when communicating with audiences that are less familiar with EA. This is where having a background in or understanding of different audiences or groups can come in handy for communications work.

2. Stewardship mentality: As a communicator, you don't always represent your own ideas or original work. Often you're representing the work or ideas of others, which requires a sense of stewardship in service of representing that work or those ideas accurately and with nuance. This can look like double-checking stats or numbers before sharing a social media post, or doing further research to make sure you understand a claim or piece of research you're discussing.

3. Excitement about being in a support role: Some communicators, like social media personalities or popular bloggers, don't necessarily need this aptitude, but full-time communications roles at many organizations in the EA ecosystem do, in my opinion. Similar to having a stewardship mentality, I find it helps if you're excited about supporting the object-level work of others. Feeling jazzed about the message or impact of a particular organization or cause area probably means you'll do a better job communicating about it or supporting the communication efforts of others. Many types of communications work don't receive direct, public credit—editing, ghostwriting, coordinating, filming, etc.—but they can be just as rewarding depending on your personality.
Last week, we helped facilitate a Digital Platform Coordination call to start conversations between members of the animal advocacy movement and see where work might intersect. If anyone is involved in digital platforms, whether using existing solutions or building your own, feel free to join the conversation on Slack as we continue to coordinate and share info.

* Recap of the FAST Forum: https://forum.fastcommunity.org/posts/eThKNEZyFe8SWbwRy/digital-platform-coordination-and-call-for-participation
* Join the IAA Slack: https://tally.so/r/wkGKer
* Link to the IAA Slack #s-platforms channel: https://impactfulanimal.slack.com/archives/C06BYAYFHRV

Sunday, 18 February 2024

Quick takes

Y Combinator wants to fund Mechanistic Interpretability startups

"Understanding model behavior is very challenging, but we believe that in contexts where trust is paramount it is essential for an AI model to be interpretable. Its responses need to be explainable. For society to reap the full benefits of AI, more work needs to be done on explainable AI. We are interested in funding people building new interpretable models or tools to explain the output of existing models."

Link: https://www.ycombinator.com/rfs (scroll to 12)

What they look for in startup founders: https://www.ycombinator.com/library/64-what-makes-great-founders-stand-out
Don't forget to go to http://www.projectforawesome.com today and vote for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!
Okay, so one thing I don't get about "common sense ethics" discourse in EA is: which common sense ethical norms prevail? Different people, even in the same society, have different attitudes about what's common sense.

For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - are immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I've grown up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.)

You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sounds unethical to me, but technically it's legal and not a breach of contract.

Going further, what if you started a company, like a food delivery app, that hired contractors to do the important work and paid them subminimum wages[1], forcing them to rely on users' generosity (i.e. tips) to make a living? And then made a 40% profit margin and donated the profits to GiveWell? That also sounds unethical - you're taking with one hand and giving with the other. But in a capitalist society like the U.S., it's just business as usual.

1. ^ Under federal law and in most U.S. states, employers can pay tipped workers less than the minimum wage as long as their wages and tips add up to at least the minimum wage. However, many employers get away with not ensuring that tipped workers earn the minimum wage, or with outright stealing tips.
I've seen EA writing (particularly about AI safety) that goes something like: I know X and Y, thought leaders in AI safety; they're exceptionally smart people with opinion A, so even though I personally think opinion B is more defensible, I should update my natural independent opinion in the direction of A, because they're way smarter and more knowledgeable than me. I'm struggling to see how this update strategy makes sense. It seems to have merit when X and Y know or understand things that literally no other expert knows, but aside from that, in all other scenarios that come to mind, it seems neutral at best, and otherwise a worse strategy than totally disregarding the "thought leader status" of X and Y. Am I missing something?

Saturday, 17 February 2024


Quick takes

Signal boosting my twitter poll, which I am very curious to have answered: https://twitter.com/BondeKirk/status/1758884801954582990 Basically the question I'm trying to get at is whether having hands-on experience training LLMs (proxy for technical expertise) makes you more or less likely to take existential risks from AI seriously.
In AI safety, it seems easy to feel alienated and demoralised. The stakes feel vaguely existential, most normal people have no opinion on it, and it never quite feels like "saving a life". Donating to global health or animal welfare feels more direct, if still distant. I imagine a young software developer discovers EA, and then AI safety, hoping to do something meaningful, but the moments after that feel much the same as they would in a normal job. Curious if others feel the same.

Friday, 16 February 2024


Quick takes

I'm a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei, etc.) to be "thought leaders" in the field of AI safety in particular. Their job is to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn't make strong statements against AI growth and development even if they wanted to, because of their jobs and positions. The recent post "Sam Altman's chip ambitions undercut OpenAI's safety strategy" seems correct and important, while also being almost absurdly obvious - the guy is trying to grow his company, and they need more and better chips.

We don't seriously listen to big tobacco CEOs about the dangers of smoking, or oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we seem to take the opinions of AI bosses about safety in even moderately good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies while trying however possible to maintain public goodwill so as to minimise the backlash.

I agree that these CEOs could be considered thought leaders on AI in general and on the future and potential of AI, and their statements about safety and the future are of critical practical importance and should be engaged with seriously. But I don't really see the point of engaging with them as thought leaders in the AI safety discussion; it would make more sense to me to engage with intellectuals and commentators who can fully and transparently share their views without crippling conflicts of interest. I'm interested, though, to hear arguments in favour of taking their thoughts more seriously.
Heads up! I'm planning a Draft Amnesty event (like this one). I think the last one went really well, and I'm pretty excited to run this. The Draft Amnesty event will probably be a week long, around mid-March.

I'll likely post some question threads such as "What posts would you like to see someone write?" (like this one) and "What posts are you thinking of writing?" (like this one), and set up some gather.town co-working/social opportunities for polishing posts and writing up drafts in the build-up.

I'm also brainstorming ways to make Draft Amnesty posts appear as a different genre to Forum users (such as a different font for the title, a different page for draft posts, or a visible "draft amnesty" tag that can be seen from the frontpage list view), and to let users opt out of seeing them. This should ameliorate concerns about the frontpage being full of lower-standard content (though fwiw I think this is unlikely because of the karma system), and also take some more pressure off the posters (I don't want people to not post because they worry their draft isn't polished enough!).

I'll put up a proper announcement soon, with more of a plan, but feel free to use the comments of this quick take to share anything you would be excited to see, as well as ideas, concerns, or questions.
