
You go to an EA event. Everyone else seems to have two science degrees, runs a charity, has a healthy hobby, and regularly calls their mum. Meanwhile, you're proud that you found one clean t-shirt to wear that day. 🙃 Imposter syndrome activates. Sound familiar?

Publicly, we often share our successes, but rarely the mistakes that happened along the way. This can create a warped picture of reality, where everything goes smoothly.

How can we counter it? By spreading a meme that failures are a normal part of life.

What is a Fuck Up Night?

A Fuck Up Night is a type of event where people openly share their stories of failure. To be clear, it's not about humblebrags, but about projects that didn't work out, mistakes that truly felt painful, and times when you put in a lot of effort and the results were far from what you aimed for.

We have three confirmed speakers who have already bravely committed to sharing their stories of failure:

  • Sara Recktenwald, co-founder of Impact Ops, formerly at EV UK/CEA
  • Nadia Montazeri, a biosecurity researcher, formerly at the European Commission
  • Marta Krzeminska, your host, ex-Head of Marketing at Mind Ease

After hearing their stories, we'll open the floor to hear from... you! Then we'll progress to informal chatting.


What you can expect from the evening:

👌 A supportive, non-judgemental atmosphere
👌 Honest stories from other accomplished EAs (like you!)
👌 Best snacks in North Berlin

Join the event to help create a more balanced picture of the EA community, connect with others, and spread the meme: failures are ok!

 

Schedule of the evening

  • 6.30 pm. Arrivals. Doors open, come to socialise!
  • 7 pm. Event start: sharing and celebrating failures.
  • 8.30 pm. Open socialising & snacks.

Please try to arrive on time so as not to disturb the speakers in the middle of their story-sharing.
 

FAQ

  • I'm not sure my story is worthy of sharing, it won't teach anyone anything.

    The goal of sharing is not to impress anyone with the magnitude of your failure or for the failure to be a lesson. If it felt like a failure to you, it's a valuable story for the crowd to hear.
     
  • I'd love to share my story, where do I sign up?

    That's great to hear! Feel free to comment on this post — it will give me a better idea of the expected number of sharers. 💡 And... consider bringing a prop that could symbolise the failure you'll be speaking about. E.g. the mug you were holding when you got rejected at the last application stage for an Important Job at an Impactful Organisation.
     
  • I'm not sure if I want to share anything, can I just come without any commitments?

    Of course! You're free to join without committing to share anything. And, if you change your mind during the event, the non-judgemental floor will be open for you.

Comments

To everyone who joined the first FUN (F*ck Up Night) session 💕 Thank you for sharing openly, listening actively, and co-creating a welcoming atmosphere!

I have two things for you 👉 one ask and one gift. 

🙋‍♀️ Ask. Did you enjoy the session? Did you think it was a disaster? This anonymous feedback form is your chance to let me know! It takes just 2 minutes and will really help me improve future events of this kind!

🎁 A gift of bonus links. Some resources that caught my attention before the event:

Cool that you're doing this! I could share two failures, one in my career plans and one in job applications. I could do that in 1-4 minutes, depending on how many other people want to share and how much time we have. Looking forward! :)
