We're going to do a Hackathon on Mon, 12/5 at Sports Basement Berkeley from 10am to 5pm following EAGxBerkeley.
- Who: anyone! software engineers will be primary contributors of course, but we will offer optional introductory sessions for the curious / aspiring developer. You do not have to have attended EAGxBerkeley to attend the Hackathon.
- Where: Sports Basement Berkeley at 2727 Milvia St.
- Note this is 15 - 20 minutes from the conference by public transit or car. We recommend taking BART.
- When: Mon, 12/5 from 10am - 5pm
- What: Work independently or with collaborators on an EA-aligned project of your choosing
If you would like to submit a potential Hackathon project idea, please leave a comment!
As a Hackathon participant, be on the lookout for related events each day of the conference:
- On Friday evening, we'll meet for a social around dinner time
- On Saturday, we'll have a speed networking event for Software Engineers
- On Sunday, we'll have a Hackathon Q&A and planning session
The scheduled events for the Hackathon will begin Monday at 10am when Sports Basement Berkeley opens to participants.
- 10am — venue opens
- 10:15 — opening talk
- 10:30 — project pitches — people with ideas can share those to the group
- 10:45 — start work
- 12pm — lunch
- 4:55 — wrap up
We will also be offering optional 45-minute learning sessions:
- 10:45 — basics of git (for newcomers to coding)
- 1pm — setting up your development environment (for newcomers)
- 2pm — intro to frontend development (for everyone)
- 3pm — intro to backend development (for everyone)
These talks will be at a separate table so that we are minimally disruptive to people who are still working.
In terms of projects, we hope people will work on things that they find to be interesting and potentially impactful. Keep in mind, there may be the option to continue collaborating virtually, so don't limit the scope of your ambitions to what can be accomplished in a single day.
We hope you'll join us for the EA Hackathon! Please complete this Expression of Interest form and consider signing up for the EA Software Engineering mailing list. And don't forget to apply to EAGxBerkeley if you want to go to that too.
We would like to build a new landing page for https://impactmarkets.io that is really sleek and is consistent with the rest of the marketplace (currently at https://app.impactmarkets.io).
This is a great high impact project that should be easy to knock out in a day!! Thanks for the submission, Dony 🙌
Advocacy tools for effective giving:
OMG I love GWWC!!!
These ideas sound cool; a friend and I are interested in them.
A tentative outline:
AWS infrastructure that can be called via an API and responds with the social media post
Let us know if you have any more ideas on how to make this most impactful.
Count me in as someone interested in joining this project! Something I miss from the old GWWC, before the website redesign, is that different donations had "badges" and it felt like a collectible game to donate to each EA charity at least once. Maybe something like this could be added back.
A bit of donation gamification can go a long way.
:c can't make it
Really cool ideas, Luke! Love to support social giving however possible. Both projects seem quite doable (e.g., spinning something up with Streamlit as a prototype). It's great that we could get something for the data scientists to work on.
Is that dashboard supposed to be public?
Some people seem to write somewhat private stuff (e.g. "I'm depressed"), and the timestamp can be cross-referenced with what name appeared on https://www.givingwhatwecan.org/about-us/members to deanonymize it
Alignment Ecosystem Development maintains a list of projects which volunteer-scale efforts could shift the needle on, and hosts monthly calls where people can pitch or join projects. Maybe someone could drop by one of those calls and see if anything fits the bill for this?
Also relevant: https://aisafetyideas.com/
This is great — I'll hit the next one on Nov. 15. Thanks for calling this group to our attention, plex.
Interactive consensus aggregator for squiggle estimates
If analysts Alice and Bob each write a cost-effectiveness analysis of charity C, then donor Eve ought to be able to input relative trust quantities informing how to weight Alice's estimate against Bob's. In other words, if Eve thinks Alice is twice as trustworthy or competent as Bob, then the MVP would return the squiggle string
mixture(alice, bob, [2, 1])
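Building that string is simple enough to sketch in a few lines of Python (the function name and the analyst-variable/weight mapping below are hypothetical, not part of any existing tool):

```python
def mixture_string(trust):
    """Build a Squiggle mixture() expression from a {analyst_variable: weight}
    mapping of relative trust weights supplied by the donor."""
    names = ", ".join(trust)
    weights = ", ".join(str(w) for w in trust.values())
    return f"mixture({names}, [{weights}])"

# Eve trusts Alice's estimate twice as much as Bob's:
print(mixture_string({"alice": 2, "bob": 1}))  # mixture(alice, bob, [2, 1])
```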
Interactive worldview substituter for squiggle estimates
Building on the above, it would be nice if Alice and Bob had agreed on a set of input variables: background quantities like the state of a Manifold market on some ML benchmark, or a global count of malaria cases. This set of input variables is a worldview, i.e. quantities which upon inspection are themselves squiggle estimates. It would then be nice to be able to trivially substitute these input worldviews, whether you want to see how your cost-effectiveness analysis changes over time, or you think Bob has a more calibrated background worldview but prefer Alice's Fermi estimate of charity C's impact.
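A minimal way to prototype the substitution is to prepend the chosen worldview's variable bindings to the analysis text (the variable names and Squiggle snippets below are made up for illustration):

```python
def with_worldview(worldview, analysis):
    """Prepend a worldview (shared input variables, themselves Squiggle
    estimates) to a Squiggle analysis, so worldviews can be swapped freely."""
    bindings = "\n".join(f"{name} = {expr}" for name, expr in worldview.items())
    return f"{bindings}\n{analysis}"

# Bob's background worldview, as Squiggle expressions:
bob = {"malaria_cases": "230M to 260M", "benchmark_prob": "0.6"}
# Alice's analysis of charity C, written against the agreed input variables:
alice_cea = "impact = malaria_cases * benchmark_prob"
program = with_worldview(bob, alice_cea)
```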
I'd love to see things like this. I plan on attending, mostly to help out others interested in anything around QURI/Squiggle/forecasting/epistemic tech.
Awesome!! Looking forward to having you there, Ozzie.
Any updates on how this went? (I meant to follow up and participate more, but I was not able to do so).
Yes! Lessons Learned here.
Are there any links or notes specific to this particular project, the squiggle consensus CEA builder?
Contribute to Wikiciv.org - A wiki for rebuilding civilization's technology
Ways you can help:
- Write and edit articles
- Research and collect content
- Work on a port of Entitree to make a tech tree visualization
No coding experience needed! Wikiciv has a "What You See Is What You Get" editor; if you can edit a Google Doc, you can edit Wikiciv.
This is an awesome way for nontechnical attendees to contribute — thanks for this, Jehan.
Some approaches to solving alignment go through teaching ML systems about alignment and getting research assistance from them.[1] Training ML systems needs data, but we might not have enough alignment research to sufficiently fine-tune our models, and we might miss out on many concepts which have not been written up. Furthermore, training on the final outputs (AF posts, papers, etc.) might be worse at capturing the thought processes which go into hashing out an idea or poking holes in proposals, which are the skills that would be most useful for a research assistant to have.
It might be significantly beneficial to capture many of the conversations between researchers and use them to expand our dataset of alignment content to train models on. Additionally, some researchers may be fine with having some of their conversations available to the public, in case people want to do a deep dive into their models and research approaches.
The two parts of the system which I'm currently imagining addressing this are:
Ought's Elicit is the prime example.
I hear OBS might be a good tool for this.
Thank you for providing this outline, plex. I hope we get good engagement with this project.
Me too! Also, if this or any of your other projects needs a domain, AED's https://ea.domains/ might have a good match to offer. I'm also happy to host it on
Maybe a little late, but here is an Android app that does recordings; you can contribute directly on the GitHub.
Other potential project ideas that can help with this are:
There's an aggregated list of AI safety research projects available on AI Safety Ideas (forum post), and though it's a bit messy in there at the moment, it should offer quite high-quality leads for a hackathon as well! E.g., Neel Nanda and I will add a bunch of project ideas to the Interpretability Hackathon list during the next couple of days.
Watching.
(Copied from my post on your shortform, relating to Quinn's idea, Squiggle, and the tptool Quinn has worked on, and as discussed with Nicole)
Joint hackathon:
Part 1: Creating a user-facing prototype for Quinn's cost-effectiveness-analysis × Monte Carlo comparison platform
Part 2: Bringing in statistics/quant/domain people to test it and do independent evaluations (in teams or individually)
Assessing part 2 allows us ... compare and test both
Bring in Givewell people and others too?
See some note-taking on this here (If you want access to the Gitbook to add your own content let me know)
Thank you for reposting, David! I've reached out to GiveWell — they did have some suggested projects and promised to comment before the Hackathon.
I couldn't attend the interpretability hackathon and was hoping to get acquainted with LLM interpretability research as a software dev with no experience in interpretability or transformers. So here's a starting point following in the footsteps of this submission (see their writeup here):
Basically I am thinking we can use the hackathon as a collaborative study session to become more familiar with transformers and interpretability, ultimately culminating in replicating the results in the linked submission (it took them 3 days but since we have a starting point, possibly we can replicate their project and grok what they did much quicker).
I'm not wedded to this idea, though. If you think there is a better avenue for using the hackathon to upskill in LLM interpretability and transformers, do share.
Nice — this seems ambitious, I really like this idea.
Maybe you can start a study group in GatherTown to continue this virtually as well. I'm sure you'd get takers from other folks interested in ML research.
New features and improvements to open source eCard generator for EA donations. The goal is to get people to donate to EA causes as gifts. There are a lot of ways the site can be improved and it would be great to have lots of ideas!
Repo: https://github.com/TLYCS/Card_Generator
Current Site: https://main.d33ee03vlquk8r.amplifyapp.com/
This looks so good already, Emma!! Nice work!
A website to crowdsource research for Impact List
Impact List is building up a database of philanthropic donations from wealthy individuals, as a step in ranking the top ~1000 people by positive impact via donations. We're also building a database of info on the effectiveness of various charities.
It would be great if a volunteer could build a website with the following properties:
-It contains pages for each donor, and for each organization that is the target of donations.
-Pages for donors list every donation they've ever made, with the date, target organization, amount, and any evidence that this donation actually happened.
-Pages for target organizations contain some estimate of each component of the ITN framework, as well as evidence for each of these components.
-There is a web form that allows any Internet user to easily submit a new entry into either of these data sources, which can then be reviewed/approved by the operators of Impact List based on the evidence submitted.
I love this idea!! It would be amazing if it could sway the ~1000 individuals listed on the site; however, I suspect the true power is in increasing awareness of and engagement in effective giving. Super cool project.
I would love to see a personal finance tool that is geared towards a public facing (i.e., non-EA) audience that provides 1) useful financial guidance such as lifetime earnings projections and 2) concludes with a call to action to sign up for Try Giving / GWWC pledge.
I think it would be awesome to show people that they can donate a substantial portion of their income and still maintain a desirable standard of living.
Of course, EAs would benefit from this tool too, but my goal for it would be to introduce the idea of GWWC to a general audience.
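As a sketch of the "lifetime earnings projection" half of the tool, here is a toy model under a constant salary-growth assumption (the function name and every number are illustrative placeholders, not financial guidance or anyone's actual design):

```python
def lifetime_earnings(start_salary, growth, years, give_rate=0.10):
    """Project gross earnings, a 10% pledge, and income kept, year by year.
    Assumes a fixed annual growth rate; all inputs are illustrative."""
    total_gross = total_given = 0.0
    salary = start_salary
    for _ in range(years):
        total_gross += salary
        total_given += salary * give_rate
        salary *= 1 + growth
    return total_gross, total_given, total_gross - total_given

gross, given, kept = lifetime_earnings(60_000, 0.03, 40)
print(f"gross ${gross:,.0f}, donated ${given:,.0f}, kept ${kept:,.0f}")
```

The point a real tool would drive home is the last number: even after the pledge, the income kept is large and grows every year.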
The uncertain bibliography
A LaTeX plugin for annotating a .bib file with credences, confidence intervals, squiggle strings, and replication probabilities.
That sounds great! This might tie into my idea (perhaps not original) that citations (author A cites author B) should be coded and footnoted by the nature of the citation (background, supporting, confirming, etc.)
This should help future readers (e.g., 'what am I supposed to make of "Smith, 2018"?') and meta-analysts, and could feed into bibliometric measures like the h-index (rather than just a raw count of citations).
See this thread or this short form
quinn, thanks for submitting your ideas! Let us know if you think of others to add.
Web app implementing giving pledges based on net income (after taxes and benefits)
The Giving What We Can Pledge is 10% of pre-tax income, if giving is tax-deductible, or 10% of post-tax income otherwise. We've scoped a different pledge, in which one gives the amount such that their net income after taxes and benefits falls 10%. You can also think of this as transferring 10% of your consumption to effective charity.
We propose building a web app where people enter their household information and it suggests the amount to give. The app would call the free, open source PolicyEngine API to compute taxes and benefits (I am the Executive Director of PolicyEngine, a nonprofit). I think in a day we can build a simple US prototype that collects a handful of data points (earnings, family size, state) and displays the computation.
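The core computation can be sketched as a bisection search for the donation that drops net income by exactly 10%. Everything below is invented for illustration: the real app would call the PolicyEngine API where this sketch uses a 25% flat-tax stand-in, and the function names are hypothetical.

```python
def pledge_amount(gross, net_income, drop=0.10, tol=1.0):
    """Find the donation d (assumed tax-deductible) such that net income
    after taxes falls by `drop` relative to giving nothing.
    `net_income` stands in for a real tax/benefit calculator."""
    target = (1 - drop) * net_income(gross)
    lo, hi = 0.0, gross
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_income(gross - mid) > target:  # donation too small
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy stand-in calculator: 25% flat tax on taxable income.
toy_net = lambda taxable: 0.75 * taxable
donation = pledge_amount(100_000, toy_net)  # ~ $10,000 with this toy tax
```

With a real tax/benefit model the function is still monotonic in the donation amount, so the same bisection works; only the `net_income` callable changes.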
I've discussed this idea with Luke Freeman, Executive Director of Giving What We Can, who supported experimenting with it.
Thanks for the project submission, Max!
[off the top of my head] Some browser extension for Swapcard that lets you shuffle the list of people. This is premised on the un-checked-by-me assumption that Swapcard has some algorithm for the order in which it displays profiles to you, which might mean some people randomly get 'boosted' relative to others (e.g. if they are listed earlier in the database)
That said, if this is valuable, we can request it from Swapcard themselves
From a tech event we had recently - "The main request that came out of the meetup was a way to match up potential mentors with mentees"
Maybe something using the current EA Forum user search function but with a redesigned interface for connecting.
Before this gets out of hand - I think the request referred to was literally me writing down a random idea to get the ball rolling.
Thinking about this further —
Couldn't this be done via a Google Sheet?
(Of course, people could sign up on both the mentors + mentees tab if they wanted — we all need / can offer help in different areas)
There could be an associated Slack or Discord for people to post their learnings / requests for mentorship / etc.
Happy to have someone (a non-technical person maybe?) set this up during the Hackathon if they feel so inspired.
Definitely sounds like a worthy project — thanks for contributing, David
Hi all!
New-ish SWE dev here. My group and I would love to partake in the Hackathon but are struggling with thinking about ideas for feasible projects that could be completed in 7 hours. Is there a public list of previous projects we could reference? Or a list of projects maybe EAG thinks would be cool to do? Thanks.
We had a hackathon a month ago with some pretty interesting projects that you can build out further: Results link. These were made during 44 hours (average 17 hours spent per participant), but some of the principles seem generalizable.
You're also very welcome to check out aisi.ai or the intro resources for the interpretability hackathon (resources) running this weekend or the previous hackathon (resources).
Nice!! Really looking forward to the debrief from this event, Esben. Thanks for organizing!
Hi there cdenq,
Thanks so much for this question!! afaik, this is one of the first EA Hackathons of this sort, so we don't have a list of historical projects.
I wouldn't worry too much about setting the constraint of what you can accomplish in 5-6 hours (accounting for intro talks, socialization, and lunch here). Ideally, devs will continue to work on their contributions after the Hackathon ends.
If there's interest, I was thinking we might do a regular (e.g., once a month) coworking session on GatherTown to push these projects forward.
Let me know what other questions you have.
How many consumption-doublings does a policy reform generate?
GiveDirectly's unconditional cash transfers have long served as GiveWell's cost-effectiveness baseline. GiveWell's 2022 cost-effectiveness analysis estimates that GiveDirectly doubles the consumption of a person for a year for about $200. [1]
Tax and benefit reforms also affect households' consumption. My nonprofit, PolicyEngine, builds free open-source software that computes the impact of custom tax and benefit reforms on outcomes like poverty and the budget. This slide from my EAGxBerkeley lightning talk shows how we will display poverty impacts of a custom tax reform in our upcoming redesign.
My proposal is to add a chart to our app showing the impact of a policy reform on sum(ln(net income)), which could translate to total consumption-doublings for comparison with GiveDirectly. This could enable more data-driven EA-style advocacy for poverty-reducing policies. Currently we have models in the US and the UK, but we intend to expand to low- and middle-income countries in the future, and this visualization would translate to those new country models, where reforms might be more cost-effective. Here's our GitHub. Our stack is Python/Flask/React.
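The chart's underlying quantity is cheap to compute; a minimal sketch, with made-up household incomes standing in for model output:

```python
import math

def consumption_doublings(baseline, reform):
    """Change in sum(ln(net income)) across households, expressed as
    equivalent consumption-doublings (one doubling = ln(2) log-units)."""
    delta = sum(math.log(r) for r in reform) - sum(math.log(b) for b in baseline)
    return delta / math.log(2)

# Illustrative per-household net incomes, baseline vs. under the reform:
baseline = [20_000, 35_000, 80_000]
reform   = [24_000, 36_000, 79_000]
print(round(consumption_doublings(baseline, reform), 3))  # 0.286
```

Dividing a reform's net cost by this number would give a cost per consumption-doubling directly comparable to the ~$200 GiveDirectly figure.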
Cell B87 shows that each philanthropic dollar spent generates 0.0034 units of value, which in this case is a one-unit increase of ln(consumption). That is, it costs 1/0.0034 = $294 to increase a person's ln(consumption) by one unit for a year. Doubling consumption for a year therefore costs $294 * ln(2) = $204.
Working on a forum post about animals and longtermism. I have an outline in a Google Doc and would love to have collaborators, or just people to give feedback on the content.
Almost all projects started at hackathons die young, so if you want impact I think you should contribute to existing open-source projects. This is also a great way to develop software engineering skills, which are a key bottleneck for AI Safety research among other fields, and I think a day modelled on PyCon sprints would be a great hackathon experience.
I wrote up a list of specific issues here.