Achim

Talking is a great idea in general, but some opinions in this survey seem to suggest that there are barriers to talking openly?

I think most democratic systems don't work that way - it's not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers are then subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just means that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.

While I am also worried by Will MacAskill's view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that "this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)".

In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn't exist in Switzerland. Even if it was only one of the arguments, and not the most influential one, I think this speaks volumes about both the (current) debate culture and the limits of how hopeful we should be that relevantly similar EA-inspired policies will soon see widespread implementation.


Is there any empirical research on the motivation of voters (and non-voters) in this referendum? The swissinfo article you mention does not directly use this argument; it just cites something somewhat similar:

Interior Minister Alain Berset, responsible for the government’s stance on the initiative, said on Sunday that citizens had “judged that the dignity of animals is respected in our country, and that their well-being is sufficiently protected by current legislation”.

and:

Opponents of the ban, including government and a majority of parliament, had warned that the change would have led to higher prices, reduced consumer choice, and floods of foreign products arriving to fill the gap – despite the initiative stipulating that imports would also have to conform to the new standards.

Over the past months, a majority of farmers, led by the Farmers’ Federation, fought vehemently against what they saw as an unfair attack on them as a means to reduce meat consumption in society more broadly.

"If organizations have bad aims, should we seek to worsen their decision-making?"

That depends on the concrete case you have in mind. Consider the case of supplying your enemy with wrong but seemingly correct information during a war. This is a case where you actively try to worsen their decision-making. But even in a war there may be some information you want the enemy to have (like: where is a hospital that should not be targeted). In general, you do not just want to "worsen" an opponent's decision-making, but to influence it in a direction that is favorable from your own point of view.

Conversely, if a decision-maker is only somewhat biased from your point of view and has to make a decision based on uncertain information, you may want her to understand the information precisely, because misinterpretation could otherwise push the decision in either direction: it may be good if misinterpreting the situation makes her choose in your favor, but it is often much worse if misinterpretation leads to a deviation in the other direction.

Yes, I think so! It seems like saying: "all the theoretical arguments for longtermism are extremely important because they imply things not implied by other theories", but when asked for the concrete implications, the answer is: donating to something that non-longtermists would like because it helps people today, while the future effects are probably vague.

The following quotes from the current Will MacAskill episode of the 80,000 Hours podcast seem like a weird combination to me:

  • "I really don’t know the point at which the arguments for longtermism just stop working because we’ve just used up all of the best targeted opportunities for making the long term go well, such that there’s just no difference between a longtermist argument and just an argument that’s about building a flourishing society in general. Maybe you hit that at 50%, maybe it’s 10%, maybe it’s even 1%. I don’t really know. But given what the world currently prioritises, should we care more about our grandkids and their grandkids and how the course of the next few millennia and millions of years go? Yes. And that’s the claim."
  • "One important thing is to distinguish between is something a good thing to do, and is it the best thing to do? The core idea of effective altruism is we want to focus on the very best thing. And I entirely buy that even if you’re just concerned about what happens over the next century, reducing the risks of extinction and other sorts of catastrophes, like reducing the risk of misaligned AI takeover, are just extremely good things to do. And even concerned about the next century, society should be investing a lot more in making sure they don’t happen.  Effective altruism is about doing the best we can. And certainly on its face, it would seem extremely suspicious and surprising if the best thing we could do for the very, very long term is also the very best thing we can do for the very short term."
  • "So my last donation was to the Lead Elimination Exposure Project ..., a new organisation incubated within the effective altruism community, which tries to eliminate lead paint and ultimately lead exposure from all sorts of sources. Lead exposures are really bad. It’s really bad from a health perspective, also lowers people’s IQ, lowers their general cognitive functioning. Some evidence that it kind of increases violence and social dysfunction. ... So it seems like they’re really making traction. This is an example of very broad longtermist action, where I think this sort of intervention is maybe kind of different from certain other sorts of global health and development programmes. If I imagine a world where people are a bit smarter, they don’t have mild brain damage from lead exposure that has lowered their IQ and made them more impulsive, more violent, it just broadly seems like a much better society. That was the first argument. And then the second was just I think it’s really good for EAs to be doing things in the world — making it better, achieving concrete wins. ... And then the final thing is just that they actually seem to me to be in real need of money and further funding, in a way that lots of the maybe more core, narrowly targeted longtermist work is not currently. So my sense is that a lot of the best giving opportunities are more in the stuff that’s a bit broader, because that really hasn’t been as much of a focus of grantmakers."

Time-boxing and to-do lists

Tim Harford is not convinced that it is a good idea to plan activities in advance and allocate them to blocks of calendar time, so-called "timeboxing". Instead, you should prioritize everything and, so as not to let work expand beyond all limits, set deadlines. He refers to a study where students were supposed to plan their time daily instead of fixing rough, monthly goals. The daily "plans backfired disastrously: day after day, the daily planners would fall short of their intentions and soon became demotivated, spending less time on studying and falling behind over the course of the academic year. The more amorphous monthly planners proved far more successful, presumably because they had more flexibility to adapt to events, as well as wasting less time fiddling around with their calendars. A plan that is too specific soon lies in tatters." Harford himself is convinced of flexibility: "It is clear that some people have made timeboxing work for them. ... For me, however, my To Do list is long, and my diary is as clear as I can keep it."

It is fantastic if you can work in such a goal-oriented way, without requiring inner nudges - but that is exactly what timeboxing can provide. Allocating activities to (more or less) fixed blocks of time creates, or at least that is probably the hope here, an inner positive attitude towards the planned work.

What about the time-wasting "fiddling around with their calendars" that Harford mentions? Whoever is able to just do whatever is currently important will, of course, not need that. But it is often difficult to say exactly what is really important in a given moment. The inclination to procrastinate on unpleasant tasks therefore combines with the inclination to play down those tasks' importance in the moment. The solution may be to accept in advance that some things have to be done. Some people can accept that for whatever is on their to-do lists. Others will have to accept that whatever is planned for (possibly every) Wednesday at 14:00 is important.

"Timeboxing" as planning your whole life in advance seems indeed unrealistic  and creates the lack of flexibility that Harford mentions. However, acknowledging that you have to do certain things periodically is already part of e.g. David Allen's Getting Things Done that Harford refers to, because GTD strictly includes Weekly Reviews. If you have to cope with a long to-do list in your weekly review, that is of course work-intensive, and it will be demotivating if there is a lot from the previous work, month etc that has not yet been completed (or not even started). 

The pragmatic solution - for people who do not feel able to just execute a to-do list - is probably to fix certain time blocks in advance, at least for certain high-priority activities: Weekly Review, weekend family stuff, gym, etc., based on your experience. This prevents planning from becoming too detailed, while still giving your life a structure and your mind a feeling of commitment. To avoid the large amount of work that planning requires, it is useful to be able to reuse many of the same time boxes every week. As a side effect, this may create both ritual and conscious leisure time.

Also note that timeboxing is basically unavoidable if you want to coordinate with other people. Every appointment is timeboxing, and only if you are an extremely important person can you completely drop that. Harford's example is Arnold Schwarzenegger:

"Mr Schwarzenegger reportedly kept his diary clear as a film star, and even tried to do so when he was governor of California. “Appointments are always a no-no. Planning ahead is a no-no,” he once said. Visitors had to treat the Governator like a walk-in restaurant — show up and hope for the best." 

This seems not only impolite but also unrealistic (when you are acting in a movie, you have to show up at certain times), but it may become more feasible the more powerful you are. However, in your personal life, if you want to go running with a friend once a week, coordinated timeboxing strongly reduces coordination costs and also creates commitment (and, again, ritual).

So, rules of thumb: Planning something like 60-80% of your time while keeping the rest as a flexibility buffer seems sensible. If you can do the same activity at the same time each week, that's often a good idea. If you feel you need to plan more or less, do so. To avoid the bad feeling of unaccomplished to-do list items, regularly delete those you won't do anyway (I think that's done in Complice) or put them on a someday/maybe list.

While 5% is alarming, you should notice that abukeki did not update much because of the crisis (if I understand it correctly), and so if your prior is lower, it should possibly stay lower.

As this is (probably) central to coordination: is there something like a clear decision-making structure for deciding what "the community" actually wants (i.e., what "pursuing EA goals" concretely means in a given situation where there are trade-offs)? Is there an overview/explanation of this structure?
