All of beth's Comments + Replies

Fighting human rights violations around the globe.

Answer by beth · Sep 07, 2019 · 13

I believe your assessment is correct, and I fear that EA hasn't done due diligence on AI Safety, especially seeing how much effort and money is being spent on it.

I think there is a severe lack of writing on the side of "AI Safety is ineffective". A lot of basic arguments haven't been written down, including some quite low-hanging fruit.

4
Anthony DiGiovanni
5y
While I disagree with his conclusion and support FRI's approach to reducing AI s-risks, Magnus Vinding's essay "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence" is one of the most thoughtful EA analyses against prioritizing AI safety I'm aware of. I'd say it fits into the "Type A and meets OP's criterion" category.

As per my initial comment, I'd compare it to pre-WWII Netherlands banning government registration of religion. It could have saved tens of thousands of people from deportation and murder.

4
Larks
5y
It seems like a big distinction between the two lies in how quickly they could be rolled out. A pre-WWII database of religion would have taken a long time to create, so pre-emptively not creating one significantly inhibited the Germans, while the US already had the census data so could intern the Japanese. But it doesn't seem likely that not using facial recognition now would make it significantly harder to use later.
6
kbog
5y
OK, sounds like the biggest issue is not the recognition algorithm itself (can be replicated or bought quickly) but the acquisition of databases of people's identities (takes time and maybe consent earlier on). They can definitely come together, but otherwise, consider the possibilities (a) a city only uses face recognition for narrow cases like comparing video footage to a known suspect while not being able to do face-rec for the general population, and (b) a city has profiles and the ability to identify all its citizens for some other purpose but just doesn't have the recognition algorithms (yet).
5
kbog
5y
Okay, very well then. But if a polity wanted to do something really bad like ethnic cleansing, they would just allow facial recognition again, and get it easily from elsewhere. If a polity is liberal and free enough to keep facial recognition banned then they will not tolerate ethnic cleansing in the first place. It's like the Weimar Republic passing a law forbidding the use of Jewish Star armbands. Could provide a bit of beneficial inertia and norms, but not much besides that.
For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease with which facial recognition can identify people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, where the real death and suffering is not mitigated by the presence of humans in the loop more often than it is caused or exacerbated by human passions, gri
... (read more)
4
kbog
5y
But who is talking about banning facial recognition itself? It is already too widespread and easy to replicate.

I don't have any specific instances in mind.

Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don't think that the two concerns are separable. The actual mechanisms and results might largely overlap.

Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and wha... (read more)

Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and today we still regularly see people complain about voting behaviour. It would be worth knowing whether the concerns from a year ago have turned out to be correct.

Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by number of upvotes and downvotes? The current system gives very little insight to authors about how much people like their... (read more)

Are there particular instances of complaints related to voting behavior that you can recall?

I remember seeing a couple of cases over the last ~8 months where users were concerned about low-information downvotes (people downvoting without explaining what they didn't like). I don't remember seeing any instances of concern around other aspects of the current system (for example, complaints about high-karma users dominating the perception of posts by strong-voting too frequently). However, I could easily be forgetting or missing comments along those... (read more)

Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.

I was talking with a colleague the other day about an AI organization that claims:
1. AGI is probably coming in the next 20 years.
2. Many of the reasons we have for believing this are secret.
3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard e... (read more)

This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.

Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms

80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)

Presumably you are saying something like: "80% of the human labor w... (read more)

This is mostly a problem with an example you use. I'm not sure whether it points to an underlying issue with your premise:

You link to the exponential growth of transistor density. But that growth is really restricted to just that: transistor density. Growing your number of transistors doesn't necessarily grow your capability to compute things you care about, both from a theoretical perspective (potential fundamental limits in the theory of computation) and from a practical perspective (our general inability to write code that makes use of much ci... (read more)

These are some issues that actively frustrate me to the point of driving me away from this site.

  • Loading times for most pages are unbearably slow. So are most animations (like the menu from clicking your username top right).
  • Many features break badly when Javascript is turned off.
  • Text field for bio is super small and cannot be rescaled.
  • Super upvotes have their use but the super downvote just encourages harsh voting behaviour.
  • The contrast on the collapse comment button is minimal, same for a number of other places.
  • Basic features take much effort to navigate t
... (read more)
5
Ben Pace
5y
Thx. My sense is you might get more of the experience you want using ea.greaterwrong.com, which doesn't require javascript, is focused on speed, and generally has a lot of custom options. The site has all the same content.

Sure it is, but I know a lot more about myself than I do about other people. I could make a good guess about the impact on myself or a worse guess about the impact on others. It's a bias/variance trade-off of sorts.

I'd say the two are valuable in different ways, not that one is necessarily better than the other.

8
kbog
5y
If you understand economic and political history well enough to know what's really gotten you where you are today, then you already have the tools to make those judgments about a much larger class of people. Actually, I think that if you were to make the arguments for exactly how D-Day or women's rights, for instance, helped you, then you would be relying on a broader generalization about how they helped large classes of people.

Any technology comes with its own rights struggle: universal access to super-longevity, the issue of allowing birth vs exploding overpopulation if everyone were to live many times longer, em rights, just to name a few. New tech will hardly have any positive effect if these social issues are resolved the wrong way.

3
Gavin
5y
Fair. But without tech there would be much less to fight for. So it's multiplicative.

Can you make a case as to why the two have enough notability separately to deserve their own separate Wikipedia pages?

The original book was well received and got significant amounts of attention (e.g. an excerpt ran in the NYT, Peter was on the Colbert Report to talk about it, etc.). It was also highly influential, and has contributed to the way a lot of EAs (including Cari Tuna) think about giving. I’m not sure how many languages it’s been translated into, but it’s a pretty good number.

The organization has also received attention from a variety of major media outlets and has moved a considerable amount of money to effective charities (~$5.25 million in 2018 and expected

... (read more)
9
Jemma
5y
I feel like it would be more appropriate for the organisation to have its own page, while information about the book could be divided as appropriate between that page, and those of effective altruism and Peter Singer.

Regarding 1), if I were to guess which events of the past 100 years made the most positive impact on my life today, I'd say those are the defeat of the Nazis, the long peace, trans rights and women's rights. Each of those carries a major socio-political dimension, and the last two arguably didn't require any technological progress.

I very much think that socio-political reform and institutional change are more important for positive long-term change than technology. Would you say that my view is not empirically grounded?

10
kbog
5y

It's better to look at impacts on the broad human population rather than just one person.

3
Kirsten
5y
I'm not sure it's possible for me to distinguish between tech and social change. How can I talk about women's rights without talking about birth control (or even just tampons!)?
4
Gavin
5y
Good call. I'd add organised labour if I was doing a personal accounting. We could probably have had trans rights without Burou's surgeries and HRT but they surely had some impact, bringing it forward(?) No, I don't have a strong opinion either way. I suspect they're 'wickedly' entangled. Just pushing back against the assumption that historical views, or policy views, can be assumed to be unempirical. Is your claim (that soc > tech) retrospective only? I can think of plenty of speculated technologies that swamp all past social effects (e.g. super-longevity, brain emulation, suffering abolitionism) and perhaps all future social effects.
it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology.

Can you expand a bit on this statement? I don't see how you can accuse only other ideologies of being full of groupthink and having the right politics, when most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted. When I personally try to advocate against th... (read more)

I don't see how you can accuse only other ideologies of being full of groupthink and having the right politics, when most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted.

This post of yours is at +28. The most upvoted comment is a request to see more stuff from you. If EA was an ideology, I would expect to see your post at a 0 or negative score.

There's no shortage of subreddits where stuff that goes against community beliefs rarely scores above 0. I would guess most su... (read more)

4
Gordon Seidoh Worley
5y
Sure, this is the ideology part that springs up and people end up engaging with. Thinking of EA as a question can help us hew to a less political, less assumption-laden approach, but this can't stop people entirely from forming an ideology anyway and hewing to that instead, producing the types of behaviors you see (and that I'm similarly concerned about, as I've noticed and complained about similar voting patterns as well). The point of my comment was mostly to save the aspiration and motivation for thinking of EA as a question rather than ideology, as I think if we stop thinking of it as a question it will become nothing more than an ideology and much of what I love about EA today would then be lost.

It is most apparent in this piece of the review:

He also points out that Tanzanian natives using their traditional farming practices were more productive than European colonists using scientific farming. I’ve had to listen to so many people talk about how “we must respect native people’s different ways of knowing” and “native agriculturalists have a profound respect for the earth that goes beyond logocentric Western ideals” and nobody had ever bothered to tell me before that they actually produced more crops per acre, at least some of the time. That w
... (read more)

For a different take on the consequences of being "rational", I would highly recommend James C. Scott's book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. SSC's summary of the book is pretty good, but when he gives his own opinion on it he seems to have missed the point of the book entirely.

4
[anonymous]
5y
What do you think is the point of the book that SSC missed?

Thank you for your response.

Yes, that is what I meant. If you could convince me that AGI Safety were solvable with increased funding, and only solvable with increased funding, that would go a long way towards convincing me that it is an effective cause.

In response to your question of giving up: if AGI were a long way off from being built, then helping others now would still be a useful thing to do, no matter which of the scenarios you describe were to happen. Sure, extinction would be bad, but at least from some person-affecting viewpoints I'd say extinction is not worse than existing animal agriculture.

Let me try to rephrase this part, as I consider it to be the main part of my argument and it doesn't look like I managed to convey what I intended to:

AI Safety would be a worthy cause if a superintelligence were powerful and dangerous enough to be an issue but not so powerful and dangerous as to be uncontrollable.

The most popular cause evaluation framework within EA seems to be Importance/Neglectedness/Tractability. AI Safety enthusiasts tell a convincing story on importance and neglectedness being good and make an effort at arguing that tractability ... (read more)

2
HunterJay
5y
Ah, thanks for rephrasing that. To make sure I’ve got this right - there’s a window between something being ‘easy to solve’ and ‘impossible to solve’ that a cause has to exist in to be worth funding. If it were ‘easy to solve’ it would be solved in the natural course of things, but if it were ‘impossible to solve’ there’s no point working on it. When I argue that AGI safety won’t be solved in the normal course of AGI research, that is an argument that pushes it towards the ‘impossible’ side of the tractability scale. We agree up to this point, I think.

If I’ve got that right, then if I could show that it would be possible to solve AGI safety with increased funding, you would agree that it’s a worthy cause area? I suppose we should go through all the literature and judge for ourselves if progress is being made in the field. That might be a bit of a task to do here, though.

For the sake of argument, let’s say technical alignment is a totally intractable problem, what then? Give up and let extinction happen? If the problem does turn out to be impossible to solve, then no other cause area matters either because everybody is dead. If the problem is solvable, and we build a superintelligence, then still no other cause area matters because a superintelligence would be able to solve those problems.

This is kind of why I expected your argument to be about whether a superintelligence will be built, and when. Or about why you think that safety is a more trivial problem than I do. If you’re arguing the other way -- that safety is an impossible problem -- then wouldn’t you instead argue for stopping it being built in the first place?

I don’t know how tractable technical alignment will turn out to be. There has been some progress, but my main takeaway has been “We’ve discovered X, Y, and Z won’t work.”. If there is still no solution as we get closer to AGI being developed, then at least we’ll be able to point to that failure to try and slow down dangerous projects. Maybe the
4
Linda Linsefors
5y
Thanks for pointing this out :) Should be fixed now

Thank you for this nice summary of the argument in favour of AI Safety as a cause. I am not convinced, but I appreciate your write-up. As you asked for counterarguments, I'll try to describe some of my gripes with the AI Safety field. Some have to do with how there seems to be little awareness of results in adjacent fields, making me doubt if any of it would stand up to scrutiny from people more knowledgeable in those areas. There are also a number of issues I have with the argument itself.

Where does it end? Well, eventually, at the theoretical limi
... (read more)
5
HunterJay
5y
Sorry for the delay on this reply. It’s been a very busy week. Okay, so, to be clear -- I am making the argument that superintelligence safety is an important area that is underfunded today, and you are arguing that extinction caused by superintelligence is so unlikely that it shouldn’t be a concern. Is that accurate? With that in mind, I’ll go through your points here one by one, and then attempt to address some of the arguments in your blog posts (though the first post was unavailable!).

I agree with you here. My reason for bringing this up in the main post was to show that superintelligence is possible under today’s understanding of physics. Raw computation is not intelligent by itself, we agree, but rather one requirement for it. I was just pointing out the computation that could be done in a small amount of matter is much larger than the computation that is done in the brain. (And that the brain’s computation is in a pattern that we call general intelligence.)

I didn’t mention a lot of good research relevant to safety, and progress is being made in many independent directions for sure. I do agree, I would also like to see more of a crossover, though I really don’t know how much the two areas are already working off the other’s progress. I’d be surprised if it were zero. Regardless, if it were zero, it would show poor communication, rather than say anything about the concerns being wrong.

I mean, there’s no rule that a superintelligence has to misunderstand you. And there’s no certainty instrumental convergence is correct. (I wouldn’t risk my life on either statement!) It’s just that we think being smarter would help achieve most goals, so we probably should expect a superintelligence to try and make itself smarter. The other part is we just don’t know how to guarantee that a superintelligence will do what we mean. (If you do know how to do this, that would be a huge relief). Even in your example of trying to get a superintelligence just to make itself smarter
2
HunterJay
5y
Thanks for your response! I just wanted to let you know I'm taking the time to read your links and write out a well thought out reply, which might take another evening or two.
9
Tetraspace
5y
(Under the background assumptions already being made in the scenario where you can "ask things" to "the AI":) If you try to tell the AI to be smart, but fail and instead give it some other goal (let's call it being smart'), then in the process of becoming smart' it will also try to become smart, because no matter what smart' actually specifies, becoming smart will still be helpful for that. But if you want it to be good and mistakenly tell it to be good', it's unlikely that being good will be helpful for being good'.
ahead of their time, in the sense that if they hadn't been made by their particular discoverer, they wouldn't have been found for a long time afterwards?

This definition is surprisingly weak, and in fact includes some scientific results that were way past their time. One striking example is Morley's trisector theorem, an elegant fact of plane Euclidean geometry that had been overlooked for 2000 years. If not for Morley, this fact might have remained unknown for millennia longer.
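For readers who haven't seen it, here is the standard statement of the theorem (my paraphrase, added for reference; it is not part of the original comment):

```latex
% Morley's trisector theorem (standard statement, paraphrased for reference):
\textbf{Theorem (Morley, 1899).} In any triangle, the three points of
intersection of adjacent angle trisectors form an equilateral triangle.
```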

1. The mechanics of cryptographic attack and defense are more complicated than you might imagine. This is because (a) there is a huge difference between the attack capabilities of nations versus those of other malign actors. Even if the NSA, with its highly skilled staff and big budget, is able to crack your everyday TLS traffic, that doesn't mean that your bank transactions aren't safe against petty internet criminals. And (b) state secrets typically need to be safe against computers of 20+ years in the future, as you don't want enemy states to... (read more)

I remember EA-aligned vegan Youtuber Unnatural Vegan making a video about this argument last week in response to a recent Vox article. She argues that the meat industry is very elastic, but I don't think she cites any specific sources. Since she normally does cite her sources, I suspect those numbers are hard to come by.

3b justifies 3a, as does the fact that I have a much easier time paying attention to a live talk. With a video, there is too much temptation to play it at 1.5x speed and settle for an approximate understanding. Though I guess watching the video together with other people also helps.

As for 3b, in my experience asking questions adds a lot of value, both for yourself and for other audience members. The fact that you have a question is a strong indication that the question is good and that other people are wondering the same thing.

I like your list. Here is my conference advice, contradicting some of yours, based mostly on my experience with academic conferences:

1. Focus on making friends. Of course it would be good to have productive discussions and make useful connections, but it is most important to know some friendly faces and feel comfortable. For me it works best to talk about unrelated things like hobbies, not about work or EA or anything like that.

2. Listening to talks is exhausting, so don't force yourself to attend too many of them. It is fine to pick just the 2-3 most... (read more)

2
Risto Uuk
5y
Can you expand on 3a and 3b? I guess 3b justifies 3a, but is that all? Watching and discussing a video with your local group appears to me to be more valuable than asking one question at a talk, but I may be missing some important benefits that you are aware of. I would also add that these are not mutually exclusive. I have heard that some people struggle to set aside time to watch talks on their own; that is also something to consider.

The issue is that FLOPS cannot accurately represent computing power across different computing architectures, in particular between single CPUs versus computing clusters. As an example, let's compare 1 computer of 100 MFLOPS with a cluster of 1000 computers of 1 MFLOPS each. The latter option has 10 times as many FLOPS, but there is a wide variety of computational problems in which the former will always be much faster. This means that FLOPS don't meaningfully tell you which option is better; it will always depend on how well the problem you want... (read more)
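To make the single-machine-versus-cluster point concrete, here is a minimal sketch in the spirit of Amdahl's law. It is my own illustration, assuming an arbitrary 1 GFLOP workload of which half is strictly sequential; none of these numbers come from the original comment.

```python
# A minimal sketch (illustrative assumptions, not from the original comment) of
# why raw FLOPS can mislead: wall-clock time for a workload with a strictly
# sequential part, on the two hypothetical machines from the example above.

def wall_clock_seconds(total_flop, sequential_fraction, flops_per_node, num_nodes):
    """Sequential work can only run on one node; the rest parallelises perfectly."""
    sequential_time = total_flop * sequential_fraction / flops_per_node
    parallel_time = total_flop * (1 - sequential_fraction) / (flops_per_node * num_nodes)
    return sequential_time + parallel_time

TOTAL_FLOP = 1e9      # 1 GFLOP of work (arbitrary, for illustration)
SEQ_FRACTION = 0.5    # half the work cannot be parallelised (arbitrary)

single = wall_clock_seconds(TOTAL_FLOP, SEQ_FRACTION, flops_per_node=100e6, num_nodes=1)
cluster = wall_clock_seconds(TOTAL_FLOP, SEQ_FRACTION, flops_per_node=1e6, num_nodes=1000)

print(f"1 x 100 MFLOPS machine: {single:.1f} s")    # 10.0 s
print(f"1000 x 1 MFLOPS cluster: {cluster:.1f} s")  # 500.5 s
```

Under those assumptions the cluster has 10 times the FLOPS yet is roughly 50 times slower, which is the sense in which FLOPS alone don't tell you which option is better.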

1
johncrox
5y
I remember looking into communication speed, but unfortunately I can't find the sources I found last time! As I recall, when I checked the communication figures weren't meaningfully different from processing speed figures.

Edit: found it! AI Impacts on TEPS (traversed edges per second): https://aiimpacts.org/brain-performance-in-teps/ Yeah, basically computers are closer in communication speed to a human brain than they are in processing speed. Which makes intuitive sense - they can transfer information at the speed of light, while brains are stuck sending chemical signals in many (all?) cases.

2nd edit: On your earlier point about training time vs. total engineering time..."Most honest" isn't really the issue. It's what you care about - training time illustrates that human-level performance can be quickly surpassed by an AI system's capabilities once it's built. Then the AI will keep improving, leaving us in the dust (although the applicability of current algorithms to more complex tasks is unclear). Total engineering time would show that these are massive projects which take time to develop...which is also true.

I don't think that 11% figure is correct. It depends on how long you would stay at the company if you got the job, and on how long you would be unemployed if the offer were rescinded.
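To spell out that dependence, here is a rough expected-value sketch; every symbol below is an illustrative assumption I am introducing, not a figure from the thread:

```latex
% Illustrative only: p = probability the offer is rescinded, u = months
% unemployed in that case, m = monthly salary, T = months you would have
% stayed at the company, g = monthly gain from the job if the offer stands.
\mathbb{E}[\text{cost}] \approx p \cdot u \cdot m,
\qquad
\mathbb{E}[\text{benefit}] \approx (1 - p) \cdot g \cdot T
```

Any single percentage figure implicitly fixes u and T, which is why it can't be right in general.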

2
Denkenberger
5y
Good point!

Without commenting on your wider message, I want to pick on two specific factual claims that you are making.

AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.

Training time of the final program is a deeply misleading metric, as these programs have been through endless reruns and tests to get the setup right. I think it is most honest to count total engineering time.

I know people are wary of Kurzweil, but he does seem to be on fairly solid ground here.

Extrapolating FLO... (read more)

1
Girish_Sastry
5y
I don't think I quite follow your criticism of FLOP/s; can you say more about why you think it's not a useful unit? It seems like you're saying that a linear extrapolation of FLOP/s isn't accurate to estimate the compute requirements of larger models. (I know there are a variety of criticisms that can be made, but I'm interested in better understanding your point above)

The EA forum doesn't seem like an obvious best choice. Just because it is related to EA does not make it effective, especially considering the existence of discussion software like Reddit, Discourse, and phpBB.

I'd say it mostly depends on what kind of skills and career capital you are aiming for. There are a number of important (scientific) software packages with either zero or one maintainers, which could be useful to work on either upstream or downstream.

Personally, I am presently just doing (easy) fixes for bugs that I run into myself. But I a... (read more)

I used to believe pretty much exactly the argument you're describing, so I don't think I will change my mind by discussing this with you in detail.

On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.) So, I don't think I will change your mind by discussing this with you in detail.

I don't feel motivated to go back and forth on this thread, because I t... (read more)

18
kbog
5y
On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.)

Well, OK. But in my last sentence, I wasn't talking about the use of information terminology to refer to probabilities. I'm saying I don't think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, b... (read more)

Thank you for your response and helpful feedback.

I'm not making any predictions about future cars in the language section. "Self-driving cars" and "pre-driven cars" are the exact same things. I think I'm grasping at a point closer to Clarke's third law, which also doesn't give any obvious falsifiable predictions. My only prediction is that thinking about "self-driving cars" leads to more wrong predictions than thinking about "pre-driven cars".

I changed the sentence you mention to "If you want... (read more)

3
John_Maxwell
5y
That is clearer, thanks! Well, it's already possible to write code that exhibits some of the failure modes AI pessimists are worried about. If discussions about AI safety switched from trading sentences to trading toy AI programs, which operate on gridworlds and such, I suspect the clarity of discourse would improve. Cool, let me know!

My troubles with this method are two-fold.

1. SHA256 is a hashing algorithm. Its security is well-vetted for certain kinds of applications and certain kinds of attacks, but "randomly distribute the first 10 hex-digits" is not one of those applications. The post does not include so much as a graph of the distribution of what the past drawing results would have been with this method, so CEA hasn't really justified why the result would be uniformly distributed.

2. The least-significant digits in the IRIS data are probably fungible by adversaries.... (read more)

2
SamDeere
5y
Re 1, this is less of a worry to me. You're right that this isn't something that SHA256 has been specifically vetted for, but my understanding is that the SHA-2 family of algorithms should have uniformly-distributed outputs. In fact, the NIST beacon values are all just SHA-512 hashes (of a random seed plus the previous beacon's value and some other info), so this method vs the NIST method shouldn't have different properties (although, as you note, we didn't do a specific analysis of this particular set of inputs — noted, and mea culpa).

However, the point re 2 is definitely a fair concern, and I think that this is the biggest defeater here. As such, (and given the NIST Beacon is back online) we're reverting to the original NIST method. Thanks for raising the concerns.

ETA: On further reflection, you're right that it's problematic knowing whether the first 10 hex digits will be uniformly distributed given that we don't have a full-entropy source (which is a significant difference between this method and the NIST beacon — we just made sure that the method had greater entropy than the 40 bits we needed to cover all the possible ticket values). So, your point about testing sample values in advance is well-made.

I'd like to see some justification for using this approach over the myriad of more responsible ways of generating random draws.

6
SamDeere
5y
The draw should have the following properties:

  • The source of randomness needs to be generated independently from both CEA and all possible entrants
  • The resulting random number needs to be published publicly
  • The randomness needs to be generated at a specific, precommitted time in the future
  • The method for arriving at the final number should ideally be open to public inspection

This is because, if we generated the number ourselves, or used a private third party, there are no good guarantees against collusion. Entrants in the lottery could reasonably say 'how do I know that the draw is fair?', especially as the prize pool is large enough that it could incentivise cheating. The future precommitment is important because it guarantees that we can't secretly know the number, and the specific timing is important because it means that we can't just keep waiting for numbers to be generated until we see one that we like the look of.

The method proposed above means that anyone can see how we arrived at the final random number, because it takes a public number that we can't possibly influence, and then hashes it using SHA256, which is well-verified, deterministic (i.e. anyone can run it on their own computer and check our working) and distributes the possible answers uniformly (so everyone has an equal chance of winning).

Typical lottery drawings have these properties too: live broadcast, studio audience (i.e. they are publicly verifiable), balls being mixed and then picked out of a machine (i.e. an easy-to-inspect, uniformly-distributed source of randomness that, because it is public, cannot be gamed by the people running the lottery).

Earthquakes have the nice property that their incidence follows a rough power law distribution (so you know approximately how regularly they'll happen), but the specifics of the location, magnitude, depth or any other properties of any given future earthquake are entirely unpredictable. This means that we know that there will be
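For concreteness, a minimal sketch of the mechanism described above: hash a precommitted public value with SHA-256 and read off the first 10 hex digits (40 bits) as the winning number. The input string below is a placeholder I made up, not the value actually used.

```python
import hashlib

# Placeholder for the precommitted public input (the real draw used an IRIS
# earthquake report; this string is purely illustrative).
public_input = "IRIS earthquake report, event id XXXXXXX, magnitude M.M, ..."

# 10 hex digits = 40 bits, enough to cover the ticket number range mentioned
# in the thread.
digest = hashlib.sha256(public_input.encode("utf-8")).hexdigest()
winning_number = int(digest[:10], 16)  # an integer in [0, 2**40)

print(f"SHA-256 digest: {digest}")
print(f"winning number (first 10 hex digits): {winning_number}")
```

Anyone can rerun this on the published input and check the result; the open question in the thread is whether those 40 bits remain effectively uniform when the input has limited entropy and an adversary can nudge its least-significant digits.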
2
Paul_Christiano
5y
I'm not sure what the myriad of more responsible ways are. If you trust CEA to not mess with the lottery more than you trust IRIS not to change their earthquake reports to mess with the lottery, then just having CEA pick numbers out of a hat could be better. It definitely seems like free-riding on some other public lottery drawing that people already trust might be better.
2
richard_ngo
5y
Can you give some examples of "more responsible" ways? I agree that in general calculating your own random digits feels a lot like rolling your own crypto. (Edit: I misunderstood the method and thought there was an easy exploit, which I was wrong about. Nevertheless at least 1/3 of the digits in the API response are predictable, maybe more, and the whole thing is quite small, so it might be possible to increase your probability of winning slightly by brute force calculating possibilities, assuming you get to pick your own contiguous ticket number range. My preliminary calculations suggest that this method would be too difficult, but I'm not an expert, there may be more sophisticated hacks).