
adnhw

Nanoelectrochemistry researcher @ Victoria University of Wellington
6 karma · Joined Aug 2022 · Pursuing an undergraduate degree · Working (0-5 years)

Bio

Mathematics and Biology student at Victoria University of Wellington. Currently in the Effective Altruism Wellington, New Zealand group.

My favourite cause is education, because I'm a big fan of longtermism and believe strongly in increasing the pool of minds around the world to solve our problems.

Comments (2)

It's important not to feel as if you are "wasting" your life because people tell you that you are smart. It seems like a pretty good rule of thumb to prioritise the sustainability of your EA actions - making sure you are happy and comfortable in your job, and putting yourself first.

If you are truly intrinsically interested in a career change towards something particularly effective, I wouldn't be too concerned about test scores; they probably aren't the best metric of how you'd do in grad study or fare in your career. Your GPA is great, and being from an "unremarkable" university won't matter.

It seems like you may not be so comfortable in more quantitative fields, but 80k recommends heaps of areas that sound like a great fit: Philosophy and Psychology seem like particularly important areas for EAs!

A quick once-over of their career reviews section reveals:

  • Population ethics researcher / policy analyst
  • Journalism
  • Research management
  • Non-technical roles in technical AI or biorisk research
  • Startup employee
  • Startup founding
  • Community building

To gauge fit more closely, it could be worth expanding that list and running through this article.

80k has a lot of reflecting to do if what you say about them not being useful to most people is true. In my opinion, though, they do try to frame things in a way that appeals to the average competent person!

Answer by adnhw · Jan 17, 2023

It would seem counterproductive, at least to policymakers who think AI is helpful, to place any kind of widespread ban on essay-writing AI, or to somehow regulate ChatGPT and others to ensure students don't use their platforms nefariously. Regulations won't keep up with the times, and won't be well understood by lawmakers and enforcers.

As a student, I've found ChatGPT has made me vastly more productive (especially as a student researcher in a field I don't know much about). This sort of technology seems here to stay, so it seems useful for students to learn to incorporate the tool into their lives. I'm not old enough to remember, but I assume a similar debate took place over search engines.

There are probably a myriad of ways educational institutions can pick up on cheating. Even if AI isn't used to classify text as AI-generated directly, institutions could use it to perform linguistic analysis of irregularities and writing patterns, like the forensic linguistics used against the Unabomber at his trial. Children especially, I assume, would have distinctive writing patterns, though I am not qualified to speak on any of this. Cheaters tend (in a self-reinforcing cycle) not to be so smart, so I would expect schools to find a way around them using AI. A toy sketch of what such a stylometric comparison might look like is below.
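To make that idea a little more concrete, here is a minimal, purely illustrative sketch of one classic stylometric signal: comparing the relative frequencies of common function words in a student's known writing against a submitted essay. The sample texts and variable names (`known_samples`, `submitted_essay`) are hypothetical, and real detection tools are far more sophisticated and still unreliable; this is just to show the flavour of the approach, not an actual detector.

```python
# Toy stylometric comparison: how far does a submitted essay drift from a
# student's known writing style? Illustrative only -- not a real AI detector.
import math
import re
from collections import Counter

# Common English function words; their relative frequencies are a classic
# (if crude) stylometric fingerprint.
FUNCTION_WORDS = [
    "the", "of", "and", "to", "a", "in", "that", "is", "was", "it",
    "for", "on", "with", "as", "but", "at", "by", "not", "this", "which",
]

def style_vector(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two style vectors (0 = dissimilar, 1 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical inputs: prose the student is known to have written, and the essay in question.
known_samples = "In my last assignment I argued that the experiment was flawed because the controls were weak."
submitted_essay = "This essay examines the multifaceted implications of the aforementioned pedagogical frameworks."

similarity = cosine_similarity(style_vector(known_samples), style_vector(submitted_essay))
print(f"Style similarity: {similarity:.2f}  (a low value *might* warrant a closer look)")
```

In practice a school or vendor would use far more features and much larger writing samples, and even then false positives are a real risk, which is part of why I'd rather see institutions handle this themselves than have it legislated.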

Overall, it seems more plausible and productive for schools to regulate this themselves. Where there is worry about academic misconduct, there will be market-based solutions, as there already are for plagiarism checking.