Note: this is an attempt to engage with and interpret a piece of legislation on the usage of AI. I don't have a strong opinion on this yet and expect it to be controversial, which is why I preferred Question Mode.

 

In the AI Act, i.e., the EU's regulatory framework for the usage of AI-related technologies, it is stated that:

The following artificial intelligence practices shall be prohibited:

the placing on the market, putting into service or use of an AI system that
exploits any of the vulnerabilities of a specific group of persons due to their
age, physical or mental disability, in order to materially distort the behaviour of
a person pertaining to that group in a manner that causes or is likely to cause
that person or another person physical or psychological harm. 

                                                                               (Title II, Article 5, p. 43, English version)

I'll set up one interpretation of this statement in debate form: 

Question: should AI writers be prohibited in education? 

Claim: we can stretch this statement to apply to the usage of AI writing products by underage students for their assignments. This technology exploits students' inability to make a fully informed and thoughtful decision as to what would be beneficial for their intellectual development and education. Therefore, the practice should be prohibited.

Counterclaim: the AI system is not exploiting anyone's vulnerability, as the notion of vulnerability should not be taken to include one's proneness to dishonesty or cheating. Therefore, AI writers should not be prohibited, and students should be held accountable for cheating when they use AI writing models to compose their assignments.

 

Feel free to continue the debate in the comments section. 


Answers

It would seem counterproductive, at least to policymakers who think AI is helpful, to place any kind of widespread ban on essay-writing AI, or to somehow regulate ChatGPT and others to ensure students don't use their platforms nefariously. Regulations won't keep up with the times, and won't be well understood by lawmakers and enforcers.

As a student, I have found ChatGPT has made me vastly more productive (especially as a student researcher in a field I don't know much about). This sort of technology seems here to stay, so it seems useful for students to learn to incorporate the tool into their lives. I'm not old enough to remember, but I assume a similar debate took place over search engines.

There are probably myriad ways educational institutions can pick up on cheating. Even if AI is not used to classify text as AI-generated directly, institutions could use it to perform linguistic analysis of irregularities and writing patterns, like the analysis used against the Unabomber at his trial. Children especially, I assume, would have distinctive writing patterns, though I am not qualified to speak on any of this. Cheaters tend (in a self-reinforcing cycle) not to be very smart, so I would expect schools to find a way around their use of AI.

Overall, it seems more plausible and productive for schools to regulate this themselves. Where there is worry about academic misconduct, market-based solutions will emerge, as they already have for plagiarism checking.
