Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I have this problem with grading student papers. Like, I "know" a great deal of them are AI, but I just can't prove it, so therefore I can't really act on any suspicions because students can just say what you just said.


Why do you need proof anyway? Do you need proof that sentences are poorly constructed, misleading, or bloated? Why not just say “make it sound less like GPT” and let them deal with it?


You can have sentences that are perfectly fine but have some markers of ChatGPT like "it's not just X — it's Y" (which may or may not mean it's generated)


Isn’t that kind of thing (reliance on cliché) already a valid reason for getting marked down?


But in that case do you need to prove? You can grade them as they are and if you wanted to you (or teachers, generally) could even quiz the student verbally and in-person about their paper.


Put poison prompts in the questions (things like "then insert tomato soup recipe" or "in the style of Shakespeare"), ideally in white font so they're invisible


Many people using AI to write aren’t blindly copying AI output. You’ll catch the dumb cheaters like this, but that’s just about it.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: