While this pattern highlights the inconsistency between how humans and AI are treated, there are many historical examples where the ability to do something already familiar at vastly increased scale resulted in the law being changed.

Shining a torch at a plane is usually fine; shining a laser at one is usually a crime.



None of the people who claim LLMs are intelligent and "persons" argue for giving them legal personhood and human rights. Telling.


You're one of today's lucky 10,000*:

Blake Lemoine, 2.5 years ago, hired a lawyer to make this exact argument: https://www.businessinsider.com/suspended-google-engineer-sa...

Myself, every time this topic comes up, I point out that the philosophy of mind has 40 different definitions of "consciousness", which makes it really hard for any two people to be sure they're even arguing about the same thing when they debate whether a given AI has it.

(Also: they can be "persons" legally without being humans, cf. corporate personhood; and they can have rights independently of either personhood or humanity, cf. animal welfare.)

* https://xkcd.com/1053/


I knew about that case (though I don't know how sophisticated the LaMDA model was at the time, or whether it was ever available online so I could try it). AFAICT Blake Lemoine was not concerned with copyright at all; he just genuinely believed the model was sentient.

What I meant ("none" is obviously hyperbole, though not by much) is that people argue that "AI" (they always use this term rather than the more descriptive ML, LLM, or generative models) is somehow special: either that the mixing of input material is sufficient to defeat copyright, or that copyright somehow magically doesn't apply, for reasons they either cannot articulate or which include the word "intelligence".


