anthonypasq | 8 days ago | on: Jellyfin LLM/"AI" Development Policy
> It's pointless to have a conversation with a human about code they didn't write and don't understand.
This was a problem before LLMs.
acdha | 8 days ago
Scale can be transformational: getting shot was always bad, but when guns lowered the skill requirement and increased lethality, wars became even deadlier. LLMs greatly expand the pool of potential scammers and raise the cost of detecting them.
mort96 | 8 days ago
It was, and those PRs should be banned too...