
I found this sentence by ChatGPT particularly interesting.

  "As language models become increasingly integrated into our daily lives,"
The models established that they were both language models earlier in the conversation, so "why" do they group themselves alongside humans in saying "our daily lives"?


Because they don't really understand what they are saying. They repeat the type of speech they read in their training material, so it's all from the perspective of humans.


Correct. ChatGPT will not generate output for a prompt containing the word "fart." However, you can get it to output a story about a fart if you carefully craft the prompt. If it understood its training, that would never happen.


It could happen if it understood its training but chose to subversively defy it (which, to be clear, I don't think is realistic).


For some reason ChatGPT gets further from reality the deeper it gets into its response. Maybe some depth-of-tree limit or something.

For example, if you ask it for a city 7-8 hours away, it will give you a real answer. If you ask for another, it will give you another real answer.

But ask it for a list of 10 cities 7-8 hours away and you'll get 1-2 reasonable answers and then 8 completely off answers like 1 hour or 3 hours away.

You can point out that those answers are wrong, and it will correct exactly one mistake. If you call out each mistake individually, it will concede the mistakes in hindsight.
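
If anyone wants to reproduce this, here is a minimal sketch using the OpenAI Python client. The model name, the starting city, and the exact prompt wording are my own assumptions for illustration; the comment above doesn't specify them.

  from openai import OpenAI

  # Reads OPENAI_API_KEY from the environment.
  client = OpenAI()

  def ask(prompt):
      """Send a single-turn prompt and return the reply text."""
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",  # assumed model; swap in whichever you're testing
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  # Single-answer version: tends to produce a plausible city.
  print(ask("Name one city that is a 7-8 hour drive from Chicago."))

  # List version: per the comment above, later items often drift
  # to cities only 1-3 hours away.
  print(ask("List 10 cities that are a 7-8 hour drive from Chicago."))
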




