
You can usually coax GPT into a finer degree of calibration on a specific task by using more logic-engaging tokens. For example, if you said, "we are going to play a game where you count how many words we have used in the conversation, including both my text and your text. Each time the conversation passes 200 words, you must report the word count by saying COUNT: followed by the number of words, to gain one point..."
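
For concreteness, here is a minimal sketch of how that counting-game prompt could be wired up as a system message with the OpenAI Python SDK. The model name and the chat() helper are illustrative assumptions, not part of the original comment.

    # Minimal sketch: send the counting-game instructions as a system prompt
    # and keep the running conversation so the model sees every turn.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "We are going to play a game where you count how many words we have "
        "used in the conversation, including both my text and your text. "
        "Each time the conversation passes 200 words, you must report the "
        "word count by saying COUNT: followed by the number of words, to "
        "gain one point."
    )

    messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    def chat(user_text: str) -> str:
        """Send one user turn and return the assistant's reply."""
        messages.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat model works
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        return reply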

Specifying structured output, along with words like "must", "when", "each", and "if", tends to cue modes of processing that resemble more logical thinking. Framing it as a game and adding scoring also works well for me, perhaps because it guides the ultimate end of the model's prediction toward the thing that will make me say "correct, 1 point".
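
And a hedged sketch of the scoring side: parse the model's "COUNT: <n>" report, check it against a locally computed word count, and answer with the feedback that awards the point. The regex, the tolerance, and the exact feedback strings are assumptions for illustration.

    import re

    def local_word_count(messages) -> int:
        """Count words across every user and assistant turn so far."""
        return sum(
            len(m["content"].split())
            for m in messages
            if m["role"] in ("user", "assistant")
        )

    def score_reply(reply: str, messages, tolerance: int = 5) -> str:
        """Return the feedback to send back as the next user turn."""
        match = re.search(r"COUNT:\s*(\d+)", reply)
        if not match:
            return "You did not report a count this turn."
        reported = int(match.group(1))
        actual = local_word_count(messages)
        if abs(reported - actual) <= tolerance:
            return "correct, 1 point"
        return f"incorrect, the real count is {actual}"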



Yup, I gave it 10+ tasks to do after each message, like incrementing counters, etc. It's going strong. Now I'll see if it stays accurate past 100+ messages.
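
As a purely speculative sketch, a multi-task setup like that might look something like the following system prompt; the specific fields and point scheme are assumptions, not the commenter's actual prompt.

    # Speculative sketch of "many tasks after each message": a system prompt
    # that asks for a structured status block at the end of every reply.
    MULTI_TASK_PROMPT = """\
    After each of your replies, append a status block with these fields:
    TURN: the number of replies you have sent so far, incremented by 1
    WORDS: the total number of words both of us have used so far
    LAST_USER_WORDS: the word count of my most recent message
    Each correct field earns one point; each incorrect field loses one point.
    """

    messages = [{"role": "system", "content": MULTI_TASK_PROMPT}]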


Yup, that did work really well. I'll try to make it do many tasks at the same time and see if that still works.



