Can you elaborate on the qualitative difference you see between imagination-as-you-understand-it and an internal simulation of a nonexistent (maybe future, maybe never-to-happen) state of an agent's environment or inputs? There's an obvious quantitative difference: their environment is much simpler than ours, and their "imagination" is limited to the near future (unlike ours). But conceptually, where do you see the biggest difference?
Originality doesn't seem to be the boundary, since even this simple model appears to imagine world states that it never saw, never will see, and that possibly aren't even reachable in its environment, i.e. it is "original" in some sense.
If I look at the common understanding of "imagination" and at myself, what can I imagine? I can imagine 'what-if' scenarios of my future, e.g. what the outcome could be if I do this or that, or if something in particular happens; I can imagine scenarios of my past, i.e. "replay" memories; I can imagine counterfactual scenarios that never happened and never will; I can imagine various senses, e.g. how a particular melody (which I'm "constructing" right now, iteratively, using this very imagination to guide my iterations) might sound when played by a band, or how something I'm drawing might look when it's completed. All of these seem like variations on essentially the same thing: an internal simulation (a model) generating data about various hypothetical states.
This might be used to evaluate different actions, but it might also be used simply to experience these states (e.g. daydreaming) or for something else entirely; that's more a question of how the agent wants to use the "imagination module" than a property of the imagination/internal-simulation model itself.
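The "evaluate different actions" use of such an internal simulation can be made concrete. Here's a minimal, entirely hypothetical sketch (the toy world, the model, and the scoring are all my own invention, not anything from the paper): an agent rolls out imagined futures with an internal transition model and picks the action sequence whose imagined outcome it likes best.

```python
# Minimal sketch: an agent "imagines" by rolling out an internal model
# of its environment, then picks the plan whose imagined future is best.
# The world is a 1-D position; everything here is invented for illustration.

GOAL = 7

def internal_model(state, action):
    """The agent's (assumed) model of how the world responds.
    It can simulate states the agent has never actually visited."""
    return state + action  # toy dynamics: an action shifts the position

def imagine_rollout(state, actions):
    """Generate a hypothetical trajectory without touching the real world."""
    trajectory = [state]
    for a in actions:
        state = internal_model(state, a)
        trajectory.append(state)
    return trajectory

def score(trajectory):
    """How desirable is this imagined future? Closer to GOAL is better."""
    return -abs(trajectory[-1] - GOAL)

def plan(state, candidate_plans):
    """Evaluate each candidate action sequence purely in imagination."""
    return max(candidate_plans, key=lambda p: score(imagine_rollout(state, p)))

plans = [[1, 1, 1], [2, 2, 2], [-1, -1, -1]]
best = plan(0, plans)
print(best)  # the plan whose imagined end state lands nearest the goal
```

Note that nothing in `imagine_rollout` cares how its output is used: the same machinery could generate trajectories just to "experience" them, which is the daydreaming point above.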
You're right, but I think this paper is the first step on a (potentially very, very) long road to building machines that could "imagine no possessions".