EOF being -1 means that a check like if (*ptr == EOF) can never fire when *ptr can only take on the values 0..255 [i.e., char is unsigned], as opposed to -128..127 [i.e., char is signed]. Whether char is signed or unsigned is implementation-defined.
When fgetc succeeds, the int it returns has a value based on interpreting the byte as an unsigned char. On a platform with an 8-bit char, that means it’s in the range 0 to 255. This is required by the C spec regardless of whether char is signed or unsigned. Meanwhile, EOF is required to be negative. Thus, you can always distinguish the cases as long as you look at the original int return value rather than casting to char.
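A minimal sketch of the two patterns, assuming f is an already-opened FILE * (not anyone's exact code):

    int c;                                 /* int, not char: all 257 values survive */
    while ((c = fgetc(f)) != EOF)
        putchar(c);                        /* here c is guaranteed to be 0..255 */

    char ch;                               /* broken: the assignment narrows to char */
    while ((ch = fgetc(f)) != EOF)         /* never true if char is unsigned; also
                                              true for a real 0xFF byte if signed */
        putchar(ch);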
Fun fact: This approach causes trouble on obscure embedded platforms where char and int are the same size (and therefore not every unsigned char value fits in a signed int). Such platforms are allowed by the C standard as freestanding implementations that don't implement the full standard library, but they can't implement fgetc conformingly. https://stackoverflow.com/questions/3860943/can-sizeofint-ev...
Every byte in a file, as represented in an int, takes on a value of 0..255. fgetc doesn't return a char, it returns an int, which means it can return any of 257 possible values: -1..255. If you try to represent the return value of fgetc as a char, two of those values collapse onto the same representation, namely 255 and -1. The difference between ARM and x86 is that on ARM (where plain char is unsigned), the -1 stored as 255 converts back to 255 when promoted to int for the comparison, so it never equals EOF and the loop never terminates; on x86 (where plain char is signed), a 255 converts back to -1, so the comparison matches EOF and the loop stops on any 0xFF byte in the file, indistinguishably from the real end of file.
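A tiny illustration of that collapse; which branch fires is implementation-defined, since it depends on whether plain char is signed:

    #include <stdio.h>

    int main(void) {
        char ch = (char)255;   /* the byte 0xFF after being squeezed into a plain char */
        if (ch == EOF)         /* ch is promoted back to int for the comparison */
            puts("plain char is signed here: 0xFF reads back as -1, same as EOF");
        else
            puts("plain char is unsigned here: 0xFF reads back as 255, never EOF");
        return 0;
    }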
Representing binary data as unsigned char (as opposed to char or signed char) is the norm, however.
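For instance, a sketch of a typical read into an unsigned char buffer, again assuming an already-open FILE *f:

    unsigned char buf[4096];
    size_t n = fread(buf, 1, sizeof buf, f);   /* each element lands as 0..255 */
    for (size_t i = 0; i < n; i++)
        printf("%02x ", buf[i]);               /* no sign-extension surprises when printing */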