I am making a sprite-based game. I get my images in a ridiculously large resolution and scale them down to the desired sprite size (for example, 64x64 pixels) before converting them to a game resource, so when I draw a sprite inside the game, I don't have to scale it.
However, if I rotate this small sprite inside the game (speaking engine-agnostically), some destination pixels get interpolated, and the sprite looks smudged.
This of course depends on the rotation angle and the interpolation algorithm, but regardless: there is simply not enough data to correctly sample every destination pixel.
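To illustrate the smudging, here is a minimal pure-Python sketch (all names are mine, not from any engine) that rotates a tiny black/white checkerboard by 45° with bilinear sampling; destination pixels whose source coordinates land exactly between a black and a white texel come out as 0.5 grey:

```python
import math

def bilinear(img, x, y):
    """Sample img (list of rows, intensities 0..1) at fractional coords."""
    h, w = len(img), len(img[0])
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def texel(ix, iy):  # clamp-to-edge addressing
        return img[min(max(iy, 0), h - 1)][min(max(ix, 0), w - 1)]
    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bot = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def rotate(img, angle):
    """Rotate about the centre by inverse-mapping each destination pixel."""
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    c, s = math.cos(angle), math.sin(angle)
    return [[bilinear(img, cx + c * (x - cx) + s * (y - cy),
                      cy - s * (x - cx) + c * (y - cy))
             for x in range(w)] for y in range(h)]

# 4x4 checkerboard rotated 45 degrees: sampling halfway between
# opposite texels averages them to 0.5 grey -- the smudging
board = [[(x + y) % 2 for x in range(4)] for y in range(4)]
rotated = rotate(board, math.pi / 4)
print(rotated[1][1])  # -> 0.5 (was a hard 0/1 edge before rotation)
```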
So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, then downscale all the resulting variations and put them in an atlas. This has the advantage of being quite simple to implement, but naively consumes twice as much atlas space per rotation: each rotated sprite must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, and (assuming square sprites) the square atlas cell bounding that circle has twice the area of the original rectangle.
It also has the disadvantage of offering only a predefined set of rotations, which may or may not be acceptable depending on the game.
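To put numbers on that space cost, a quick sketch (helper names are mine) of the per-rotation atlas cell size and the total pixel budget:

```python
import math

def atlas_cell_side(sprite_side: int) -> int:
    # Any rotation of a square sprite fits in a circle whose diameter is
    # the sprite's diagonal, so the axis-aligned atlas cell needs a side
    # of sprite_side * sqrt(2), i.e. twice the original area.
    return math.ceil(sprite_side * math.sqrt(2))

def atlas_pixels(sprite_side: int, num_rotations: int) -> int:
    side = atlas_cell_side(sprite_side)
    return side * side * num_rotations

print(atlas_cell_side(64))   # -> 91 (vs. 64 for the unrotated sprite)
print(atlas_pixels(64, 16))  # -> 132496, vs. 16 * 64 * 64 = 65536
```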
So the other choice would be to store a larger image, and rotate and downscale while rendering, which leads to my question.
What is the optimal size for this sprite? Optimal meaning the smallest size beyond which a larger source image has no effect on the rendered result.
This surely depends on the target sprite size and on the set of desired rotations. By "no effect" I mean no data loss greater than 1/256, which is the minimum representable color difference in an 8-bit channel.
I am looking for a theoretical general answer to this problem, because trying a bunch of sizes may be okay, but is far from optimal.
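For reference, the brute-force experiment I'd like to replace with theory looks roughly like this toy one-dimensional version (the synthetic intensity function and the 1/256 convergence test are my own illustration): box-average a continuous signal into one output pixel at increasing supersampling factors, and stop when extra source resolution changes the pixel by less than 1/256.

```python
def output_pixel(factor: int) -> float:
    # Box-average `factor` midpoint samples of a synthetic intensity
    # ramp f(x) = x^2 on [0, 1] -- a stand-in for one screen pixel
    # covering part of a high-resolution source image.
    return sum(((i + 0.5) / factor) ** 2 for i in range(factor)) / factor

# Double the source resolution until the pixel stops changing by 1/256.
factor, prev = 2, output_pixel(2)
while True:
    factor *= 2
    cur = output_pixel(factor)
    if abs(cur - prev) < 1 / 256:
        break
    prev = cur
print(factor)  # -> 16: past this, extra resolution is invisible here
```

For a real sprite this would have to be run per rotation angle over every output pixel, which is exactly the trial-and-error I'd like to avoid.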