I'm currently building a fragment shader which uses several textures to compute the final pixel color.
The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color.
The problem I've got is that the textures are being bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered image is "zoomed in" compared to the original texture resolution.
Removing the side effects is a complex task, and only results in "average" rendering. I was thinking: all my problems seem to come from the "default" bilinear filtering applied to this input data. I can't simply switch to GL_NEAREST either, since that would produce "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation myself.
For this to work, I would need the input data at its "natural" resolution (which means 4 samples), plus the relative position between the sampled points.
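To make the idea concrete, here is a rough sketch of what I have in mind (GLSL 1.20-style; the names uData, uDataSize and manualBilinear are just placeholders of mine, and I assume the sampler's filter is set to GL_NEAREST so that texture2D returns raw texel values):

    #version 120

    uniform sampler2D uData;     // "input data" texture, filter set to GL_NEAREST
    uniform vec2      uDataSize; // its dimensions in texels

    varying vec2 vTexCoord;

    // Fetch the 4 surrounding texels at their native resolution and blend them
    // with the fractional position, i.e. redo the bilinear filter by hand so
    // that any other formula could be substituted for the final mix().
    vec4 manualBilinear(sampler2D tex, vec2 uv, vec2 texSize)
    {
        vec2 texelPos = uv * texSize - 0.5;                // position in texel space
        vec2 f        = fract(texelPos);                   // relative position between samples
        vec2 base     = (floor(texelPos) + 0.5) / texSize; // centre of the lower-left texel

        vec4 s00 = texture2D(tex, base);
        vec4 s10 = texture2D(tex, base + vec2(1.0, 0.0) / texSize);
        vec4 s01 = texture2D(tex, base + vec2(0.0, 1.0) / texSize);
        vec4 s11 = texture2D(tex, base + vec2(1.0, 1.0) / texSize);

        return mix(mix(s00, s10, f.x), mix(s01, s11, f.x), f.y);
    }

    void main()
    {
        gl_FragColor = manualBilinear(uData, vTexCoord, uDataSize);
    }

As written this only reproduces what the hardware filter already does (and the behaviour at the edges depends on the wrap mode); the point is that the 4 raw samples and the fraction f would then be available for my own formula.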
Is that possible, and if so, how?
[EDIT] Since I posted this question, I found this entry, which seems to (mostly) answer my needs:
http://www.gamerendering.com/2008/10/05/bilinear-interpolation/
One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument. It seems there is no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, since that part is not under my responsibility, and it translates into added complexity for others.
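Concretely, the part I would like to get rid of is this single declaration from the sketch above (hypothetical name again), because it has to be fed by whoever binds the data texture:

    // The only external dependency of the manual approach: as far as I can
    // tell, the shader cannot discover the texture size transparently in the
    // GLSL version I'm targeting, so it has to come from the application
    // (e.g. set with glUniform2f whenever a differently sized data texture is bound).
    uniform vec2 uDataSize;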