Merging photo textures (from calibrated cameras) projected onto geometry
- by freakTheMighty
I am looking for papers or algorithms for merging projected photo textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and known geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry?
I can think of a few attributes one may seek to optimize, including minimizing the angle between the surface normal and the view direction, minimizing the distance of the camera from the surface, and maximizing some measure of local image sharpness.
The question is how these criteria get combined, and whether there are well-established existing solutions.
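For concreteness, here is a minimal sketch of the kind of per-view score I have in mind, combining the normal/view-direction angle with a distance penalty (the linear combination and the weight values are just guesses on my part, not from any paper):

```python
import math

def view_score(cam_pos, point, normal, w_angle=1.0, w_dist=0.1):
    """Heuristic score for how well a camera sees a surface point.

    Higher is better. Combines the cosine of the angle between the
    surface normal and the direction to the camera with a distance
    penalty. The weights are made-up knobs for illustration only.
    """
    view = tuple(c - p for c, p in zip(cam_pos, point))
    dist = math.sqrt(sum(v * v for v in view))
    view_dir = tuple(v / dist for v in view)
    cos_angle = sum(n * v for n, v in zip(normal, view_dir))
    if cos_angle <= 0.0:  # camera is behind the surface: unusable
        return 0.0
    return w_angle * cos_angle - w_dist * dist

# Pick the best camera for a patch by maximizing the score.
cameras = [(0.0, 0.0, 5.0), (10.0, 0.0, 1.0)]   # hypothetical camera centers
point = (0.0, 0.0, 0.0)                          # patch center on the surface
normal = (0.0, 0.0, 1.0)                         # patch normal
best = max(range(len(cameras)),
           key=lambda i: view_score(cameras[i], point, normal))
```

A sharpness term (e.g. local gradient energy in the projected image region) would presumably be added as a third weighted component, but I don't know what weighting is standard.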