Are these non-standard applications of rendering practical in games?

Posted by maul on Game Development

I've recently gotten into 3D graphics and came up with a few "tricky" rendering techniques. Unfortunately I don't have the time to work on them myself, but I'd like to know whether these are known methods and whether they can be used in practice.

Hybrid rendering

Now, I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known approach. However, I had the following idea: you could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In the same pass you also render the "important" objects, but with a special shader that simply marks the pixels they cover, using a special color or some stencil/depth-buffer trickery. In the second pass you read back the result of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would let you ray-trace exactly the pixels that need it and nothing more. Could this be fast enough for real-time effects?
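To make the two-pass structure concrete, here is a minimal CPU-side C++ sketch, under the assumption that pass 1 has already filled a per-pixel mask wherever an "important" object was rasterized. Everything here (the Framebuffer struct, traceRay, the camera) is hypothetical scaffolding, not a real engine or graphics API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical scaffolding: a tiny framebuffer plus a stand-in ray tracer.
struct Color { float r, g, b; };

struct Framebuffer {
    int width, height;
    std::vector<Color>   color;  // output of the rasterization pass
    std::vector<uint8_t> mask;   // 1 where an "important" object covered the pixel
    Framebuffer(int w, int h)
        : width(w), height(h), color(w * h), mask(w * h, 0) {}
};

// Stand-in for a real ray tracer: shade the primary ray through pixel (x, y).
Color traceRay(int x, int y) {
    // ...build a camera ray, intersect the scene, shade recursively...
    return Color{0.0f, 0.0f, 0.0f};
}

// Pass 2: ray-trace only the pixels that the rasterization pass marked.
void hybridSecondPass(Framebuffer& fb) {
    for (int y = 0; y < fb.height; ++y) {
        for (int x = 0; x < fb.width; ++x) {
            const size_t i = static_cast<size_t>(y) * fb.width + x;
            if (fb.mask[i]) {
                fb.color[i] = traceRay(x, y);  // overwrite (or blend with) the rasterized result
            }
        }
    }
}
```

On real hardware the mask would more likely live in a stencil buffer or an extra render target, and the loop would be a fragment or compute shader dispatch, but the control flow is the same: the ray tracer only ever starts from marked pixels.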

Rendered physics

I'm specifically talking about bullet physics: the intersection of a very small object (a point/bullet) travelling along a straight line with other, relatively slow-moving, fairly static objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet), with every object drawn in a unique, solid color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply read back that central pixel, and its color tells you what you hit. This gives pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. As far as I know, traditional OpenGL "picking" is a similar method.
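For what it's worth, this is essentially the classic color-ID picking trick. A hedged sketch of the readback side, assuming an OpenGL context is current and the scene has already been rendered from the gun's point of view with each object drawn in a unique flat (unlit) color; idToColor, colorToId and queryHitObject are hypothetical helpers, while glReadPixels is the real readback call:

```cpp
#include <GL/gl.h>  // assumes a current OpenGL context and a finished ID pass
#include <cstdint>

// Pack a small object ID into a flat RGB color (room for 2^24 - 1 objects).
void idToColor(uint32_t id, uint8_t rgb[3]) {
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >> 8)  & 0xFF;
    rgb[2] =  id        & 0xFF;
}

uint32_t colorToId(const uint8_t rgb[3]) {
    return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | rgb[2];
}

// Read back the single center pixel and decode which object the bullet hit.
uint32_t queryHitObject(int viewportWidth, int viewportHeight) {
    uint8_t pixel[3] = {0, 0, 0};
    glReadPixels(viewportWidth / 2, viewportHeight / 2, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return colorToId(pixel);  // reserve ID 0 (black) for "nothing hit"
}
```

In practice you would render the ID pass into a small off-screen framebuffer with the projection aimed along the bullet's direction rather than reusing the main viewport, and keep lighting, anti-aliasing and blending disabled so the colors come back exactly as written.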

This could be extended in a few ways:

  • For larger (non-bullet) objects you render a larger portion of the screen (a sketch of this follows the list).
  • If you put a plane with a special color into the scene exactly where the bullet will be at the end of the current frame, the same method also covers the traditional slow-moving, iterative physics test.
  • You could simulate objects that the bullet can pass through (with reduced velocity) using alpha blending or a similar trick.
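For the first extension, here is a small variation on the sketch above (again hypothetical, and assuming the same ID-color pass): read back an n x n block around the center instead of a single pixel and collect every distinct ID in it, which approximates the cross-section of a larger projectile:

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <set>
#include <vector>

// Same decode helper as in the 1x1 sketch.
static uint32_t colorToId(const uint8_t rgb[3]) {
    return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | rgb[2];
}

// Read back an n x n block around the viewport center and return every
// distinct object ID that appears in it.
std::set<uint32_t> queryHitObjects(int viewportWidth, int viewportHeight, int n) {
    std::vector<uint8_t> pixels(static_cast<size_t>(n) * n * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // RGB rows are not 4-byte aligned in general
    glReadPixels(viewportWidth / 2 - n / 2, viewportHeight / 2 - n / 2,
                 n, n, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<uint32_t> ids;
    for (int i = 0; i < n * n; ++i) {
        ids.insert(colorToId(&pixels[static_cast<size_t>(i) * 3]));
    }
    return ids;
}
```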

So are these techniques in use anywhere, and/or are they practical at all?
