Search Results

Search found 1575 results on 63 pages for 'pixel'.

  • How do I set up my nvidia graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i. The clue comes from Xorg.0.log:

    [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
    [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This whole thing started from video tearing when watching MythTV recordings. It didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        # InputDevice "Keyboard0" "CoreKeyboard"
        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        # InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "0"
    EndSection

    Section "Files"
        FontPath "unix/:7100"
    EndSection

    # commented out by update-manager, HAL is now used and auto-detects devices
    # Keyboard settings are now read from /etc/default/console-setup
    #Section "InputDevice"
    #    # generated from default
    #    Identifier "Mouse0"
    #    Driver "mouse"
    #    Option "Protocol" "auto"
    #    Option "Device" "/dev/psaux"
    #    Option "Emulate3Buttons" "no"
    #    Option "ZAxisMapping" "4 5"
    #EndSection

    # commented out by update-manager, HAL is now used and auto-detects devices
    # Keyboard settings are now read from /etc/default/console-setup
    #Section "InputDevice"
    #    # generated from default
    #    Identifier "Keyboard0"
    #    Driver "kbd"
    #EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "SAMSUNG"
        HorizSync 26.0 - 81.0
        VertRefresh 24.0 - 75.0
        Option "DPMS"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GT 240"
        Option "TripleBuffer" "1"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "DFP: nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    After a little digging, the question changes slightly, to wit: per Chapter 19 of the nvidia README, "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the 'nvidia-auto-select' mode." The EDID for my HDMI-connected LCD monitor says to use the first device as preferred:

    Prefer first detailed timing : Yes

    Also:

    (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

    (--) NVIDIA(0): Detailed Timings:
    (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
    (--) NVIDIA(0):   Pixel Clock      : 148.50 MHz
    (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
    (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
    (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
    (--) NVIDIA(0):   VSyncEnd, VTotal : 1089, 1125
    (--) NVIDIA(0):   H/V Polarity     : +/+

    This is the actual mode selected (from Xorg.0.log):

    (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
    (--) NVIDIA(0):   Pixel Clock      : 74.18 MHz
    (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
    (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
    (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
    (--) NVIDIA(0):   VSyncEnd, VTotal : 1094, 1124
    (--) NVIDIA(0):   H/V Polarity     : +/+
    (--) NVIDIA(0):   Extra            : Interlaced
    (--) NVIDIA(0):   CEA Format       : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
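
    One commonly tried change in this situation, offered here only as a hedged sketch rather than a verified fix for this particular EDID: request the progressive mode by name in the metamodes line so the driver cannot fall back to the interlaced CEA mode. The "1920x1080_60" mode name below is an assumption based on the mode names appearing in the log above; the driver's verbose log lists the exact names it validated.

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "0"
        # Ask for the progressive 60 Hz mode explicitly instead of nvidia-auto-select.
        Option "metamodes" "DFP: 1920x1080_60 +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection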

    Read the article

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite a difference between 3D and 2D rendering using OpenGL; the techniques are different, and pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?
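
    For the pixel-perfect placement part specifically, the usual starting point is an orthographic projection that maps one world unit to one screen pixel, then drawing sprites at integer coordinates. A minimal sketch using the legacy fixed-function pipeline; the function name and the idea of a top-left origin are illustrative choices, not something any particular reference mandates:

    #include <GL/gl.h>

    // Map one world unit to one screen pixel so sprites drawn at integer
    // coordinates land exactly on pixels (top-left origin, y grows downward).
    void setupPixelPerfect2D(int windowWidth, int windowHeight)
    {
        glViewport(0, 0, windowWidth, windowHeight);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glDisable(GL_DEPTH_TEST);   // typical for pure 2D, layering by draw order
        glEnable(GL_BLEND);         // standard alpha blending for sprites
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }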

    Read the article

  • how to add water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, the top 3/4 of the screen's height, and the remaining 1/4 would be a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach: render the given texture to another texture called the mirror texture (maybe FBOs can help me?), invert the mirror texture (scale it by -1 along Y), render the mirror texture at height = 3/4 of the screen, and add some noise to it, or use a pixel shader and time, e.g. pixel.z = sin(time), to make it wavy. (Tech: C++/OpenGL/GLSL.) Is my approach correct? Is there a better way to do this? Also, can someone tell me whether using FrameBuffer Objects would be the right thing here? Thanks
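
    Yes, an FBO is the usual tool for the render-to-texture step. Below is a minimal OpenGL sketch of just that part, assuming a context with FBO support and a loader such as GLEW; the Y-flip and the per-frame drawing stay in the existing quad/sprite code, and the common wave trick is to offset the texture coordinates in the fragment shader (e.g. uv.x += amplitude * sin(frequency * uv.y + time)) rather than writing to pixel.z.

    #include <GL/glew.h>   // assumed loader for the FBO entry points

    // Create a color texture of the given size and wrap it in an FBO.
    // Render the reflected part of the scene into this FBO each frame, then
    // bind the returned texture and draw it as a Y-flipped quad over the
    // bottom quarter of the screen, with the wave distortion in the shader.
    GLuint createMirrorTarget(int width, int height, GLuint* outFbo)
    {
        GLuint tex = 0, fbo = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // handle the error in real code
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer

        *outFbo = fbo;
        return tex;
    }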

    Read the article

  • GLSL Shader Effects: How to do motion blur, etc?

    - by DevilWithin
    I am not sure how appropriate it is to ask this question, but here it goes. I have a fully 2D environment, with sprites used for the landscape, characters, etc. To make it look more state-of-the-art, I want to implement a motion blur effect, similar to the blur modern FPSs (e.g. Crysis) apply when the camera moves fast. In a sidescroller, the desired effect is a slight blur that gives the impression of fast movement while the camera is moving. If anyone could give me some tips on doing this, I'm assuming in a pixel shader, I'd be grateful. Also, if anyone has other good tips on cool pixel shader effects for 2D games, that would be awesome, such as stylizing post FX in the illustrative style of the previous Prince of Persia. Thanks
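
    A cheap way to get this in a 2D sidescroller is a directional blur along the camera's velocity, applied as a post-process to the rendered frame: each output pixel averages a handful of samples offset along the movement direction. The loop below shows the idea on a CPU-side RGBA8 buffer purely for illustration; in practice the same sampling would live in a fragment shader over the scene texture, with the camera velocity passed in as a uniform.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Blur 'src' (RGBA8, w*h pixels) along (dx, dy) into 'dst' using 'samples' taps.
    void directionalBlur(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
                         int w, int h, float dx, float dy, int samples)
    {
        dst.resize(src.size());
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                int acc[4] = {0, 0, 0, 0};
                for (int s = 0; s < samples; ++s) {
                    // Spread taps from -0.5 to +0.5 of the blur vector.
                    float t = samples > 1 ? float(s) / (samples - 1) - 0.5f : 0.0f;
                    int sx = std::clamp(x + int(dx * t), 0, w - 1);
                    int sy = std::clamp(y + int(dy * t), 0, h - 1);
                    const uint8_t* p = &src[(sy * w + sx) * 4];
                    for (int c = 0; c < 4; ++c) acc[c] += p[c];
                }
                uint8_t* q = &dst[(y * w + x) * 4];
                for (int c = 0; c < 4; ++c) q[c] = uint8_t(acc[c] / samples);
            }
        }
    }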

    Read the article

  • What collision detection approach for top down car game?

    - by nathan
    I have a fairly advanced top-down car game and I use masks to detect collisions. I have the actual designed track (what the player sees) with fancy graphics etc., and two other pictures I use as masks for my collision detection. Each mask has only two colors, white and black, and each frame I check whether a pixel of the car collides with a black pixel of the masks. This approach works, of course, but it's not very flexible: whenever I want to change the look of a track, I have to redraw the mask, which is a real pain. What is the general approach for this kind of game? How can I improve the flexibility of such a mask-based approach?
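
    For reference, the per-frame test described above usually boils down to something like the sketch below: transform a few sample points on the car into track space and look them up in a 1-bit mask. The Mask struct and the choice of testing only the car's four corners are illustrative assumptions, not code from the question.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Mask {                        // 1 = blocked (black pixel), 0 = free
        int w, h;
        std::vector<uint8_t> cells;
        bool blockedAt(int x, int y) const {
            if (x < 0 || y < 0 || x >= w || y >= h) return true;   // off-track
            return cells[y * w + x] != 0;
        }
    };

    // Check the four corners of a rotated car rectangle against the mask.
    bool carHitsWall(const Mask& mask, float cx, float cy,
                     float halfW, float halfH, float angle)
    {
        const float corners[4][2] = {
            {-halfW, -halfH}, {halfW, -halfH}, {halfW, halfH}, {-halfW, halfH}};
        float c = std::cos(angle), s = std::sin(angle);
        for (const auto& p : corners) {
            float x = cx + p[0] * c - p[1] * s;    // rotate, then translate
            float y = cy + p[0] * s + p[1] * c;
            if (mask.blockedAt(int(x), int(y))) return true;
        }
        return false;
    }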

    Read the article

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months which has a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multifractal. I don't understand how to generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into u,v coordinates, determining the pixel position directly from the u,v coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem. First of all, I intend to use "height maps" with a pixel resolution of 192x192 while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just directly displace vertices as I currently do). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with: "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the positionmap (RGB) – this will be used to calculate the normalmap. The fourth channel (A) is used to store the height value itself, to be used in the heightmap." The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels if the vertices alone cannot be used.
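
    To make the mapping concrete, here is one way to fill a patch's height map without ever touching the patch's 16x16 vertices: each of the 192x192 pixels gets its own u,v inside the patch, that u,v is interpolated across the patch's corner positions on the cube face, the result is projected onto the sphere, and that sphere position is what gets fed to the RMF. This is only a sketch of the idea, not the paper's exact code; ridgedMultifractal is a trivial stand-in for the real noise function, and the plain normalize() projection is the simplest cube-to-sphere mapping.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
        return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
    }
    Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }

    // Stand-in for the real 3D ridged multifractal; returns a height in [0, 1].
    float ridgedMultifractal(const Vec3& p) {
        return 0.5f + 0.5f * std::sin(p.x * 3.1f + p.y * 5.3f + p.z * 7.7f);
    }

    // corners[4]: the patch's corner positions on the cube face, in the order
    // (u=0,v=0), (u=1,v=0), (u=0,v=1), (u=1,v=1). res would be 192 here.
    std::vector<float> buildPatchHeightmap(const Vec3 corners[4], int res, float radius)
    {
        std::vector<float> heights(res * res);
        for (int py = 0; py < res; ++py) {
            for (int px = 0; px < res; ++px) {
                float u = px / float(res - 1);        // pixel -> patch-local u,v
                float v = py / float(res - 1);
                Vec3 onCube = lerp(lerp(corners[0], corners[1], u),
                                   lerp(corners[2], corners[3], u), v);
                Vec3 s = normalize(onCube);           // cube-to-sphere (simplest form)
                Vec3 onSphere = {s.x * radius, s.y * radius, s.z * radius};
                heights[py * res + px] = ridgedMultifractal(onSphere);
            }
        }
        return heights;
    }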

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    Broadphase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    Narrowphase: determines whether they collide, and sometimes how; no false positives.
    Resolution: resolves the collision.

    The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy:

    Hitbox intersection: an a-posteriori algorithm with the lowest complexity, but also not very accurate.
    Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate in regards to time, higher complexity.
    Separating axis theorem: used more often; accurate for triangles, but a-posteriori, as it can't find the edge; when taking the last frame into account, it's more stable.
    Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity.
    Spline interpolation: a-priori, even more accurate than linear rays, with even more complexity.

    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, when splines, and is there anything better?
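
    Since SAT is the baseline being compared against, it may help to see how small it is for two convex polygons: project both polygons onto every edge normal and report a collision only if the projected intervals overlap on all axes. A minimal 2D sketch, ignoring contact points and the swept/continuous variants discussed above:

    #include <vector>

    struct Vec2 { float x, y; };

    static void projectOntoAxis(const std::vector<Vec2>& poly, Vec2 axis,
                                float& outMin, float& outMax)
    {
        outMin = outMax = poly[0].x * axis.x + poly[0].y * axis.y;
        for (const Vec2& p : poly) {
            float d = p.x * axis.x + p.y * axis.y;
            if (d < outMin) outMin = d;
            if (d > outMax) outMax = d;
        }
    }

    // True if convex polygons a and b (vertex lists) intersect.
    bool satIntersect(const std::vector<Vec2>& a, const std::vector<Vec2>& b)
    {
        const std::vector<Vec2>* polys[2] = {&a, &b};
        for (const auto* poly : polys) {
            for (size_t i = 0; i < poly->size(); ++i) {
                Vec2 p0 = (*poly)[i];
                Vec2 p1 = (*poly)[(i + 1) % poly->size()];
                Vec2 axis = {-(p1.y - p0.y), p1.x - p0.x};   // edge normal
                float minA, maxA, minB, maxB;
                projectOntoAxis(a, axis, minA, maxA);
                projectOntoAxis(b, axis, minB, maxB);
                if (maxA < minB || maxB < minA)   // gap found on this axis
                    return false;                 // a separating axis exists
            }
        }
        return true;                              // no gap on any axis: overlap
    }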

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-Buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication?

    Second, I want to store the normal in view space. All lighting in my engine is currently done in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.
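
    On the first point, the common trick is to store linear view-space depth and rebuild the view-space position from it with a multiply-add instead of a full inverse-matrix transform: scale the pixel's NDC x,y by the depth and the tangent of the field of view. A sketch of just the math, written on the CPU for clarity and assuming a symmetric perspective projection looking down -Z; in the shader the same three lines replace the matrix multiply.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // ndcX, ndcY in [-1, 1]; linearDepth = distance along -Z stored in the G-Buffer.
    Vec3 reconstructViewPos(float ndcX, float ndcY, float linearDepth,
                            float fovY, float aspect)
    {
        float tanHalfFov = std::tan(fovY * 0.5f);
        return {
            ndcX * linearDepth * tanHalfFov * aspect,   // x scales with depth
            ndcY * linearDepth * tanHalfFov,            // y scales with depth
            -linearDepth                                // camera looks down -Z
        };
    }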

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging": take the output of the first pass and then send it in for the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn diagram in the shader, but I need to detect overlap. I can currently detect one or the other but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; I can currently put it in either one or the other. Below is how it looks in the shader:

    Texture2D shaderTexture;
    SamplerState SampleType;

    //////////////
    // TYPEDEFS //
    //////////////
    struct PixelInputType
    {
        float4 position : SV_POSITION;
        float2 tex0 : TEXCOORD0;
        float2 tex1 : TEXCOORD1;
        float4 color : COLOR;
    };

    ////////////////////////////////////////////////////////////////////////////////
    // Pixel Shader
    ////////////////////////////////////////////////////////////////////////////////
    float4 main(PixelInputType input) : SV_TARGET
    {
        float4 textureColor0;
        float4 textureColor1;

        // Sample the pixel color from the texture using the sampler at this texture coordinate location.
        textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
        textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

        if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
            textureColor0 = textureColor1;

        return textureColor0;
    }

    Here is the calling code (that needs to be modified):

    m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
    m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
    m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    m_d3dContext->IASetInputLayout(m_inputLayout.Get());
    m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
    m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
    m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
    m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
    m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
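
    For what it's worth, the ping-pong itself happens on the CPU side rather than in HLSL: you need two offscreen render targets, and each pass binds one as the render target while the other is bound as the shader resource, then they swap. A hedged Direct3D 11 sketch of that swap only; the rtv/srv pairs are assumed to view two textures created elsewhere, and the draw call is left as a comment.

    #include <d3d11.h>
    #include <utility>

    // Run two passes, each reading what the previous pass wrote.
    // rtv[i] and srv[i] are views of the same texture i.
    void pingPong(ID3D11DeviceContext* ctx,
                  ID3D11RenderTargetView* rtv[2],
                  ID3D11ShaderResourceView* srv[2])
    {
        ID3D11ShaderResourceView* nullSrv = nullptr;
        int src = 0, dst = 1;

        for (int pass = 0; pass < 2; ++pass)
        {
            // A texture can't be bound as input and output at once, so unbind first.
            ctx->PSSetShaderResources(0, 1, &nullSrv);
            ctx->OMSetRenderTargets(1, &rtv[dst], nullptr);   // write to dst
            ctx->PSSetShaderResources(0, 1, &srv[src]);       // read from src

            // ... issue the draw for this pass here (e.g. a fullscreen quad) ...

            std::swap(src, dst);                              // ping-pong for the next pass
        }
    }

    In the second pass the shader can then sample what the first pass wrote into the R channel and test it against the G circle per pixel, which is where the overlap becomes detectable.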

    Read the article

  • Zooming options terminology

    - by Mark
    I've come up with 4 different ways to fit an image inside a viewing region, but I'm having trouble coming up with names for them. Perhaps someone can suggest some?

    1. Fit the image in the viewing region, but do not enlarge it if the image is smaller.
    2. Size the image so it fits snugly inside the viewing region (enlarging if necessary); the image is as large as possible while still fitting within the viewing region.
    3. Size the image so that it fills the entire viewing region; the image will be the same size as or bigger than the viewing region.
    4. 1:1 ratio; 1 pixel in the image corresponds to 1 pixel on screen.

    All zooming options maintain aspect ratio. Stretching is just ugly, so it's not an option :)
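
    For what it's worth, the four options differ only in how the scale factor is chosen, which is easy to see in code. A small sketch; the function names are placeholders, since naming is exactly what is being asked for.

    #include <algorithm>

    // Option 2 ("fit"/"contain"): largest scale that keeps the whole image visible.
    float fitScale(float imgW, float imgH, float viewW, float viewH) {
        return std::min(viewW / imgW, viewH / imgH);
    }

    // Option 3 ("fill"/"cover"): smallest scale that leaves no empty space.
    float fillScale(float imgW, float imgH, float viewW, float viewH) {
        return std::max(viewW / imgW, viewH / imgH);
    }

    // Option 1: like fit, but never enlarge a smaller image.
    float fitNoEnlargeScale(float imgW, float imgH, float viewW, float viewH) {
        return std::min(1.0f, fitScale(imgW, imgH, viewW, viewH));
    }

    // Option 4 ("actual size" / 1:1): the scale is always exactly 1.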

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere at a high order of tessellation during a visibility pass onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (render target), so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges were visible, but it was not foolproof. How do I increase visibility? In a screenshot of a scene after radiosity is applied, the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. Edit: fixed the problem by sampling the item buffer with 8 points.

    Read the article

  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kinda stuck. What I want to do: an acceptable, large (8000 x 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great, it just shouldn't look like it was made in MS Paint with the line and paint bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?
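
    One programmer-friendly alternative to hand-drawing is to generate the terrain: pick a smooth height curve across the 8000-pixel width (a couple of summed sine waves is enough for rolling hills), then fill each column with grass near the surface and mud below it. A small sketch that writes such a level into an RGBA buffer; the colors and wave parameters are arbitrary choices meant to be tweaked, not a recommended look.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Fill an RGBA8 buffer (w*h*4 bytes) with rolling hills: sky, grass lip, mud.
    std::vector<uint8_t> generateHills(int w, int h)
    {
        std::vector<uint8_t> img(w * h * 4);
        for (int x = 0; x < w; ++x) {
            // Smooth surface height from two sine waves (in pixels from the top).
            float surface = h * 0.55f
                          + 120.0f * std::sin(x * 0.0015f)
                          + 40.0f  * std::sin(x * 0.006f + 1.7f);
            for (int y = 0; y < h; ++y) {
                uint8_t r, g, b;
                if (y < surface) {                  // sky
                    r = 150; g = 200; b = 255;
                } else if (y < surface + 12.0f) {   // grass lip
                    r = 60; g = 170; b = 60;
                } else {                            // mud
                    r = 110; g = 80; b = 50;
                }
                uint8_t* p = &img[(y * w + x) * 4];
                p[0] = r; p[1] = g; p[2] = b; p[3] = 255;
            }
        }
        return img;
    }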

    Read the article

  • Oversizing images to produce better looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing used to be a big no-no of web design (not to mention improper compression formats). Hence, for years I have been sticking to the policy where images (PNG or JPG) are resized on the server to match, pixel-wise, the resolution they will have in the rendered page. Recently, I hastily designed an HTML draft with oversized images, using inline CSS such as width:123px and height:123px to resize them. To my (slight) surprise, the page turned out to look much better that way. Indeed, with better screen resolutions, some people (like me) tend to browse with some level of zoom (125% or even 150%), otherwise fonts are just too small on-screen. If the image is strictly sized, the enlarged image appears blurry (a pixel interpolation effect), but if the image is oversized the result is much better. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but are there cases where it would be considered acceptable? Especially if the extra page weight is small anyway.

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

    struct GBuffer
    {
        float4 Depth    : DEPTH0;  // depth render target
        float4 Normal   : COLOR0;  // normal render target
        float4 Diffuse  : COLOR1;  // diffuse render target
        float4 Specular : COLOR2;  // specular render target
    };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I made a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something ala Manual-Reference-Counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain(), and decreased on release(). Thus the Resource abstract class was created. Any subclass of this will obtain the retain() and release() implementations for free. When its count hits 0 (nobody is using this class), it will call the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the Garbage Collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - which is the main construction block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game. Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not making Resource an interface? Because I'd have to re-implement retain() and release(). I am avoiding this in virtue of not writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?

    Read the article

  • HTML5 clicking objects in canvas

    - by Dave
    I have a function in my JS that gets the user's mouse click on the canvas. Now let's say I have a random shape on my canvas (really it's a PNG image, which is rectangular), but I don't want to include any of its transparent (alpha) space. My issue is with the case where I click somewhere and the click lands on a pixel of one of the images. The first problem is how you work out that the clicked pixel belongs to an object on the map (and not the grass tiles behind it). Secondly, if I clicked said image, and each image contains its own unique information, how do you process the click to load the correct data? Note that I don't use libraries; I personally prefer the raw method. Relying on libraries doesn't teach me much, I find.

    Read the article

  • How to implement Fog Of War with an shader?

    - by Cambrano
    Okay, I'm creating an RTS game and want to implement an Age of Empires-like fog of war (FOW). That means a tile (or pixel) can be:

    0% transparent (unexplored)
    50% transparent black (explored but not in view range)
    100% transparent (explored and in view range)

    RTS means I'll have many explorers (NPCs, buildings, ...). So I have a 2D array of bytes, byte[,] explored, and the byte value corresponds to the transparency. The question is, how do I pass this array to my shader? I think it is not possible to pass an entire array. So: what technique should I use to let my shader know whether a pixel/tile is visible or not?
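
    The usual technique is not to pass the array at all, but to upload it as a single-channel texture that the fog shader samples like any other texture: one texel per tile, updated whenever the explored state changes. Since the question's code is C#-flavored, the OpenGL/C++ version below is only a sketch of the idea in a different API (in XNA the equivalent would be a small Texture2D filled with SetData and sampled in the effect).

    #include <GL/glew.h>   // assumed loader for the GL_R8 format and friends
    #include <cstdint>

    // Upload the explored[] grid (one byte per tile, 0..255 = fog opacity)
    // as a single-channel texture the fog shader can sample with the tile's UV.
    GLuint uploadFogTexture(const uint8_t* explored, int tilesX, int tilesY)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed bytes
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, tilesX, tilesY, 0,
                     GL_RED, GL_UNSIGNED_BYTE, explored);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // soft fog edges
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        return tex;
    }

    // When a single tile changes, update just that texel instead of re-uploading:
    // glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1, GL_RED, GL_UNSIGNED_BYTE, &value);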

    Read the article

  • How to implement the light trails for a tron game?

    - by Link
    Well, I was creating a TRON-style game, but had an issue with creating the actual light trails for the game. What I'm doing currently is I have an array the same size as my window in pixels, implemented like this:

    int* collision[800][600];

    Then when the bike goes over a certain pixel, it is marked with a 1 for traveled on. However, what is the most efficient way to create a working light trail display? I tried to do something like this:

    int i, j;
    for(i=0; i<800; i++)
        for(j=0; j<600; j++)
            if(*collision[i][j] == 1)
                Image::applySurface(i, j, trailSurface, gameScreen);

    But it isn't working properly; it just fills the whole screen with a sprite instead. What's a better/faster/working way to do this?
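
    For reference, the snippet above declares an 800x600 array of uninitialized int pointers and then dereferences them, which gives undefined results; a plain grid of bytes is enough for a trail flag. A minimal sketch of that kind of grid, kept self-contained by taking the drawing call as a callback (inside which the question's Image::applySurface would be invoked):

    #include <cstdint>
    #include <functional>
    #include <vector>

    const int kWidth = 800, kHeight = 600;

    // One byte per pixel: 0 = empty, 1 = trail. Zero-initialized, no pointers.
    std::vector<uint8_t> trail(kWidth * kHeight, 0);

    inline void markTrail(int x, int y) { trail[y * kWidth + x] = 1; }
    inline bool hasTrail(int x, int y)  { return trail[y * kWidth + x] != 0; }

    // Draw only the cells the bikes have actually visited.
    // drawCell(x, y) would call e.g. Image::applySurface(x, y, trailSurface, gameScreen).
    void drawTrails(const std::function<void(int, int)>& drawCell)
    {
        for (int y = 0; y < kHeight; ++y)
            for (int x = 0; x < kWidth; ++x)
                if (hasTrail(x, y))
                    drawCell(x, y);
    }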

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now this would work fine if I didn't have to worry about SamplerStates. I have created my array of textures and can select a texture based on the alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

    // shader model 2.0 (DX9)
    texture subTextures[255];

    SamplerState MeshTextureSampler(int i)
    {
        Texture = (subTextures[i]);
    };

    float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
    {
        float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
        int i = SelectedColor.a;
        texture SelectedTx = subTextures[i];
        return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
    }

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, even though this one is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++:

    int start = 0;
    for(int i=0; i<amount_of_pixels; ++i)
    {
        if(pixels[i] == border)
        {
            start = i;
            break;
        }
    }

    When it finds one, it starts the marching squares algorithm and finds the contour of whatever object the pixel belongs to. Let's say I have an image where everything except the color white is a border, and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
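
    One standard way to extend the scan to every blob is to keep a "visited" mask of pixels that already belong to a traced contour: keep scanning after the first hit, and only start marching squares from border pixels that haven't been visited and that sit on the top edge of a blob. The sketch below shows only that outer loop; the tracer itself is passed in as a callback standing in for the existing marching squares routine, which is assumed to return the contour's pixel indices.

    #include <cstdint>
    #include <functional>
    #include <vector>

    // The existing marching squares tracer: image, row width, starting pixel index
    // -> indices of all pixels on that blob's contour.
    using ContourTracer =
        std::function<std::vector<int>(const std::vector<uint8_t>&, int, int)>;

    std::vector<std::vector<int>> findAllContours(const std::vector<uint8_t>& pixels,
                                                  int width, uint8_t border,
                                                  const ContourTracer& traceContour)
    {
        std::vector<std::vector<int>> contours;
        std::vector<uint8_t> visited(pixels.size(), 0);   // pixels already on a contour

        for (int i = 0; i < (int)pixels.size(); ++i) {
            bool isBorder = pixels[i] == border;
            // Only start on a pixel whose upper neighbour is background, i.e. the
            // top edge of a blob (or of a hole); such pixels always lie on a contour.
            bool topEdge = (i < width) || pixels[i - width] != border;
            if (!isBorder || !topEdge || visited[i])
                continue;
            std::vector<int> contour = traceContour(pixels, width, i);
            for (int idx : contour)
                visited[idx] = 1;                         // don't start this blob again
            contours.push_back(std::move(contour));
        }
        return contours;
    }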

    Read the article

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information: its 3D coordinates (x, y, z), and a list of pointers to some of the other vertices with which it's connected by edges. Right now, I'm doing perspective projection with the projection plane being XY and the eye placed somewhere at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so I can use the formula of the plane which contains the polygon to find the Z-coordinate? Are my data structures correct? Do I need to store something else entirely in order for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
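
    On the depth question, once a polygon has been rasterized (i.e. its projected outline has been filled scan line by scan line, so the covered pixels are known), the depth at a pixel comes from intersecting the ray "eye through the screen point" with the polygon's plane. A small sketch of just that step, under the setup described above (eye at (0, 0, d), projection plane z = 0); v0, v1, v2 are any three non-collinear vertices of the planar polygon, and the divide-by-zero case when the plane is seen edge-on is left unhandled for brevity.

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                a.z * b.x - a.x * b.z,
                                                a.x * b.y - a.y * b.x}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Depth (z) of the polygon's plane under screen point (sx, sy).
    double depthAtPixel(double sx, double sy, double d, Vec3 v0, Vec3 v1, Vec3 v2)
    {
        Vec3 n = cross(sub(v1, v0), sub(v2, v0));   // plane normal
        Vec3 eye = {0.0, 0.0, d};
        Vec3 dir = {sx, sy, -d};                    // eye -> projected point (sx, sy, 0)
        double t = dot(n, sub(v0, eye)) / dot(n, dir);   // ray parameter at the plane
        return eye.z + t * dir.z;                   // z of the hit point
    }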

    Read the article

  • Create Adventure Game Scene/Room/Backdrop from Real Photo

    - by Lyuben
    Is there a suitable software or a good tutorial for creating 2D rooms/scenery for adventure games from real photos? Is it possible to achieve good results by using photos, or the hand-drawn style will always be the best choice? Thank you! --- EDIT --- I want to clarify that I'm particularly interested in the art creation process, not on the environment in which to build games. I'm writing the game in Java for Android, but I don't think it matters. Also, I'm not trying to decide if the game will have photo realistic rooms or not - I want to achieve 2d pixelated, old-school style background scenes and I wonder if this can be made from photos, because I cannot draw them myself. For example, can I shoot a scene with my camera and then make it look something like the image in the following link: PIXEL ART FOREST I know that I cannot get the same quality as an absolutely hand-drawn pixel, but I'm looking for some decent technology/tutorial/software to make them somewhat similar.

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering: Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well-known theory. However, I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to only ray-trace exactly what you need to. Could this be fast enough for real-time effects?

    Rendered physics: I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels across a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method. This could be extended in a few ways: for larger (non-bullet) objects you render a larger portion of the screen; if you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well; and you could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
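
    The "rendered physics" idea is essentially classic color-ID picking, which is straightforward to try. A hedged OpenGL/C++ sketch of the readback half: every object is drawn with a unique flat color in a dedicated id pass, then the pixel under the crosshair is read back and decoded into an object id. The helper names here are made up for the sketch, not an existing API.

    #include <GL/gl.h>
    #include <cstdint>

    // Pack an object id into an RGB color for the id pass (up to 2^24 objects).
    void encodeId(uint32_t id, uint8_t rgb[3])
    {
        rgb[0] = (id >> 16) & 0xFF;
        rgb[1] = (id >> 8) & 0xFF;
        rgb[2] = id & 0xFF;
    }

    // After rendering the id pass from the gun's point of view, read the pixel
    // under the crosshair and decode which object (if any) the shot would hit.
    uint32_t pickAt(int x, int y)
    {
        uint8_t pixel[4] = {0, 0, 0, 0};
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);   // blocking readback
        return (uint32_t(pixel[0]) << 16) | (uint32_t(pixel[1]) << 8) | pixel[2];
    }

    Note that the id pass has to be drawn without lighting, blending or anti-aliasing, so the colors come back exactly as they were written.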

    Read the article
