Search Results

Search found 3789 results on 152 pages for 'pixel tracking'.


  • What collision detection approach for a top-down car game?

    - by nathan
    I have a fairly advanced top-down car game and I use masks to detect collisions. I have the actual designed track (what the player sees) with fancy graphics etc., and two other pictures I use as masks for my collision detection. Each mask has only two colors, white and black, and each frame I check whether a pixel of the car collides with a black pixel of the masks. This approach works, of course, but it's not really flexible. Whenever I want to change the look of a track, I have to redraw the mask, and it's a real pain. What is the general approach for this kind of game? How can I improve the flexibility of such a mask-based approach?
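
    A minimal sketch of the per-pixel mask test described above, assuming 8-bit grayscale masks where black means solid (all names here are hypothetical):

        #include <cstdint>
        #include <vector>

        // Hypothetical grayscale mask: values < 128 are treated as black (solid).
        struct Mask {
            int width, height;
            std::vector<uint8_t> pixels;                 // row-major, width * height bytes
            bool solidAt(int x, int y) const {
                if (x < 0 || y < 0 || x >= width || y >= height) return true; // out of bounds = wall
                return pixels[y * width + x] < 128;
            }
        };

        // Check every solid pixel of the car's footprint against the track mask.
        bool carHitsWall(const Mask& track, const Mask& car, int carX, int carY) {
            for (int y = 0; y < car.height; ++y)
                for (int x = 0; x < car.width; ++x)
                    if (car.solidAt(x, y) && track.solidAt(carX + x, carY + y))
                        return true;
            return false;
        }

    As for flexibility, one common alternative is to stop authoring the mask as an image at all: store the track as geometry (a polyline or spline per wall) and either test against that directly or rasterize the mask from it automatically whenever the art changes.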


  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months, which has a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged-multifractal algorithm. For now I can only displace the vertices of the patches directly, using the output from the ridged multifractal. I don't understand how to generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map.

    The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into (u, v) coordinates, determining the pixel position directly from the (u, v) coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but a few other things may further complicate my problem. First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just directly displace vertices as I currently do). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

    "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the heightmap."

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and what positions are sampled for the RMF to generate the pixels if vertices alone cannot be used.
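
    For what it's worth, the usual reading of that paper is that the height map is generated per pixel, not per vertex: each of the 192x192 pixels gets its own (u, v) inside the patch's extents on the cube face, is mapped to the sphere, and that sphere position is fed to the RMF. The 16x16 vertices never enter into it. A sketch under that assumption, with rmf() as a stand-in for the ridged-multifractal sampler:

        #include <cmath>
        #include <vector>

        struct Vec3 { double x, y, z; };

        // Stand-in for the ridged-multifractal sampler; the real one goes here.
        double rmf(const Vec3& p) {
            return 0.5 + 0.5 * std::sin(10.0 * p.x) * std::cos(10.0 * p.y);
        }

        // Map a point on the +Z cube face to the unit sphere by normalization.
        Vec3 cubeToSphere(double cx, double cy) {
            double len = std::sqrt(cx * cx + cy * cy + 1.0);
            return { cx / len, cy / len, 1.0 / len };
        }

        // Fill a res x res height map for a patch covering [u0,u1] x [v0,v1]
        // of the face (face coordinates in [0,1]). Every pixel gets its own
        // sphere position, so the 16x16 vertex grid is irrelevant here.
        std::vector<float> buildHeightMap(double u0, double v0,
                                          double u1, double v1, int res) {
            std::vector<float> heights((size_t)res * res);
            for (int j = 0; j < res; ++j)
                for (int i = 0; i < res; ++i) {
                    double u = u0 + (u1 - u0) * i / (res - 1);
                    double v = v0 + (v1 - v0) * j / (res - 1);
                    Vec3 s = cubeToSphere(u * 2.0 - 1.0, v * 2.0 - 1.0);
                    heights[(size_t)j * res + i] = (float)rmf(s);
                }
            return heights;
        }

    The vertices then sample this map (or are displaced by the same function), which keeps the mesh and the normal map consistent.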


  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    - Broadphase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    - Narrowphase: determines whether they collide, and sometimes how; no false positives.
    - Resolution: resolves the collision.

    The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy:

    - Hitbox intersection: an a-posteriori algorithm with the lowest complexity, but not very accurate.
    - Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate with regard to time, higher complexity.
    - Separating axis theorem: used more often, accurate for triangles; however a-posteriori, as it can't find the contact edge on its own; taking the last frame into account makes it more stable.
    - Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity.
    - Spline interpolation: a-priori, even more accurate than linear rays, even more complexity.

    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, when splines, and is there anything better?
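
    For reference, a minimal SAT overlap test for two convex polygons - the a-posteriori check described above, sketched in C++:

        #include <limits>
        #include <vector>

        struct V2 { float x, y; };

        // Project a polygon onto an axis and return the [lo, hi] interval.
        static void project(const std::vector<V2>& poly, V2 axis, float& lo, float& hi) {
            lo = std::numeric_limits<float>::max();
            hi = std::numeric_limits<float>::lowest();
            for (const V2& p : poly) {
                float d = p.x * axis.x + p.y * axis.y;
                if (d < lo) lo = d;
                if (d > hi) hi = d;
            }
        }

        // Two convex polygons overlap iff no edge normal of either one separates them.
        bool satOverlap(const std::vector<V2>& a, const std::vector<V2>& b) {
            const std::vector<V2>* polys[2] = { &a, &b };
            for (const std::vector<V2>* poly : polys) {
                for (size_t i = 0; i < poly->size(); ++i) {
                    const V2& p = (*poly)[i];
                    const V2& q = (*poly)[(i + 1) % poly->size()];
                    V2 axis = { -(q.y - p.y), q.x - p.x };   // edge normal; unit length not needed
                    float loA, hiA, loB, hiB;
                    project(a, axis, loA, hiA);
                    project(b, axis, loB, hiB);
                    if (hiA < loB || hiB < loA)
                        return false;                        // separating axis found
                }
            }
            return true;                                     // no separating axis exists
        }

    The common rule of thumb: use the cheapest test your gameplay tolerates, and only pay for a-priori methods (rays, swept shapes) on fast movers that would otherwise tunnel.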


  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-Buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8), and a sphere-map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is currently done in world space, and I want to do the lighting in view space to speed up my lighting pass. In short, I want an optimized lighting pass for my deferred engine.
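
    One common answer to the first point: store linear view-space depth instead of full position, and reconstruct the position along a per-pixel view ray - a couple of multiplies instead of a matrix multiply. The per-pixel math, sketched in C++ rather than HLSL for brevity:

        struct Vec3 { float x, y, z; };

        // Reconstruct view-space position from the pixel's NDC coordinates and
        // its stored *linear* view-space depth (not post-projection depth).
        // tanHalfFovY and aspect come straight from the projection settings.
        Vec3 reconstructViewPos(float ndcX, float ndcY, float linearDepth,
                                float tanHalfFovY, float aspect) {
            float rx = ndcX * tanHalfFovY * aspect;  // view ray at z = 1
            float ry = ndcY * tanHalfFovY;
            return { rx * linearDepth, ry * linearDepth, linearDepth };
        }

    Storing normals in view space then fits naturally: transform light positions and directions into view space once per frame on the CPU, and the whole lighting pass stays in one space.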


  • Zooming options terminology

    - by Mark
    I've come up with 4 different ways to fit an image inside a viewing region, but I'm having trouble coming up with names for them. Perhaps someone can suggest some?

    1. Fit the image in the viewing region, but do not enlarge it if it is smaller.
    2. Size the image so it fits snugly inside the viewing region (enlarging if necessary) - the image is as large as possible while still fitting within the viewing region.
    3. Size the image so that it fills the entire viewing region - the image will be the same size as or bigger than the viewing region.
    4. 1:1 ratio - one pixel in the image corresponds to one pixel on screen.

    All zooming options maintain aspect ratio. Stretching is just ugly, so it's not an option :)
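
    Common names for these are "scale down to fit" (1), "fit" or "contain" (2), "fill" or "cover" (3) - as in CSS object-fit - and "actual size" or "100%" (4). A sketch of the scale factor each mode produces:

        #include <algorithm>

        enum class ZoomMode { ScaleDownToFit, Fit, Fill, ActualSize };

        // Uniform scale for an image of (iw, ih) inside a region of (vw, vh),
        // always preserving aspect ratio.
        double zoomScale(ZoomMode mode, double iw, double ih, double vw, double vh) {
            double fit  = std::min(vw / iw, vh / ih);  // largest scale that still fits
            double fill = std::max(vw / iw, vh / ih);  // smallest scale that covers
            switch (mode) {
                case ZoomMode::ScaleDownToFit: return std::min(fit, 1.0); // never enlarge
                case ZoomMode::Fit:            return fit;
                case ZoomMode::Fill:           return fill;
                case ZoomMode::ActualSize:     return 1.0;
            }
            return 1.0;
        }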


  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it through a second pass. In my example I want to draw to the R channel, then draw to the G channel, and produce a simple Venn diagram in the shader, for which I need to detect overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; currently I can detect one circle or the other, but not the overlap, so I can put the texture in either one but not in the intersection. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0     : TEXCOORD0;
            float2 tex1     : TEXCOORD1;
            float4 color    : COLOR;
        };

        // Pixel shader
        float4 main(PixelInputType input) : SV_TARGET
        {
            // Sample the pixel color from the texture at each set of coordinates.
            float4 textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            float4 textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (which needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
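
    For comparison, a minimal sketch of the CPU side of the ping-ponging, assuming two render-target/SRV view pairs over two textures created elsewhere (all names are hypothetical):

        #include <d3d11.h>

        // Draw into one buffer while sampling the other, then swap roles.
        // rtv[i] and srv[i] are views of the same two textures.
        void renderTwoPasses(ID3D11DeviceContext* ctx,
                             ID3D11RenderTargetView* rtv[2],
                             ID3D11ShaderResourceView* srv[2]) {
            ID3D11ShaderResourceView* nullSRV = nullptr;

            // Pass 1: draw the first circle into buffer 0 (texture 0 unbound,
            // since a resource cannot be both input and output at once).
            ctx->OMSetRenderTargets(1, &rtv[0], nullptr);
            ctx->PSSetShaderResources(0, 1, &nullSRV);
            // ... bind pass-1 pixel shader and draw ...

            // Pass 2: sample buffer 0 while drawing into buffer 1, so the shader
            // can compare against the first pass's output and detect overlap.
            ctx->OMSetRenderTargets(1, &rtv[1], nullptr);
            ctx->PSSetShaderResources(0, 1, &srv[0]);
            // ... bind pass-2 pixel shader and draw ...

            // Unbind before buffer 0 is used as a render target again.
            ctx->PSSetShaderResources(0, 1, &nullSRV);
        }

    The overlap test then happens in the second pass's shader: it reads what the first pass wrote (e.g. the R channel) and combines it with what it is about to write (the G channel).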


  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere, at a high order of tessellation, during a visibility pass onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (render target)... so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified, and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096)... this helped, and more edges were visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. Update: fixed the problem by sampling the item buffer with 8 points.


  • how to add a water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, the top 3/4 of the screen. The remaining 1/4 would be a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:

    - Render the given texture to another texture, the mirror texture (maybe FBOs can help me?).
    - Invert the mirror texture (scale it by -1 along Y).
    - Render the mirror texture at height = 3/4 of the screen.
    - Add some sense of noise to it, OR, using a pixel shader and time, set pixel.z = sin(time) to make it wavy.

    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone please tell me whether using FrameBuffer Objects would be the right thing here? Thanks


  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kinda stuck. What I want to do: an acceptable, large (8000 x 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great; it just shouldn't look like it was made in MS Paint with the line and paint-bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?
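
    One way to avoid drawing it by hand entirely: generate the hill silhouette procedurally as a heightline, then fill mud below it and lay a strip of grass along the top. A tiny sketch with layered sines (all constants are arbitrary):

        #include <cmath>
        #include <vector>

        // Rolling-hills heightline: a few sine octaves are plenty for a test level.
        std::vector<float> hillHeights(int width) {
            std::vector<float> h(width);
            for (int x = 0; x < width; ++x) {
                float fx = (float)x;
                h[x] = 500.0f
                     + 120.0f * std::sin(fx * 0.0009f)         // broad hills
                     +  40.0f * std::sin(fx * 0.0041f + 2.0f)  // medium bumps
                     +  12.0f * std::sin(fx * 0.0177f + 5.0f); // small detail
            }
            return h;
        }

    For each column x, every pixel below h[x] is mud and the top dozen or so rows of the mud get the grass tile - no pencil tool required, and re-rolling the constants gives a new level for free.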


  • Oversizing images to produce better looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing used to be a big no-no of web design (not to mention improper compression formats). Hence, for years I have been sticking to the policy where images (PNG or JPG) are resized on the server to match, pixel-wise, the resolution they will have in the rendered page. Recently, I hastily designed an HTML draft with oversized images, using inline CSS styles such as width:123px and height:123px to resize them. To my (slight) surprise, the page turned out to look much better that way. Indeed, with better screen resolutions, some people (like me) tend to browse with some level of zoom (125% or even 150%), otherwise fonts are just too small on screen. Then, if the image is strictly sized, the enlarged image appears blurry (a pixel-interpolation effect), but if the image is oversized the result is much better. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but are there cases where it would be considered acceptable? Especially since the extra page weight is small anyway.


  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I asked a question about Java classes implementing methods from two sources (kind of like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance.

    Before tackling the actual problem, some background info about a particular mechanic in this framework. It is a simple game-development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it; Sprites are an example of this. I decided to implement something a la manual reference counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain() and decreased on release(). Thus the Resource abstract class was created: any subclass of it obtains the retain() and release() implementations for free, and when its count hits 0 (nobody is using the instance), it calls the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the garbage collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - the main building block, as it provides info such as position, size, rotation, etc.

    Now, two classes are used often in my game: Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? As such, they need to extend Resource. But this, of course, can't be done: Sprites ARE Nodes, hence they must subclass Node. But heck, they are resources too. Why not make Resource an interface? Because I'd have to re-implement retain() and release() everywhere, and I'm trying to avoid writing the same code over and over (remember that multiple classes need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially forward to the methods of Resource - I'd still be writing the same code over and over! What is your advice in this situation, then?


  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows:

        struct GBuffer
        {
            float4 Depth    : DEPTH0; // depth render target
            float4 Normal   : COLOR0; // normal render target
            float4 Diffuse  : COLOR1; // diffuse render target
            float4 Specular : COLOR2; // specular render target
        };

    This works fine for flat surfaces, but I'm trying to implement relief mapping, which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z-culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?


  • HTML5 clicking objects in canvas

    - by Dave
    I have a function in my JS that gets the user's mouse click on the canvas. Now let's say I have a random shape on my canvas (really it's a PNG image, which is rectangular), but I don't want to include any alpha space. My issue: suppose I click somewhere, and the click lands on a pixel of one of the images. The first question is how you work out whether the clicked pixel belongs to an object on the map (and not the grass tiles behind it). Secondly, if I did click said image, and each image contains its own unique information, how do you process the click to load the correct data? Note: I don't use libraries; I personally prefer the raw method, as I find that relying on libraries doesn't teach me much.


  • How to implement Fog Of War with a shader?

    - by Cambrano
    Okay, I'm creating an RTS game and want to implement an Age-of-Empires-like fog of war (FOW). That means a tile (or pixel) can be:

    - 0% transparent (unexplored)
    - 50% transparent black (explored but not in view range)
    - 100% transparent (explored and in view range)

    RTS means I'll have many explorers (NPCs, buildings, ...). So I have a 2D array of bytes, byte[,] explored, where the byte value correlates with the transparency. The question is: how do I pass this array to my shader? I don't think it is possible to pass an entire array, so what technique should I use to let my shader know whether a pixel/tile is visible or not?
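
    One standard answer: don't pass the array as a shader constant at all - upload it as a single-channel texture and let the shader sample it, multiplying the scene color by the sampled value. A sketch of the upload with OpenGL (the byte values directly encode the transparency; XNA's equivalent would be a Texture2D with an 8-bit format plus SetData):

        #include <GL/gl.h>
        #include <cstdint>
        #include <vector>

        // Upload the explored[] grid as an 8-bit one-channel texture.
        // 0 = unexplored, 128 = explored but out of view, 255 = in view range.
        // GL_R8/GL_RED assume a GL 3.0+ context (older GL would use GL_LUMINANCE).
        GLuint createFogTexture(const std::vector<uint8_t>& explored, int w, int h) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed bytes
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                         GL_RED, GL_UNSIGNED_BYTE, explored.data());
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // soft FOW edges
            return tex;
        }

    When visibility changes, refresh the changed region with glTexSubImage2D rather than recreating the texture.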


  • How to implement light trails for a Tron game?

    - by Link
    Well, I was creating a TRON-style game but had an issue with creating the actual light trails for the game. What I'm doing currently: I have an array the same size as my window in pixels, declared like this:

        int* collision[800][600];

    Then when the bike passes over a certain pixel, that pixel is marked with a 1 for traveled on. However, what is the most efficient way to create a working light-trail display? I tried something like this:

        int i, j;
        for (i = 0; i < 800; i++)
            for (j = 0; j < 600; j++)
                if (*collision[i][j] == 1)
                    Image::applySurface(i, j, trailSurface, gameScreen);

    But it isn't working properly - it just fills the whole screen with a sprite instead. What's a better/faster/working way to do this?
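
    As declared, collision is an 800x600 array of uninitialized pointers, and *collision[i][j] dereferences garbage - that alone explains the broken output. A corrected sketch (Surface, applySurface, trailSurface and gameScreen are the question's own identifiers, declared here only so the fragment is self-contained):

        struct Surface;                                   // opaque framework type
        extern Surface *trailSurface, *gameScreen;
        namespace Image { void applySurface(int x, int y, Surface* src, Surface* dst); }

        // A plain 2D array of ints, not pointers; zero-initialized at file scope.
        static int collision[800][600];

        void drawTrails() {
            for (int i = 0; i < 800; ++i)
                for (int j = 0; j < 600; ++j)
                    if (collision[i][j] == 1)             // draw only visited pixels
                        Image::applySurface(i, j, trailSurface, gameScreen);
        }

    Even fixed, scanning 480,000 cells per frame is wasteful: a cheaper pattern is to keep a persistent trail surface, paint each newly visited pixel into it once, and blit that whole surface every frame.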


  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures, depending on the alpha value of the original texture, for each pixel. This would work fine if I didn't have to worry about sampler states. I have created my array of textures and can select a texture based on the alpha value of the pixel, but how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample, since this is shader model 2.0:

        // shader model 2.0 (DX9)
        texture subTextures[255];

        SamplerState MeshTextureSampler(int i)
        {
            Texture = (subTextures[i]);
        };

        float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
        {
            float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
            int i = SelectedColor.a;
            texture SelectedTx = subTextures[i];
            return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
        }


  • Create Adventure Game Scene/Room/Backdrop from Real Photo

    - by Lyuben
    Is there suitable software, or a good tutorial, for creating 2D rooms/scenery for adventure games from real photos? Is it possible to achieve good results by using photos, or will the hand-drawn style always be the best choice? Thank you!

    --- EDIT ---

    I want to clarify that I'm particularly interested in the art-creation process, not in the environment in which to build games. I'm writing the game in Java for Android, but I don't think it matters. Also, I'm not trying to decide whether the game will have photorealistic rooms or not - I want to achieve 2D pixelated, old-school-style background scenes, and I wonder if this can be made from photos, because I cannot draw them myself. For example, can I shoot a scene with my camera and then make it look something like the image in the following link: PIXEL ART FOREST. I know that I cannot get the same quality as absolutely hand-drawn pixel art, but I'm looking for some decent technology/tutorial/software to make them somewhat similar.
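
    The usual pipeline for this is mechanical enough to automate most of the way: downscale the photo aggressively (which creates the chunky pixels), quantize every pixel to a small fixed palette (which creates the old-school look), then do a hand pass over the worst spots. A sketch of the quantization step, with an arbitrary palette:

        #include <climits>
        #include <cstdint>
        #include <vector>

        struct RGB { uint8_t r, g, b; };

        // Snap each pixel of an already-downscaled image to its nearest palette entry.
        void quantizeToPalette(std::vector<RGB>& img, const std::vector<RGB>& palette) {
            for (RGB& p : img) {
                int bestDist = INT_MAX;
                size_t best = 0;
                for (size_t i = 0; i < palette.size(); ++i) {
                    int dr = p.r - palette[i].r;
                    int dg = p.g - palette[i].g;
                    int db = p.b - palette[i].b;
                    int d = dr * dr + dg * dg + db * db;  // squared RGB distance
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                p = palette[best];
            }
        }

    Upscale the result with nearest-neighbor filtering to keep the pixels crisp; common image editors (GIMP, Photoshop) expose the same downscale/posterize steps without any code.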


  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, even though this is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++:

        int start = 0;
        for (int i = 0; i < amount_of_pixels; ++i)
        {
            if (pixels[i] == border)
            {
                start = i;
                break;
            }
        }

    When it finds one, it starts the marching squares algorithm and finds the contour of whatever object the pixel belongs to. Let's say I have something like this, where everything except the color white is a border, and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
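
    The standard fix is to keep scanning after each traced contour and skip border pixels that already belong to a recorded one. A sketch, where pixels, border and traceContour stand in for the question's own data and existing marching-squares trace:

        #include <set>
        #include <vector>

        extern std::vector<int> pixels;                 // flattened image, as in the question
        extern const int border;
        std::vector<int> traceContour(int startIndex);  // existing marching-squares trace

        std::vector<std::vector<int>> findAllContours() {
            std::vector<std::vector<int>> contours;
            std::set<int> visited;                      // indices already on some contour
            for (int i = 0; i < (int)pixels.size(); ++i) {
                if (pixels[i] == border && !visited.count(i)) {
                    std::vector<int> c = traceContour(i);
                    visited.insert(c.begin(), c.end());
                    contours.push_back(std::move(c));
                }
            }
            return contours;
        }

    One caveat: if borders are thicker than one pixel, interior border pixels won't lie on the traced contour and would trigger a duplicate trace; in that case flood-fill-mark the whole blob instead of only its contour.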


  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information:

    - Its 3D coordinates (x, y, z).
    - A list of pointers to some of the other vertices with which it's connected by edges.

    Right now, I'm doing perspective projection with the projection plane being XY and the eye placed at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so I can use the formula of the plane containing that polygon to find the Z-coordinate? And are my data structures correct, or do I need to store something else entirely for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
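
    For the first question, a sketch of the brute-force per-pixel step: test the pixel's projected (x, y) against each projected triangle with edge functions, and when it is inside, interpolate Z from the plane (a true scan-line renderer does the same thing incrementally along each scanline):

        struct P3 { double x, y, z; };

        // 2D edge function: the sign tells which side of edge a->b the point is on.
        static double edge(const P3& a, const P3& b, double px, double py) {
            return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
        }

        // If pixel (px, py) lies inside projected triangle (a, b, c), write its
        // interpolated depth to zOut and return true; otherwise return false.
        bool depthAt(const P3& a, const P3& b, const P3& c,
                     double px, double py, double& zOut) {
            double w0 = edge(b, c, px, py);
            double w1 = edge(c, a, px, py);
            double w2 = edge(a, b, px, py);
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);  // either winding order
            double sum = w0 + w1 + w2;
            if (!inside || sum == 0) return false;          // outside or degenerate
            zOut = (w0 * a.z + w1 * b.z + w2 * c.z) / sum;  // barycentric interpolation
            return true;
        }

    The Z-buffer itself is then just a per-pixel minimum: keep the depth (and that polygon's color) only when the new depth is closer than what is stored. This also answers the data-structure question: you need an explicit polygon (face) list, not just vertices with edge pointers, so there is something to iterate over per pixel.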


  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently gotten into 3D, and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and whether they can be used in practice.

    Hybrid rendering. I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known idea. However, I had the following thought: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects, using a special shader that simply marks these parts of the image with a special color, or some stencil/depth-buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects?

    Rendered physics. I'm specifically talking about bullet physics - the intersection of a very small object (point/bullet) traveling along a straight line with other, relatively slow-moving, fairly constant objects; more specifically, hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet), with every object in the scene drawn in a different color. You only need to render a 1x1-pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method. This could be extended in a few ways: for larger (non-bullet) objects you render a larger portion of the screen; if you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works like the traditional slow-moving iterative physics test as well; and you could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So, are these techniques in use anywhere, and/or are they practical at all?
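
    The "rendered physics" idea is essentially classic color picking: render an ID pass where every object is drawn in a unique flat color, then read back the pixel under the crosshair. A sketch of the readback side in OpenGL:

        #include <GL/gl.h>
        #include <cstdint>

        // After the ID pass, read the single pixel at (x, y) and reassemble
        // the 24-bit object ID that was encoded into the draw color.
        uint32_t pickObjectId(int x, int y) {
            uint8_t rgba[4] = {0, 0, 0, 0};
            glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
            return (uint32_t)rgba[0] | ((uint32_t)rgba[1] << 8) | ((uint32_t)rgba[2] << 16);
        }

    The catch in practice: glReadPixels stalls the pipeline, so engines that use this do the readback asynchronously (e.g. via pixel buffer objects) a frame late - fine for mouse picking, awkward for per-bullet physics.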


  • Things to do to port a game made for iOS in Unity to Android?

    - by 2600th
    I have just made my first game for iOS and submitted it to the App Store, and I was thinking of porting it to Android as well. I would like to know the things one needs to do or remember when porting a game made for iOS in Unity to Android: how to handle different screen resolutions and pixel densities, what optimizations are required, etc. Any other suggestions and important things you think I should know? EDIT: Also, should I handle builds according to device resolution or by pixel density?


  • Without using a pre-built physics engine, how can I implement 3-D collision detection from scratch?

    - by Andy Harglesis
    I want to tackle some basic 3-D collision detection, and was wondering how engines handle this and give you a pretty interface and make it so easy... I want to do it all myself, however. 2-D collision detection is extremely simple and can be done in multiple ways that even beginner programmers could think up: (1) when the pixels touch; (2) when a rectangle range is exceeded; (3) when a pixel object is detected near another one in a pixel-based rendering engine. But 3-D adds only one dimension while becoming complex in many more ways... so what are the general, basic approaches/examples of how 3-D collision detection can be implemented? Think two shaded OpenGL cubes that are moved next to each other, with a simple OpenGL rendering context and keyboard events.
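
    The 3-D analogue of the "rectangle range" test is the axis-aligned bounding box: two boxes intersect exactly when their intervals overlap on all three axes, and a gap on any one axis separates them. A minimal sketch:

        struct AABB {
            float minX, minY, minZ;
            float maxX, maxY, maxZ;
        };

        // Overlap on every axis <=> intersection.
        bool intersects(const AABB& a, const AABB& b) {
            return a.minX <= b.maxX && b.minX <= a.maxX &&
                   a.minY <= b.maxY && b.minY <= a.maxY &&
                   a.minZ <= b.maxZ && b.minZ <= a.maxZ;
        }

    For the two-cubes example this is already exact; engines layer sphere tests, oriented boxes and polygon-level tests (SAT, GJK) on top of the same idea.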


  • Java graphic objects as in Flash games

    - by Ryu Kajiya
    How is it possible (with the standard Java2D engine) to use small sprites as graphic objects? For those who don't know what I mean: in all those Flash games, like on Facebook, small sprites are put on the screen which react to mouse-over and clicks. I tried to do the same in Java but can't find a good method. Swing components always spread over the whole bitmap, but I only want a reaction from the object when the mouse is over a pixel that's not transparent. So basically, checking every time whether the object below the mouse contains a non-transparent pixel at that position (which I believe could be pretty intensive in a game loop or repaint loop). I have no idea how to implement such a thing efficiently.


  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank.
    2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a single white 1-px-wide bar.
    3) 0x55555555 = 01010101010101010101010101010101 in binary, so writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.
    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.)
    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
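
    Regarding question 3: unpacking on the CPU is simpler still and sidesteps GL_BITMAP entirely. A sketch that expands each bit to one byte (MSB first within each byte, matching GL_BITMAP's default order) and uploads an ordinary 8-bit texture; GL_R8/GL_RED assume the 3.0 context mentioned above:

        #include <GL/gl.h>
        #include <cstdint>
        #include <vector>

        // Expand a 1-bit-per-pixel bitmap into 8 bits per pixel and upload it.
        void uploadBitmapAsTexture(const uint8_t* bits, int w, int h) {
            std::vector<uint8_t> bytes((size_t)w * h);
            int rowBytes = (w + 7) / 8;                 // bits are packed per row
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    uint8_t b = bits[y * rowBytes + x / 8];
                    bytes[(size_t)y * w + x] = ((b >> (7 - x % 8)) & 1) ? 255 : 0;
                }
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                         GL_RED, GL_UNSIGNED_BYTE, bytes.data());
        }

    Doing the same expansion in a fragment shader (sampling the packed bytes from a w/8-wide GL_R8 texture and selecting the bit from the fragment coordinate) also works, and avoids the 8x upload cost.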

