Search Results

Search found 32114 results on 1285 pages for 'general development'.

Page 401/1285 | < Previous Page | 397 398 399 400 401 402 403 404 405 406 407 408  | Next Page >

  • iOS - pass UIImage to shader as texture

    - by martin pilch
    I am trying to pass a UIImage to a GLSL shader. The fragment shader is: varying highp vec2 textureCoordinate; uniform sampler2D inputImageTexture; uniform sampler2D inputImageTexture2; void main() { highp vec4 color = texture2D(inputImageTexture, textureCoordinate); highp vec4 color2 = texture2D(inputImageTexture2, textureCoordinate); gl_FragColor = color * color2; } What I want to do is send images from the camera and do a multiply blend with a texture. When I just send data from the camera, everything is fine, so the problem should be with sending another texture to the shader. I am doing it this way: - (void)setTexture:(UIImage*)image forUniform:(NSString*)uniform { CGSize sizeOfImage = [image size]; CGFloat scaleOfImage = [image scale]; CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * sizeOfImage.width, scaleOfImage * sizeOfImage.height); //create context GLubyte * spriteData = (GLubyte *)malloc(pixelSizeOfImage.width * pixelSizeOfImage.height * 4 * sizeof(GLubyte)); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, pixelSizeOfImage.width, pixelSizeOfImage.height, 8, pixelSizeOfImage.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast); //draw image into context CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), image.CGImage); //get uniform of texture GLuint uniformIndex = glGetUniformLocation(__programPointer, [uniform UTF8String]); //generate texture GLuint textureIndex; glGenTextures(1, &textureIndex); glBindTexture(GL_TEXTURE_2D, textureIndex); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); //create texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelSizeOfImage.width, pixelSizeOfImage.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, textureIndex); //"send" to shader glUniform1i(uniformIndex, 1); free(spriteData); CGContextRelease(spriteContext); } The uniform for the texture is fine; glGetUniformLocation does not return -1. The texture is a PNG file with a resolution of 2000x2000 pixels. PROBLEM: When the texture is passed to the shader, I get a "black screen". Maybe the problem is the parameters of the CGContext or the parameters of the glTexImage2D call. Thank you
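    The GL side of this is the same in any binding, so here is a minimal LWJGL-flavoured Java sketch of the conventional order: select the texture unit first, then generate/bind/upload, then point the sampler uniform at that unit. Whether this fixes this particular black screen depends on the rest of the renderer (checking glGetError right after the upload is worthwhile); the function name and parameters below are illustrative, not from the original code.

      import java.nio.ByteBuffer;
      import org.lwjgl.opengl.GL11;
      import org.lwjgl.opengl.GL12;
      import org.lwjgl.opengl.GL13;
      import org.lwjgl.opengl.GL20;

      // Upload RGBA pixel data to texture unit 1 and wire it to a sampler uniform.
      static int uploadToUnit1(int program, String uniformName, ByteBuffer rgba, int w, int h) {
          GL13.glActiveTexture(GL13.GL_TEXTURE1);          // select the unit *before* binding
          int tex = GL11.glGenTextures();
          GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
          GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
          GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
          GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
          GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
          GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, w, h, 0,
                            GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, rgba);
          // point the sampler uniform at unit 1 (the program must be in use here)
          GL20.glUniform1i(GL20.glGetUniformLocation(program, uniformName), 1);
          return tex;
      }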

    Read the article

  • Slick and Timers?

    - by user3491043
    I'm making a game where I need events to happen at precise times. Explanation: I want event A to happen at 12000 ms, and event B to happen every 10000 ms. So the "if"s should look like this: //event A if(Ticks == 12000) //do things //event B if(Ticks % 10000 == 0) //do stuff But now how can I get this "Ticks" value? I tried declaring an int and increasing it in the update method, in two ways: Ticks++; This doesn't work because the update method is not called exactly once per millisecond. Ticks += delta; This is closer, but the delta is not always equal to 1, so I can miss the precise values I need in the if statements. So if you know how I can make events happen at precise times, please tell me how.
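    One common approach (not from the original post) is to keep an accumulator of elapsed milliseconds and fire events when a threshold is crossed, instead of testing for exact tick values. A minimal Java sketch, assuming this fragment lives inside a Slick-style update(GameContainer, int delta) method; the field names are illustrative:

      // Accumulate delta and fire when a threshold is crossed, so no frame can "miss" the mark.
      private long elapsed = 0;        // total ms since the game started
      private long sinceEventB = 0;    // ms since event B last fired
      private boolean eventAFired = false;

      public void update(org.newdawn.slick.GameContainer gc, int delta) {
          elapsed += delta;
          sinceEventB += delta;

          // Event A: fires once, as soon as we pass the 12000 ms mark.
          if (!eventAFired && elapsed >= 12000) {
              eventAFired = true;
              // do things
          }

          // Event B: fires every 10000 ms, even if a frame overshoots the boundary.
          while (sinceEventB >= 10000) {
              sinceEventB -= 10000;
              // do stuff
          }
      }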

    Read the article

  • Slow Firefox Javascript Canvas Performance?

    - by jujumbura
    As a followup from a previous post, I have been trying to track down some slowdown I am having when drawing a scene using Javascript and the canvas element. I decided to narrow down my focus to a REALLY barebones animation that only clears the canvas and draws a single image, once per-frame. This of course runs silky smooth in Chrome, but it still stutters in Firefox. I added a simple FPS calculator, and indeed it appears that my page is typically getting an FPS in the 50's when running Firefox. This doesn't seem right to me, I must be doing something wrong here. Can anybody see anything I might be doing that is causing this drop in FPS? <!DOCTYPE HTML> <html> <head> </head> <body bgcolor=silver> <canvas id="myCanvas" width="600" height="400"></canvas> <img id="myHexagon" src="Images/Hexagon.png" style="display: none;"> <script> window.requestAnimFrame = (function(callback) { return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.oRequestAnimationFrame || window.msRequestAnimationFrame || function(callback) { window.setTimeout(callback, 1000 / 60); }; })(); var animX = 0; var frameCounter = 0; var fps = 0; var time = new Date(); function animate() { var canvas = document.getElementById("myCanvas"); var context = canvas.getContext("2d"); context.clearRect(0, 0, canvas.width, canvas.height); animX += 1; if (animX == canvas.width) { animX = 0; } var image = document.getElementById("myHexagon"); context.drawImage(image, animX, 128); context.lineWidth=1; context.fillStyle="#000000"; context.lineStyle="#ffffff"; context.font="18px sans-serif"; context.fillText("fps: " + fps, 20, 20); ++frameCounter; var currentTime = new Date(); var elapsedTimeMS = currentTime - time; if (elapsedTimeMS >= 1000) { fps = frameCounter; frameCounter = 0; time = currentTime; } // request new frame requestAnimFrame(function() { animate(); }); } window.onload = function() { animate(); }; </script> </body> </html>

    Read the article

  • How to achieve a Gaussian Blur effect for shadows in LWJGL/Slick2D?

    - by user46883
    I am currently trying to implement shadows in my game, and after a lot of searching on the interwebs I came to the conclusion that drawing hard-edged shadows in a low-resolution pass combined with a Gaussian blur effect would fit best and make a good compromise between performance and looks - even though they're not 100% physically accurate. Now my problem is that I don't really know how to implement the Gaussian blur part. It's not difficult to draw shadows to a low-resolution buffer image and then stretch it, which makes it smoother, but I need to add the Gaussian blur effect. I have searched a lot and found some approaches for GLSL, some even here, but none of them really helped. My game is written in Java using Slick2D/LWJGL and I would appreciate any help or approaches for an algorithm, or maybe even an existing library, to achieve that effect. Thanks in advance for any help.
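    A Gaussian blur is separable, so it is usually done as two 1D passes (horizontal, then vertical) rather than one 2D pass. Below is a hedged, plain-Java CPU sketch of the two pieces: building a normalized 1D kernel and applying one horizontal pass over a grayscale shadow buffer (the vertical pass is the same loop with x and y roles swapped). It is not Slick2D API, just the math; for real-time use the same kernel is typically applied in a fragment shader over the low-resolution shadow texture.

      // Build a normalized 1D Gaussian kernel.
      static float[] gaussianKernel(int radius, float sigma) {
          float[] k = new float[radius * 2 + 1];
          float sum = 0f;
          for (int i = -radius; i <= radius; i++) {
              k[i + radius] = (float) Math.exp(-(i * i) / (2.0 * sigma * sigma));
              sum += k[i + radius];
          }
          for (int i = 0; i < k.length; i++) k[i] /= sum;   // weights sum to 1
          return k;
      }

      // One horizontal pass over a w*h grayscale buffer; run again transposed for vertical.
      static float[] blurHorizontal(float[] src, int w, int h, float[] kernel) {
          int radius = kernel.length / 2;
          float[] dst = new float[src.length];
          for (int y = 0; y < h; y++) {
              for (int x = 0; x < w; x++) {
                  float acc = 0f;
                  for (int i = -radius; i <= radius; i++) {
                      int sx = Math.min(w - 1, Math.max(0, x + i));  // clamp at the edges
                      acc += src[y * w + sx] * kernel[i + radius];
                  }
                  dst[y * w + x] = acc;
              }
          }
          return dst;
      }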

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start- and destinationpoints?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as is placed in the middle of the screen and moves along with the camera. The white stuff in my world is walls. The black stuff is freely movable. Now, as the player moves around he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the part that's tricky. I want these monsters to constantly spawn, moving towards the player, but avoiding walls entirely. I've added a screenshot below that kind of makes it easier to understand (excuse me for my bad drawing - I was using Paint for this). In the image above, the following rules apply. The red dot in the middle is the player itself. The light-green rectangle is the boundaries of the screen (in other words, what the player sees). These boundaries move with the player. The blue circle is the spawning circle. At the circumference of this circle, monsters will spawn constantly. This spawn circle moves with the player and the boundaries of the screen. Each monster spawned (shown as yellow triangles) wants to collide with the player. The pink lines show the path that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls. The map itself (the one that is Perlin-noise generated on the CPU) is saved in memory as a two-dimensional bit array. A 1 means a wall, and a 0 means an open walkable space. The current tile size is pretty small. I could easily make it a lot larger for increased performance. I've used some pathfinding algorithms before, such as A*. I don't think that's entirely optimal here, though.
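    With many monsters all heading toward one moving target, a single breadth-first "flow field" from the player is often cheaper than running A* per monster: one flood fill over the walkable tiles gives every cell its step distance to the player, and each monster simply walks toward the neighbouring cell with the smallest distance. A hedged Java sketch over the bit-array world described above (diagonals omitted; recompute only when the player changes tile):

      import java.util.ArrayDeque;
      import java.util.Arrays;

      // Breadth-first flood fill from the player's tile over a wall grid.
      // dist[y][x] is the step distance to the player, or Integer.MAX_VALUE if unreachable.
      static int[][] buildFlowField(boolean[][] wall, int playerX, int playerY) {
          int h = wall.length, w = wall[0].length;
          int[][] dist = new int[h][w];
          for (int[] row : dist) Arrays.fill(row, Integer.MAX_VALUE);
          ArrayDeque<int[]> queue = new ArrayDeque<>();
          dist[playerY][playerX] = 0;
          queue.add(new int[] { playerX, playerY });
          int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
          while (!queue.isEmpty()) {
              int[] c = queue.poll();
              for (int[] d : dirs) {
                  int nx = c[0] + d[0], ny = c[1] + d[1];
                  if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                  if (wall[ny][nx] || dist[ny][nx] != Integer.MAX_VALUE) continue;
                  dist[ny][nx] = dist[c[1]][c[0]] + 1;
                  queue.add(new int[] { nx, ny });
              }
          }
          return dist;
      }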

    Read the article

  • Errors compiling XNA project Windows 8?

    - by ChocoMan
    I'm using Visual Studio 2012, have just installed Windows 8 on my computer, and tried to compile a game I'm working on in XNA. When the game tried to build, I got the following errors: Error 12 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\SkyDome\skycirrus01.xnb" because it was not found. Error 13 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\Arial.xnb" because it was not found. Error 14 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\ISOCP2.xnb" because it was not found. skycirrus01.xnb is actually a .fbx. Arial.xnb and ISOCP2.xnb are my spritefonts within my project. Prior to installing Windows 8 (store bought), my project compiled. Does anyone know how to convert these to .xnb files? I'm assuming that will make them compatible.

    Read the article

  • Procedural landscape generation but not just fractals

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game needs to not follow the rules of landscaping, but instead some rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers, and mountains between cities, rather than cities plotted on the land based on the resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this, starting with cities or population centres and then adding in the terrain afterwards?

    Read the article

  • Turn-based Strategy Loop

    - by Djentleman
    I'm working on a strategy game. It's turn-based and card-based (think Dominion-style), done in a client, with eventual AI in the works. I've already implemented almost all of the game logic (methods for calculations and suchlike) and I'm starting to work on the actual game loop. What is the "best" way to implement a game loop in such a game? Should I use a simple "while gameActive" loop that keeps running until gameActive is False, with sections that wait for player input? Or should it be managed through the UI with player actions determining what happens and when? Any help is appreciated. I'm doing it in Python (for now at least) to get my Python skills up a bit, although the language shouldn't matter for this question.
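    In a windowed client, a blocking "while gameActive" loop usually fights the UI toolkit; a small phase state machine driven by UI callbacks tends to be simpler, because the AI's turn needs no input and can be resolved immediately when control passes to it. A hedged sketch of that shape, in Java rather than the poster's Python; the phase and method names are illustrative:

      // Turn phases advance in response to UI events (card clicked, "end turn" pressed)
      // instead of a blocking loop that waits for input.
      enum Phase { PLAYER_ACTION, PLAYER_BUY, AI_TURN, GAME_OVER }

      class TurnManager {
          private Phase phase = Phase.PLAYER_ACTION;

          // Called by the UI layer whenever the player acts.
          void onPlayerInput(String action) {
              if (phase == Phase.PLAYER_ACTION && action.equals("endAction")) {
                  phase = Phase.PLAYER_BUY;
              } else if (phase == Phase.PLAYER_BUY && action.equals("endTurn")) {
                  phase = Phase.AI_TURN;
                  runAiTurn();               // AI needs no input, so resolve it at once
                  phase = gameOver() ? Phase.GAME_OVER : Phase.PLAYER_ACTION;
              }
          }

          private void runAiTurn() { /* play AI cards using the existing game logic */ }
          private boolean gameOver() { return false; }
      }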

    Read the article

  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX; as I understand it, the two have coordinate systems of different handedness. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D maths functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use these?
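    The usual recipe for the mesh data itself is to negate the Z component of positions, normals, and translations, and to reverse triangle winding so back-face culling still rejects the same faces; if you use the left-handed variants of the math functions for view and projection, those matrices can stay untouched. A hedged Java sketch of the data-side conversion, assuming packed x,y,z arrays and indexed triangles (the layout is an assumption, not from the original post):

      // Flip a right-handed mesh to left-handed: negate Z and reverse triangle winding.
      static void toLeftHanded(float[] positions, float[] normals, int[] indices) {
          // positions and normals are packed x,y,z triples
          for (int i = 2; i < positions.length; i += 3) positions[i] = -positions[i];
          for (int i = 2; i < normals.length; i += 3)   normals[i]   = -normals[i];

          // swap two indices of every triangle so the facing direction is preserved
          for (int i = 0; i < indices.length; i += 3) {
              int tmp = indices[i + 1];
              indices[i + 1] = indices[i + 2];
              indices[i + 2] = tmp;
          }
      }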

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know of a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need: performance; ease of use (allow streaming or formatting or something like that); reliability (don't leak or crash!); cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu). Which logging library would you recommend? Currently, I think boost::log is the most flexible one (you can even log remotely!), but it doesn't have good performance (update: is for high performance, but isn't released yet). Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding while being complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background: Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space. Challenge: To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects; as said before, there might be clusters of objects together and vast stretches of empty space, and to make it even worse, there is no boundary to space. Existing technologies: I've looked at existing techniques like BSP trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive. What I've tried: I made the decision that I need a data structure that is geared more toward rapid insertion/update than toward giving back the smallest possible number of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap many times (although that is slightly mitigated by giving it a large initial capacity). Problem: OK, so this works but still isn't very fast. Now I could try to micro-optimize the Java code. However, I'm not expecting too much from that, since the profiler tells me that most time is spent creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like: the number one priority is fast updating/reconstruction of the entire data structure; it's less important to finely divide the objects into equally sized bins (we can draw a few extra objects and do a few extra collision checks if that means updating is a little faster); memory is not really important (PC game).
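    One common way to avoid the per-frame ArrayList and HashMap churn the profiler is pointing at is to keep the spatial hash alive between frames: pack the two cell coordinates into a single long key, clear and reuse the per-cell lists through a pool instead of recreating them, and re-insert everything each frame. A hedged Java sketch; the generic element type and the cellSize value are assumptions, and objects spanning several cells would be inserted into each overlapping cell.

      import java.util.ArrayDeque;
      import java.util.ArrayList;
      import java.util.HashMap;

      // Implicit uniform grid: cell key = packed (cellX, cellY).
      // Lists are cleared and pooled every frame instead of being reallocated.
      class SpatialHash<T> {
          private final float cellSize;
          private final HashMap<Long, ArrayList<T>> cells = new HashMap<>(4096);
          private final ArrayDeque<ArrayList<T>> pool = new ArrayDeque<>();

          SpatialHash(float cellSize) { this.cellSize = cellSize; }

          private static long key(int cx, int cy) {
              return (((long) cx) << 32) ^ (cy & 0xffffffffL);
          }

          void clear() {                       // call once per frame before re-inserting
              for (ArrayList<T> list : cells.values()) { list.clear(); pool.push(list); }
              cells.clear();
          }

          void insert(T obj, float x, float y) {
              long k = key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize));
              ArrayList<T> list = cells.get(k);
              if (list == null) {
                  list = pool.isEmpty() ? new ArrayList<>() : pool.pop();
                  cells.put(k, list);
              }
              list.add(obj);
          }

          // Objects in the cell containing (x, y); may be null if the cell is empty.
          ArrayList<T> query(float x, float y) {
              return cells.get(key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize)));
          }
      }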

    Read the article

  • Is finding graph minors without single node pinch points possible?

    - by Alturis
    Is it possible to robustly find all the graph minors within an arbitrary node graph where the pinch points are generally not single nodes? I have read some other posts on here about how to break up your graph into a Hamiltonian cycle and then find the graph minors from that, but it seems such an algorithm would require that each "room" have "doorways" consisting of single nodes. To explain a bit more, a visual aid is necessary. Let's say the nodes below are an example of the typical node graph. What I am looking for is a way to automatically find the differently colored regions of the graph (or graph minors).

    Read the article

  • Vertex Normals, Loading Mesh Data

    - by Ramon Johannessen
    My test FBX mesh is a cube. From what I surmise, it seems that the cube is on the extreme end of this issue, but I believe the same issue could occur in any mesh: each vertex has 3 normals, each pointing in a different direction. Of course, when loading any type of mesh, potentially ones with thousands of vertices, I need to use indices and not duplicate shared verts. Currently, I'm just writing the normals to the vertex at the index that the FBX data tells me they go to, which has the effect of overwriting any previous normal data. But for lighting calculations I need more info, something that's equivalent to a normal per face, but I have no idea how this should be done. Do I average each vert's 3 different normals together, or what?
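    For smooth shading the standard answer is yes: accumulate each face's (area-weighted) normal into all three of its vertices and normalize at the end; for hard edges like a cube's you instead duplicate the vertex once per face so each copy keeps its own normal. A hedged Java sketch of the accumulation over packed position and index arrays (the array layout is an assumption):

      // Smooth per-vertex normals: sum the unnormalized face normals of every face
      // touching a vertex, then normalize. Positions are packed x,y,z triples.
      static float[] computeVertexNormals(float[] pos, int[] indices) {
          float[] n = new float[pos.length];
          for (int i = 0; i < indices.length; i += 3) {
              int a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
              float ux = pos[b] - pos[a], uy = pos[b + 1] - pos[a + 1], uz = pos[b + 2] - pos[a + 2];
              float vx = pos[c] - pos[a], vy = pos[c + 1] - pos[a + 1], vz = pos[c + 2] - pos[a + 2];
              float fx = uy * vz - uz * vy;       // cross product = face normal scaled by area
              float fy = uz * vx - ux * vz;
              float fz = ux * vy - uy * vx;
              for (int v : new int[] { a, b, c }) {
                  n[v] += fx; n[v + 1] += fy; n[v + 2] += fz;
              }
          }
          for (int i = 0; i < n.length; i += 3) {
              float len = (float) Math.sqrt(n[i] * n[i] + n[i + 1] * n[i + 1] + n[i + 2] * n[i + 2]);
              if (len > 0) { n[i] /= len; n[i + 1] /= len; n[i + 2] /= len; }
          }
          return n;
      }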

    Read the article

  • How to make custom shaped holes in terrain

    - by Guy Ben-Moshe
    So I'm trying to create a game in Unity 3D where you fit certain shaped objects into the hole that fits them (similar to the young children's game with different shaped blocks). I've encountered a problem: how do I make the holes in the terrain? Or what type of object should I use to make the holes in? So far I've tried to build a 3D model in Unity out of other cubes and planes, but it just doesn't feel right. I guess I need to create a model in other software and import it into Unity, but I don't know what other software I could use. Tips would help.

    Read the article

  • How can I make a game like doodlejump XNA c#

    - by Ramy
    I wanted to know how I can make the background scroll down like in Doodle Jump. I have a game made and I have to transform it so it's like Doodle Jump, but I'm wondering how or where to look so I can make the background keep moving, as in progressing through the background until, let's say, the character dies. namespace IFM20884 { using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; public abstract class BackgroundScroll : Sprite { private float speedOfBackground = 0.2f; // speed that the background moves public BackgroundScroll (GraphicsDeviceManager graphics) : base(graphics.GraphicsDevice.Viewport.Width / 2f, graphics.GraphicsDevice.Viewport.Height / 2f) { } //Getter public float SpeedOfBackground { get { return this.speedOfBackground; } set { this.speedOfBackground = value; } } public override void Update(GameTime gameTime, GraphicsDeviceManager graphics) { //Makes background go down. ForcePosition(Position.X, Position.Y + (gameTime.ElapsedGameTime.Milliseconds * this.speedOfBackground)); if (Position.Y - (Height / 2) > graphics.GraphicsDevice.Viewport.Height) { ForcePosition(Position.X, Position.Y - this.Height); } } public override void Draw(SpriteBatch spriteBatch) { ForcePosition(Position.X, Position.Y - this.Height); base.Draw(spriteBatch); ForcePosition(Position.X, Position.Y + this.Height); base.Draw(spriteBatch); } } }

    Read the article

  • genetic algorithm for leveling/build test

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-on-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I never did anything like that, I'm still thinking about the maths behind the level/skill/special balances. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd work like this: generate a big pool of random characters; make them fight, leveling them up according to their victories (more XP) and losses (less XP); mate the winners, crossing their builds, to try to make even better characters; add some more random chars, emulating new players; repeat the process for some time, or until I find some chars who can beat everyone's butts. Then I could play with the math and try to find the balance where the top x% of chars is a mix of various build types. So, is it a good idea, or is there some other, easier method for doing the balance? PS: I also like this approach because it sounds fun.
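    The loop described maps directly onto a standard generational GA: evaluate by simulated duels, keep the winners, crossover, mutate, and inject fresh randoms as "new players". A hedged Java sketch; the Build genome, the 10% newcomer rate and the mutation rate are assumptions, and fight() is the one placeholder that would call the poster's actual duel simulation.

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;
      import java.util.Random;

      class GaBalanceTest {
          // A build's "genome" is its allocation of skill points.
          static class Build {
              int[] skills;    // e.g. {strength, defence, speed, ...}
              int wins;
              Build(int[] s) { skills = s; }
              static Build random(Random rng, int n, int points) {
                  int[] s = new int[n];
                  for (int i = 0; i < points; i++) s[rng.nextInt(n)]++;   // scatter the points
                  return new Build(s);
              }
          }

          // Placeholder: run the real duel simulation here and return true if a beats b.
          static boolean fight(Build a, Build b) { return false; }

          static Build crossover(Build p1, Build p2, Random rng) {
              int[] child = new int[p1.skills.length];
              for (int i = 0; i < child.length; i++)                      // uniform crossover
                  child[i] = rng.nextBoolean() ? p1.skills[i] : p2.skills[i];
              return new Build(child);
          }

          static Build mutate(Build b, Random rng) {
              if (rng.nextFloat() < 0.1f) {                               // occasionally move one point
                  int from = rng.nextInt(b.skills.length), to = rng.nextInt(b.skills.length);
                  if (b.skills[from] > 0) { b.skills[from]--; b.skills[to]++; }
              }
              return b;
          }

          static List<Build> nextGeneration(List<Build> pop, Random rng) {
              for (Build x : pop) x.wins = 0;                             // fitness = duel wins
              for (Build a : pop) for (Build b : pop) if (a != b && fight(a, b)) a.wins++;
              pop.sort(Comparator.comparingInt((Build x) -> x.wins).reversed());
              List<Build> next = new ArrayList<>(pop.subList(0, pop.size() / 2));
              while (next.size() < pop.size() * 9 / 10)                   // breed from the winners
                  next.add(mutate(crossover(next.get(rng.nextInt(pop.size() / 2)),
                                            next.get(rng.nextInt(pop.size() / 2)), rng), rng));
              while (next.size() < pop.size())                            // 10% fresh random "new players"
                  next.add(Build.random(rng, pop.get(0).skills.length, 10));
              return next;
          }
      }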

    Read the article

  • Collision detection on a 2D hexagonal grid

    - by SundayMonday
    I'm making a casual grid-based 2D iPhone game using Cocos2D. The grid is a "staggered" hex-like grid consisting of uniformly sized and spaced discs. It looks something like this. I've stored the grid in a 2D array. I also have a concept of "surrounding" grid cells: namely, the six grid cells surrounding a particular cell (except those on the boundaries, which can have fewer than six). Anyway, I'm testing some collision detection and it's not working out as well as I had planned. Here's how I currently do collision detection for a moving disc that's approaching the stationary group of discs: calculate the ij-coordinates of the grid cell closest to the moving disc using its xy-position; get the list of surrounding grid cells using those ij-coordinates; examine the surrounding cells, and if they're all empty then there is no collision; if there are some non-empty surrounding cells, compare the distance between the disc centers to some minimum distance required for a collision; if there's a collision, place the moving disc in grid cell ij. So this works, but not too well. I've considered a potentially simpler brute-force approach where I just compare the moving disc to all stationary discs at each step of the game loop. This is probably feasible in terms of performance, since the stationary disc count is 300 max. If not, then some space-partitioning data structure could be used; however, that feels too complex. What are some common approaches and best practices for collision detection in a game like this?
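    With uniform discs, the distance test itself is cheap; what tends to go wrong is mapping the moving disc's position to the right staggered cell. A hedged Java sketch of both halves, assuming odd rows are shifted half a column to the right and the radius/spacing constants are placeholders for the real layout:

      // Map a position to the nearest cell of a staggered grid, then test disc vs disc.
      static final float R = 16f;                                  // disc radius (assumed)
      static final float COL_W = 2f * R;                           // horizontal spacing
      static final float ROW_H = (float) (Math.sqrt(3.0) * R);     // vertical spacing

      static int[] nearestCell(float x, float y) {
          int row = Math.round(y / ROW_H);
          float xOffset = (row % 2 != 0) ? R : 0f;                 // odd rows are staggered
          int col = Math.round((x - xOffset) / COL_W);
          return new int[] { row, col };
      }

      static boolean discsCollide(float x, float y, float sx, float sy) {
          float dx = x - sx, dy = y - sy;
          float min = 2f * R * 0.95f;   // small tolerance so discs visually touch before snapping
          return dx * dx + dy * dy <= min * min;
      }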

    Read the article

  • Blender to Collada to Assimp - Rigid (Non-skinned) Animation

    - by gareththegeek
    I am trying to get simple animations to work, exporting from Blender and importing into my application. My first attempt was as follows: Open Blender at factory settings. Select the default cube and insert a location keyframe. Select another frame and move the cube. Insert a second location keyframe. Export to Collada. When I open the Collada file using Assimp it contains zero animations, even though in Blender the cube animates correctly. On my next attempt, I inserted a bone armature with a single bone, made it the parent of the cube, and animated the bone instead. Again the animation worked correctly in Blender. Assimp now lists one animation, but both keyframes have the position [0, 0, 0]. Does anyone have any suggestions as to how I can get animated (non-skinned) meshes from Blender into Assimp? My ultimate goal here is to export animated meshes from Blender, process them offline into my own model format, and load them into my SharpDX-based graphics engine.

    Read the article

  • SSAO Distortion

    - by Robert Xu
    I'm currently (attempting) to add SSAO to my engine, except it's...not really work, to say the least. I use a deferred renderer to render my scene. I have four render targets: Albedo, Light, Normal, and Depth. Here are the parameters for all of them (Surface Format, Depth Format): Albedo: 32-bit ARGB, Depth24Stencil8 Light: 32-bit ARGB, None Normal: 32-bit ARGB, None Depth: 8-bit R (Single), Depth24Stencil8 To generate my random noise map for the SSAO, I do the following for each pixel in the noise map: Vector3 v3 = Vector3.Zero; double z = rand.NextDouble() * 2.0 - 1.0; double r = Math.Sqrt(1.0 - z * z); double angle = rand.NextDouble() * MathHelper.TwoPi; v3.X = (float)(r * Math.Cos(angle)); v3.Y = (float)(r * Math.Sin(angle)); v3.Z = (float)z; v3 += offset; v3 *= 0.5f; result[i] = new Color(v3); This is my GBuffer rendering effect: PixelInput RenderGBufferColorVertexShader(VertexInput input) { PixelInput pi = ( PixelInput ) 0; pi.Position = mul(input.Position, WorldViewProjection); pi.Normal = mul(input.Normal, WorldInverseTranspose); pi.Color = input.Color; pi.TPosition = pi.Position; pi.WPosition = input.Position; return pi; } GBufferTarget RenderGBufferColorPixelShader(PixelInput input) { GBufferTarget output = ( GBufferTarget ) 0; float3 position = input.TPosition.xyz / input.TPosition.w; output.Albedo = lerp(float4(1.0f, 1.0f, 1.0f, 1.0f), input.Color, ColorFactor); output.Normal = EncodeNormal(input.Normal); output.Depth = position.z; return output; } And here is the SSAO effect: float4 EncodeNormal(float3 normal) { return float4((normal.xyz * 0.5f) + 0.5f, 0.0f); } float3 DecodeNormal(float4 encoded) { return encoded * 2.0 - 1.0f; } float Intensity; float Size; float2 NoiseOffset; float4x4 ViewProjection; float4x4 ViewProjectionInverse; texture DepthMap; texture NormalMap; texture RandomMap; const float3 samples[16] = { float3(0.01537562, 0.01389096, 0.02276565), float3(-0.0332658, -0.2151698, -0.0660736), float3(-0.06420016, -0.1919067, 0.5329634), float3(-0.05896204, -0.04509097, -0.03611697), float3(-0.1302175, 0.01034653, 0.01543675), float3(0.3168565, -0.182557, -0.01421785), float3(-0.02134448, -0.1056605, 0.00576055), float3(-0.3502164, 0.281433, -0.2245609), float3(-0.00123525, 0.00151868, 0.02614773), float3(0.1814744, 0.05798516, -0.02362876), float3(0.07945167, -0.08302628, 0.4423518), float3(0.321987, -0.05670302, -0.05418307), float3(-0.00165138, -0.00410309, 0.00537362), float3(0.01687791, 0.03189049, -0.04060405), float3(-0.04335613, -0.00530749, 0.06443053), float3(0.8474263, -0.3590308, -0.02318038), }; sampler DepthSampler = sampler_state { Texture = DepthMap; MipFilter = Point; MinFilter = Point; MagFilter = Point; AddressU = Clamp; AddressV = Clamp; AddressW = Clamp; }; sampler NormalSampler = sampler_state { Texture = NormalMap; MipFilter = Linear; MinFilter = Linear; MagFilter = Linear; AddressU = Clamp; AddressV = Clamp; AddressW = Clamp; }; sampler RandomSampler = sampler_state { Texture = RandomMap; MipFilter = Linear; MinFilter = Linear; MagFilter = Linear; }; struct VertexInput { float4 Position : POSITION0; float2 TextureCoordinates : TEXCOORD0; }; struct PixelInput { float4 Position : POSITION0; float2 TextureCoordinates : TEXCOORD0; }; PixelInput SSAOVertexShader(VertexInput input) { PixelInput pi = ( PixelInput ) 0; pi.Position = input.Position; pi.TextureCoordinates = input.TextureCoordinates; return pi; } float3 GetXYZ(float2 uv) { float depth = tex2D(DepthSampler, uv); float2 xy = uv * 2.0f - 1.0f; xy.y *= -1; float4 p = float4(xy, depth, 1); 
float4 q = mul(p, ViewProjectionInverse); return q.xyz / q.w; } float3 GetNormal(float2 uv) { return DecodeNormal(tex2D(NormalSampler, uv)); } float4 SSAOPixelShader(PixelInput input) : COLOR0 { float depth = tex2D(DepthSampler, input.TextureCoordinates); float3 position = GetXYZ(input.TextureCoordinates); float3 normal = GetNormal(input.TextureCoordinates); float occlusion = 1.0f; float3 reflectionRay = DecodeNormal(tex2D(RandomSampler, input.TextureCoordinates + NoiseOffset)); for (int i = 0; i < 16; i++) { float3 sampleXYZ = position + reflect(samples[i], reflectionRay) * Size; float4 screenXYZW = mul(float4(sampleXYZ, 1.0f), ViewProjection); float3 screenXYZ = screenXYZW.xyz / screenXYZW.w; float2 sampleUV = float2(screenXYZ.x * 0.5f + 0.5f, 1.0f - (screenXYZ.y * 0.5f + 0.5f)); float frontMostDepthAtSample = tex2D(DepthSampler, sampleUV); if (frontMostDepthAtSample < screenXYZ.z) { occlusion -= 1.0f / 16.0f; } } return float4(occlusion * Intensity * float3(1.0, 1.0, 1.0), 1.0); } technique SSAO { pass Pass0 { VertexShader = compile vs_3_0 SSAOVertexShader(); PixelShader = compile ps_3_0 SSAOPixelShader(); } } However, when I use the effect, I get some pretty bad distortion: Here's the light map that goes with it -- is the static-like effect supposed to be like that? I've noticed that even if I'm looking at nothing, I still get the static-like effect. (you can see it in the screenshot; the top half doesn't have any geometry yet it still has the static-like effect) Also, does anyone have any advice on how to effectively debug shaders?

    Read the article

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems and have thought that the idea of an entity just being an ID is quite interesting. However, I don't completely understand how this works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine if collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing that I've read often is that the 'type' of an entity is inferred by what components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered, and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
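    One common answer to "where does the data live": components sit in per-type stores keyed by entity ID, so input, physics and rendering all read the same component instance, and a system is simply code that iterates entities carrying the component types it needs. A hedged Java sketch; the class names are illustrative and not taken from any specific framework, and a real world would also track which entities own which components for fast iteration.

      import java.util.HashMap;
      import java.util.Map;

      // Entities are just ints. Each component type has its own store keyed by entity ID.
      class World {
          private int nextId = 0;
          private final Map<Class<?>, Map<Integer, Object>> stores = new HashMap<>();

          int createEntity() { return nextId++; }

          <T> void add(int entity, T component) {
              stores.computeIfAbsent(component.getClass(), k -> new HashMap<>())
                    .put(entity, component);
          }

          @SuppressWarnings("unchecked")
          <T> T get(int entity, Class<T> type) {
              Map<Integer, Object> store = stores.get(type);
              return store == null ? null : (T) store.get(entity);
          }
      }

      class Bounds { float x, y, w, h; }
      class Velocity { float dx, dy; }

      // A "system" only declares which components it reads/writes; entities lacking
      // either component are simply skipped, so no per-entity type is needed.
      class MovementSystem {
          void update(World world, Iterable<Integer> entities, float dt) {
              for (int e : entities) {
                  Bounds b = world.get(e, Bounds.class);
                  Velocity v = world.get(e, Velocity.class);
                  if (b != null && v != null) { b.x += v.dx * dt; b.y += v.dy * dt; }
              }
          }
      }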

    Read the article

  • What is wrong with my Dot Product? [Javascript]

    - by Clay Ellis Murray
    I am trying to make a Pong game, and I wanted to use dot products to do the collisions with the paddles; however, whenever I compute the dot product it never changes much from .9. This is my code to make vectors: vector = { make:function(object){ return [object.x + object.width/2,object.y + object.height/2] }, normalize:function(v){ var length = Math.sqrt(v[0] * v[0] + v[1] * v[1]) v[0] = v[0]/length v[1] = v[1]/length return v }, dot:function(v1,v2){ return v1[0] * v2[0] + v1[1] * v2[1] } } and this is where I am calculating the dot in my code: vector1 = vector.normalize(vector.make(ball)) vector2 = vector.normalize(vector.make(object)) dot = vector.dot(vector1,vector2) Here is a JsFiddle of my code; currently the paddles don't move. Any help would be greatly appreciated.
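    A likely explanation for the near-constant value: dotting two normalized centre positions measures the angle between the ball and the paddle as seen from the canvas origin, and that angle barely changes while both sit far from the corner, so it says nothing about whether they touch. An overlap test on the objects' extents (or on the difference vector between centres) is what a paddle collision needs; a minimal hedged sketch, shown here in Java rather than the original JavaScript, with the x/y/width/height fields mirroring the poster's objects:

      // Axis-aligned overlap: compares extents directly instead of the angle between
      // the two (normalized) position vectors.
      static boolean intersects(float ax, float ay, float aw, float ah,
                                float bx, float by, float bw, float bh) {
          return ax < bx + bw && ax + aw > bx
              && ay < by + bh && ay + ah > by;
      }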

    Read the article

  • Unity custom shaders and z-fighting

    - by Heisenbug
    I've just read a chapter of Unity iOS Essentials by Robert Wiebe. It shows a solution for handling a z-fighting problem that occurs while rendering a street on a plane at the same y offset. Basically, it modifies the Normal-Diffuse shader provided by Unity, specifying the (texture?) offset as -1, -1. Here's basically what the shader looks like: Shader "Custom/ModifiedNormalDiffuse" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _MainTex ("Base (RGB)", 2D) = "white" {} } SubShader { Offset -1,-1 //THIS IS THE ADDED LINE Tags { "RenderType"="Opaque" } LOD 200 CGPROGRAM #pragma surface surf Lambert sampler2D _MainTex; fixed4 _Color; struct Input { float2 uv_MainTex; }; void surf (Input IN, inout SurfaceOutput o) { half4 c = tex2D (_MainTex, IN.uv_MainTex) *_Color; o.Albedo = c.rgb; o.Alpha = c.a; } ENDCG } FallBack "Diffuse" } Ok. That's simple and it works. The author says about it: ...we could use a copy of the shader that draw the road at an Offset of -1, -1 so that whenever the two textures are drawn, the road is always drawn last. I don't know Cg or GLSL, but I have a little bit of experience with HLSL. Anyway, I can't figure out what exactly is going on. Could anyone explain to me what exactly the Offset directive does, and how it solves z-fighting problems?
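    ShaderLab's Offset factor, units sets a depth bias: fragments of that pass are nudged nearer to (negative values) or farther from the camera in the depth buffer, so two coplanar surfaces stop trading the depth test back and forth. It is the same mechanism OpenGL exposes as glPolygonOffset. A hedged LWJGL-flavoured Java sketch of the equivalent raw-GL calls, only to show what the directive maps to (Unity applies this for you, and the Runnable callbacks here are placeholders):

      import org.lwjgl.opengl.GL11;

      // Draw the base plane normally, then draw the coplanar road with a small
      // negative depth bias so it reliably wins the depth test.
      static void drawRoadWithDepthBias(Runnable drawPlane, Runnable drawRoad) {
          drawPlane.run();

          GL11.glEnable(GL11.GL_POLYGON_OFFSET_FILL);
          GL11.glPolygonOffset(-1.0f, -1.0f);   // roughly what "Offset -1, -1" requests
          drawRoad.run();
          GL11.glPolygonOffset(0.0f, 0.0f);
          GL11.glDisable(GL11.GL_POLYGON_OFFSET_FILL);
      }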

    Read the article

  • Please help with bounding box/sprite collision in darkBASIC pro

    - by user1601163
    So I just recently learned BASIC and figured I would try making a clone of pong on my own in darkBASIC pro, and I made everything else work just fine except for the part that makes the ball bounce off the paddle. And yes I'm aware that the game is not yet finished. The error is on lines 39-51 EVERYTHING IS 2D. /////////////////////////////////////////////////////////// // // Project: Pong // Created: Friday, August 31, 2012 // Code: Brandon Spaulding // Art: Brandon Spaulding // Made in CIS lab at CPAVTS // Pong art and code © Brandon Spaulding 2012-2013 // ////////////////////////////////////////////////////////// y=150 x=0 ay=150 ax=612 ballx=300 bally=200 ballx_DIR=1 bally_DIR=1 hide mouse set global collision on //objectnumber=10 //make object box objectnumber,5,150,0 do load image "media\paddle1.png",1 load image "media\paddle2.png",2 load image "media\ball.png",3 sprite 1,x,y,1 sprite 2,ax,ay,2 sprite 3,ballx,bally,3 if upkey()=1 then y = y - 4 if downkey()=1 then y = y + 4 //num_1 = sprite collision(1,0) //num_2 = sprite collision(2,0) num_3 = sprite collision(3,0) for t=1 to 2 //ball&paddle collision if num_3 > 0 if bally_DIR=1 bally_DIR=0 else bally_DIR=1 endif if ballx_DIR=0 ballx_DIR=1 else ballx_DIR=0 endif endif //if bally > 1 and bally < 500 then bally=bally + 2.5 if bally_DIR=1 bally=bally-2.5 if bally<-2.5 bally_DIR=0 endif else bally=bally+2.5 if bally>452.5 bally_DIR=1 endif endif if ballx_DIR=1 ballx=ballx-2.5 if ballx<-2.5 ballx_DIR=0 endif else ballx=ballx+2.5 if ballx>612 ballx_DIR=1 endif endif //bally = bally + t //if bally < 600 or bally > 1 then bally = bally - 2.5 //if ballx < 400 or ballx > 1 then ballx = ballx + 2.5 //move sprite 3,1 next t if escapekey()=1 then exit loop end Thank you in advance for the help.

    Read the article

  • Spritesheet per pixel collision XNA

    - by Jixi
    So basically I'm using this: public bool IntersectPixels(Rectangle rectangleA, Color[] dataA, Rectangle rectangleB, Color[] dataB) { int top = Math.Max(rectangleA.Top, rectangleB.Top); int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom); int left = Math.Max(rectangleA.Left, rectangleB.Left); int right = Math.Min(rectangleA.Right, rectangleB.Right); for (int y = top; y < bottom; y++) { for (int x = left; x < right; x++) { Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width]; Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width]; if (colorA.A != 0 && colorB.A != 0) { return true; } } } return false; } in order to detect collisions, but I'm unable to figure out how to use it with animated sprites. This is my animation update method: public void AnimUpdate(GameTime gameTime) { if (!animPaused) { animTimer += (float)gameTime.ElapsedGameTime.TotalMilliseconds; if (animTimer > animInterval) { currentFrame++; animTimer = 0f; } if (currentFrame > endFrame || endFrame <= currentFrame || currentFrame < startFrame) { currentFrame = startFrame; } objRect = new Rectangle(currentFrame * TextureWidth, frameRow * TextureHeight, TextureWidth, TextureHeight); origin = new Vector2(objRect.Width / 2, objRect.Height / 2); } } which works with multiple rows and columns, and here is how I call the intersect: public bool IntersectPixels(Obj me, Vector2 pos, Obj o) { Rectangle collisionRect = new Rectangle(me.objRect.X, me.objRect.Y, me.objRect.Width, me.objRect.Height); collisionRect.X += (int)pos.X; collisionRect.Y += (int)pos.Y; if (IntersectPixels(collisionRect, me.TextureData, o.objRect, o.TextureData)) { return true; } return false; } Now my guess is that I have to update the TextureData every time the frame changes, no? If so, then I already tried that and failed miserably :P Any hints or advice? If you need to see any more of my code just let me know and I'll update the question. Updated, almost functional collisionRect: collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y, me.Texture.Width / (int)((me.frameCount - 1) * me.TextureWidth), me.Texture.Height); What it does now is "move" the block up 50%, which shouldn't be too hard to figure out. Update: Alright, so here's a functional collision rectangle (besides the height issue): collisionRect = new Rectangle((int)me.Position.X, (int)me.Position.Y, me.TextureWidth / (int)me.frameCount - 1, me.TextureHeight); Now the problem is that, using breakpoints, I found out that it's still not getting the correct color values of the animated sprite. So it detects properly, but the color values are always: R:0 G:0 B:0 A:0 ??? Disregard that, it's not true after all =P For some reason the collision area height is now only 1 pixel.

    Read the article

  • How to break terrain (in blender) into Chunks for a game engine

    - by Red
    I've created an island in Blender: 2048x2048 Blender units. The engine developer wants me to split the terrain into 128x128 "chunks", so that would be 16x16 chunks from a top-down view. The engine isn't using height maps, in order to allow caves, 3D overhangs, etc. I'm not in charge of the engine, so suggestions about that aren't needed here :/ I just need a good, seamless way to split a terrain mesh into 16x16 pieces without leaving holes. I'm very new to Blender, so baby steps are very much welcome :) Here's a quick render of what I've got so far. I added a plane to show sea level, but I've since removed it. http://i.imgur.com/qTsoC.png

    Read the article
