Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • Simultaneous AI in turn based games

    - by Eduard Strehlau
I want to hack together a roguelike. Now I thought about entity and world representation and ran into quite a big problem. If you want all the AI to act simultaneously, you would normally (in a cellular automaton, for example) just copy the cell buffer and let the actions of individual cells depend on the copy. Actions which are no longer valid after an earlier cell changed the original environment (blocking the path) are either ignored or reapplied against the "current" (between-turns) environment. After all cells have acted, you copy the current map to the buffer again. Now, for an environment with complex AI and big (data-wise) entities, the copying would take too long. So I thought you could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Every interaction in this queue is between really interacting entities, so if an entity tries to attack another entity it sends a message to it; the consequences of the attack would be visible next turn, either by just examining the entity or by asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but still has queued actions or is messaged later on (all messages to it would go to null, and the messages from the entity would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which by itself tracks the player (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to the "needs update" list as a special case, so it doesn't exist in the environment yet still operates on it, spawning after the action list is evaluated? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.
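A minimal sketch of the queue idea described above, in Python (all names here are my own invention, not from any library): entities decide while reading a frozen world state, actions are queued, and the queue is resolved in a second phase, with actions invalidated by earlier deaths simply fizzling:

    # Deferred action queue for simultaneous turn resolution (illustrative sketch).
    class Entity:
        def __init__(self, name, hp):
            self.name, self.hp, self.alive = name, hp, True

    class Attack:
        def __init__(self, attacker, target, damage):
            self.attacker, self.target, self.damage = attacker, target, damage

        def resolve(self):
            # An action invalidated earlier in the queue (dead sender or
            # dead receiver) fizzles instead of raising an error.
            if self.attacker.alive and self.target.alive:
                self.target.hp -= self.damage
                self.target.alive = self.target.hp > 0

    def resolve_turn(entities, decide):
        queue = []
        # Phase 1: everyone decides against the same unmodified world state.
        for entity in entities:
            if entity.alive:
                queue.extend(decide(entity, entities))
        # Phase 2: apply every queued action; the environment changes only here.
        for action in queue:
            action.resolve()

For the fireball case, this structure suggests a queued Spawn action: the projectile enters the environment only when the queue is executed, and takes its first tracking step on the following turn.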

    Read the article

  • How to balance a non-symmetric "extension" based game?

    - by Klaim
Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules, and new sets of card types are created regularly. I remember that the first editions of the game are said to have been banned from official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules on cards, and they managed to balance the game apparently effectively, even though thousands of different cards are possible. I'm working on a strategy game that is in a somewhat similar position: all units are provided by extensions, and the game is meant to be extended for some years at least. The variety of unit effects is very large, even with some basic design limitations in place to keep it manageable. Each player chooses a set of units to play with (defining their global strategy) before playing, like choosing a themed deck of Magic cards. As it's a strategy game (you can think of Magic as a strategy game too, from some points of view), it's essentially skirmish-based, so the game has to be fair even if the players don't choose the same units before starting to play. So, how do you proceed to balance this type of non-symmetric (strategy) game when you know it will always be extended? For the moment I'm trying to apply these rules, but I'm not sure they're right because I don't have enough design experience to know: each unit should provide one unique effect; each unit should have an opposite unit whose effect cancels it out; some limitations based on the gameplay; try to get a lot of beta tests before each extension release. Does it look like I'm in the most complex case?

    Read the article

  • Unity3D - Projection matrix camera frustum

    - by MulletDevil
I've used off-centre projection to create a custom projection matrix for my camera. When I run the game I can see the scene correctly in the game view, but in the editor view the camera frustum is not correct: it still shows the original frustum shape, not the new one. It also appears that Unity is using the original frustum for frustum culling rather than the new one, as I can see objects being culled which are visible to the new frustum but would not be visible in the old one. Am I wrong in thinking that a custom projection matrix would alter the view frustum? Or am I missing something else?

    Read the article

  • CSM DX11 issues

    - by KaiserJohaan
I got CSM to work in OpenGL, and now I'm trying to do the same in DirectX. I'm using the same math library and I'm pretty much using the algorithm straight off. I am using right-handed, column-major matrices from GLM. The light looks along (-1, -1, -1). The problem I have is twofold:

1. For some reason, the ground floor is causing a lot of (false) shadow artifacts, like the vast shadowed area you can see. I confirmed this when I disabled the ground for the depth pass, but that's a hack more than anything else.
2. The shadows are inverted compared to the shadow map. If you squint you can see the chairs' shadows should be mirrored instead.

This is the first cascade shadow map, in range of the alien and the chair. I can't figure out why this is. This is the depth pass:

    for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++)
    {
        mShadowmap.BindDepthView(context, cascadeIndex);
        CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio,
            nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix);
        lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir);
        mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]);
        lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex];
        farDistArr[cascadeIndex] = -farDistArr[cascadeIndex];
    }

    CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio,
                                           const float minDist, const float maxDist,
                                           const Mat4& cameraViewMatrix)
    {
        CameraFrustrum ret = {
            Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f),
            Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f),
            Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f),
            Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f),
        };

        const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist);
        const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix);

        for (Vec4& corner : ret)
        {
            corner = invMVP * corner;
            corner /= corner.w;
        }

        return ret;
    }

    Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
    {
        Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f));

        Vec4 transf = lightViewMatrix * cameraFrustrum[0];
        float maxZ = transf.z, minZ = transf.z;
        float maxX = transf.x, minX = transf.x;
        float maxY = transf.y, minY = transf.y;
        for (uint32_t i = 1; i < 8; i++)
        {
            transf = lightViewMatrix * cameraFrustrum[i];
            if (transf.z > maxZ) maxZ = transf.z;
            if (transf.z < minZ) minZ = transf.z;
            if (transf.x > maxX) maxX = transf.x;
            if (transf.x < minX) minX = transf.x;
            if (transf.y > maxY) maxY = transf.y;
            if (transf.y < minY) minY = transf.y;
        }

        Mat4 viewMatrix(lightViewMatrix);
        viewMatrix[3][0] = -(minX + maxX) * 0.5f;
        viewMatrix[3][1] = -(minY + maxY) * 0.5f;
        viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
        viewMatrix[0][3] = 0.0f;
        viewMatrix[1][3] = 0.0f;
        viewMatrix[2][3] = 0.0f;
        viewMatrix[3][3] = 1.0f;

        Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5);

        return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y,
                                  halfExtents.z, -halfExtents.z) * viewMatrix;
    }

And this is the pixel shader used for the lighting stage:

    #define DEPTH_BIAS 0.0005
    #define NUM_CASCADES 4

    cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL)
    {
        float4x4 gSplitVPMatrices[NUM_CASCADES];
        float4x4 gCameraViewMatrix;
        float4 gSplitDistances;
        float4 gLightColor;
        float4 gLightDirection;
    };

    Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION);
    Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE);
    Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL);
    Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH);
    SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH);

    float4 ps_main(float4 position : SV_Position) : SV_Target0
    {
        float4 worldPos = gPositionTexture[uint2(position.xy)];
        float4 diffuse = gDiffuseTexture[uint2(position.xy)];
        float4 normal = gNormalTexture[uint2(position.xy)];

        float4 camPos = mul(gCameraViewMatrix, worldPos);

        uint index = 3;
        if (camPos.z > gSplitDistances.x)
            index = 0;
        else if (camPos.z > gSplitDistances.y)
            index = 1;
        else if (camPos.z > gSplitDistances.z)
            index = 2;

        float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos);
        float viewDepth = projCoords.z - DEPTH_BIAS;
        projCoords.z = float(index);

        float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth);

        float angleNormal = clamp(dot(normal, gLightDirection), 0, 1);

        return visibilty * diffuse * angleNormal * gLightColor;
    }

As you can see, I am using a depth bias and a bias matrix. Any hints on why this behaves so weirdly?

    Read the article

  • Importing Models from Maya to OpenGL

    - by Mert Toka
I am looking for ways to import models into my game project. I am using Maya as my modelling software and GLUT for my game's windowing. I found a great parser; it imports all the textures and normal vectors, but it is written against the .obj files that 3ds Max exports. I tried to use it with Maya's OBJs, and it turned out that Maya's .obj files differ slightly from 3ds Max's, so it cannot parse them. If you know any way to convert Maya .obj files to 3ds Max .obj files, that would be as acceptable as a new parser for Maya .obj files.
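For what it's worth, the usual incompatibility is not the geometry itself but face syntax and extra statements: exporters variously write faces as v, v/vt, v//vn or v/vt/vn, plus g/s/usemtl lines that a strict parser may choke on. A minimal, deliberately forgiving OBJ reader in Python (a sketch, not a drop-in replacement for the parser mentioned above):

    def load_obj(path):
        # Tolerant Wavefront OBJ reader; handles the face variants
        # v, v/vt, v//vn and v/vt/vn that different exporters emit.
        positions, texcoords, normals, faces = [], [], [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts or parts[0].startswith('#'):
                    continue
                if parts[0] == 'v':
                    positions.append(tuple(map(float, parts[1:4])))
                elif parts[0] == 'vt':
                    texcoords.append(tuple(map(float, parts[1:3])))
                elif parts[0] == 'vn':
                    normals.append(tuple(map(float, parts[1:4])))
                elif parts[0] == 'f':
                    face = []
                    for vert in parts[1:]:
                        # Pad so 'v', 'v/vt' and 'v//vn' all yield three slots;
                        # OBJ indices are 1-based, missing slots become None.
                        idx = (vert.split('/') + ['', ''])[:3]
                        face.append(tuple(int(i) - 1 if i else None for i in idx))
                    faces.append(face)
                # g, s, o, usemtl, mtllib etc. are deliberately ignored here.
        return positions, texcoords, normals, faces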

    Read the article

  • Why does my terrain turn white when I get close to it?

    - by Starkers
When I zoom in on my terrain it goes white: the further in I zoom, the greater the whiteness becomes. Is this normal? Is this to speed up rendering or something? Can I turn it off? I'm also getting these error messages in the console over and over again: rc.right != m_GfxWindow->GetWidth() || rc.bottom != m_GfxWindow->GetHeight() and GUI Window tries to begin rendering while something else has not finished rendering! Either you have a recursive OnGUI rendering, or the previous OnGUI did not clean up properly. Does this have any bearing on the issue? Update: I create virtual desktops to flit between using the program Deskpot. Turning this program off and restarting has stopped the above errors appearing in the console. However, I still get white terrain when I zoom in, without a single error message. I've restarted my computer to no avail. I have an Asus NVidia GeForce GTX 760 2GB DDR5 Direct CU II OC Edition graphics card. Any known issues? Update: I don't think it's fog...

    Read the article

  • Collision filtering techniques

    - by Griffin
I was wondering what efficient techniques are out there for mapping collision filtering between various bodies, sub-bodies, and so forth. I'm familiar with the simple idea of having different layers of 2D bodies, but this is not sufficient for more complex mappings. (Think of having sub-bodies of a body, such as limbs, collide with each other by placing them on the same layer, and then wanting only the legs to collide with the ground while the arms would not.) This can be solved with a multidimensional layer setup, but I would probably end up creating more and more layers until the simplicity and efficiency of layer filtering were gone. Are there any more sophisticated ways to solve even more complex situations than this?
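One well-established step beyond plain layers is the category/mask bit-field scheme used by physics engines such as Box2D: each body carries a bit saying what it is and a mask saying what it may touch, and a pair collides only if both masks agree. A small Python sketch (the category names are invented for the limb example above):

    # Box2D-style category/mask filtering. Each body says what it IS
    # (category) and what it may COLLIDE WITH (mask).
    GROUND = 0x0001
    LEG    = 0x0002
    ARM    = 0x0004

    class Fixture:
        def __init__(self, category, mask):
            self.category = category
            self.mask = mask

    def should_collide(a, b):
        # Both directions must agree, keeping the relation symmetric.
        return (a.category & b.mask) != 0 and (b.category & a.mask) != 0

    ground = Fixture(GROUND, 0xFFFF)            # ground collides with anything willing
    leg    = Fixture(LEG, GROUND | LEG | ARM)   # legs hit the ground and other limbs
    arm    = Fixture(ARM, LEG | ARM)            # arms hit limbs but never the ground

    assert should_collide(leg, ground)
    assert should_collide(arm, leg)
    assert not should_collide(arm, ground)

Box2D additionally layers a group index on top of this for per-body overrides, which covers most "more and more layers" explosions with a single pair of 16-bit fields per fixture.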

    Read the article

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
I noticed that there is quite a difference between 3D and 2D rendering in OpenGL; the techniques are different, and pixel-perfect placement is much more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?

    Read the article

  • Collision detection of convex shapes on voxel terrain

    - by Dave
I have some standard convex shapes (cubes, capsules) on voxel terrain. It is very easy to detect single-vertex collisions, but it becomes computationally expensive when many vertices are involved. To clarify: currently my algorithm represents a cube as multiple vertices covering every face of the cube, not just the corners. This is because the cubes can be much bigger than the voxels, so multiple sample points (vertices) are required (the distance between sample points must be at most the width of a voxel). This very rapidly becomes intractable. It would be great if there were some standard algorithm(s) for collision detection between convex shapes and arbitrary voxel-based terrain (as there is with OBBs and the separating axis theorem, etc.). Any help much appreciated.
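One standard reorganization is to flip the loop: instead of pasting sample vertices over the shape, walk only the voxels inside the shape's AABB and run a narrow-phase test per solid voxel, so the cost scales with the overlapped volume rather than with how densely the surface had to be sampled. A Python sketch, assuming unit-sized voxels and a voxel_solid(x, y, z) lookup (both assumptions mine); overlaps_aabb is the per-shape narrow phase (a closest-point distance check for spheres/capsules, SAT for boxes):

    import math
    from itertools import product

    def colliding_voxels(aabb_min, aabb_max, overlaps_aabb, voxel_solid):
        # Yield every solid voxel whose box the convex shape touches.
        x0, y0, z0 = (int(math.floor(c)) for c in aabb_min)
        x1, y1, z1 = (int(math.floor(c)) for c in aabb_max)
        for x, y, z in product(range(x0, x1 + 1),
                               range(y0, y1 + 1),
                               range(z0, z1 + 1)):
            if voxel_solid(x, y, z) and \
               overlaps_aabb((x, y, z), (x + 1, y + 1, z + 1)):
                yield (x, y, z)

For a capsule, for instance, overlaps_aabb reduces to clamping the voxel's box against the capsule's segment and comparing the distance to the radius, so no surface sampling is involved at all.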

    Read the article

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
I have just spent my first half hour playing with the shader language. I have modified the basic program I have, which renders the texture, to allow me to colour the texture:

    varying vec2 texCoord;
    uniform sampler2D texSampler;

    /* Given the texture coordinates, our pixel shader grabs the corresponding
     * color from the texture. */
    void main()
    {
        //gl_FragColor = texture2D(texSampler, texCoord);
        gl_FragColor = vec4(0,1,0,1) * vec4(texture2D(texSampler, texCoord).xyz, 1);
    }

I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel, which would explain why previously transparent areas appear totally black. If I use the following line instead, I am shown the transparent areas:

    gl_FragColor = vec4(0,1,0,1) * vec4(texture2D(texSampler, texCoord).aaa, 1);

How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminance, but my tweaks with those and the .aaa property are not working XD

    Read the article

  • Detecting long held keys on keyboard

    - by Robinson Joaquin
I just want to ask if I can check for a keyboard key that is held/pressed for a long time, because I am about to create a clone of Breakout crossed with air hockey for two human players. Here's the list of my concerns: Do I need another/third-party library for key holds? Is multi-threading needed? I don't know anything about this multi-threading stuff and wasn't planning on using it (I'm just a newbie). One more thing: what if the two players press their respective keys at the same time? How can I program this to avoid errors, or worse, one player's key being prioritized ahead of the other player's? Example: Player 1 = W for UP & S for DOWN; Player 2 = O for UP & L for DOWN (say W & L are pressed at the same time). PS: I use GLUT for the visuals of the game.
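GLUT can handle this without threads or third-party libraries: register both key-down and key-up callbacks, keep a set of currently held keys, and apply movement once per frame for every key in the set, so simultaneously held keys from both players are treated symmetrically. A sketch using PyOpenGL's GLUT bindings (the paddle objects and SPEED constant are stand-ins, and the fragment assumes it runs after glutInit/glutCreateWindow):

    from OpenGL.GLUT import (glutKeyboardFunc, glutKeyboardUpFunc,
                             glutIgnoreKeyRepeat, glutIdleFunc)

    class Paddle:
        def __init__(self):
            self.y = 0.0

    paddle1, paddle2 = Paddle(), Paddle()
    SPEED = 2.0
    held_keys = set()   # every key currently held down

    def on_key_down(key, x, y):
        held_keys.add(key)

    def on_key_up(key, x, y):
        held_keys.discard(key)

    def update():
        # Polled every frame: W and L held together both apply,
        # so neither player is prioritized over the other.
        if b'w' in held_keys: paddle1.y += SPEED
        if b's' in held_keys: paddle1.y -= SPEED
        if b'o' in held_keys: paddle2.y += SPEED
        if b'l' in held_keys: paddle2.y -= SPEED

    glutIgnoreKeyRepeat(1)          # track holds ourselves, ignore OS key repeat
    glutKeyboardFunc(on_key_down)
    glutKeyboardUpFunc(on_key_up)
    glutIdleFunc(update)

The same pattern works in C GLUT with glutKeyboardFunc/glutKeyboardUpFunc and a bool array indexed by key code.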

    Read the article

  • Corona SDK: Quality of support and resources?

    - by Nick Wiggill
I've not used Corona SDK before and am looking into it for a friend (he is also considering Unity). What is the support like for Corona? While Unity has a great many customers, so the Unity team often can't address issues directly, the community at large is very helpful and there are many excellent resources: tutorials, forum posts, code samples. What is Corona like in this regard, and by comparison?

    Read the article

  • Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
I have the following code inside a while True loop:

    if abs(playerx) < MAXSPEED:
        if moveLeft:
            playerx -= 1
        if moveRight:
            playerx += 1
    if abs(playery) < MAXSPEED:
        if moveDown:
            playery += 1
        if moveUp:
            playery -= 1
    if moveLeft == False and abs(playerx) > 0:
        playerx += 1
    if moveRight == False and abs(playerx) > 0:
        playerx -= 1
    if moveUp == False and abs(playery) > 0:
        playery += 1
    if moveDown == False and abs(playery) > 0:
        playery -= 1
    player.x += playerx
    player.y += playery
    if player.left < 0 or player.right > 1000:
        player.x -= playerx
    if player.top < 0 or player.bottom > 600:
        player.y -= playery

The intended result is that while an arrow key is pressed, playerx or playery increments by one on every iteration until it reaches MAXSPEED, and then stays at MAXSPEED; and when the player stops pressing that arrow key, the speed decreases until it reaches 0. To me, this code explicitly says that... But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED, and the player keeps moving even after the arrow key is released. I keep rereading, but I'm completely baffled by this weird behaviour. Any insights? Thanks.
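Two things stand out in the code above, for what it's worth: when no horizontal key is held, the moveLeft == False and moveRight == False branches both fire, adding and subtracting 1 so the speed never decays; and while, say, moveRight is held, the moveLeft == False branch adds an extra 1 every frame with no MAXSPEED check, which is why the cap is blown past. A sketch of a sign-aware version of the intended behaviour (same variable names, assumed to live in the same loop):

    def damp(velocity, key_pos, key_neg, max_speed):
        # Accelerate while a key is held, otherwise decay toward zero.
        if key_pos and velocity < max_speed:
            velocity += 1
        elif key_neg and velocity > -max_speed:
            velocity -= 1
        elif not key_pos and not key_neg and velocity > 0:
            velocity -= 1
        elif not key_pos and not key_neg and velocity < 0:
            velocity += 1
        return velocity

    # Once per loop iteration:
    playerx = damp(playerx, moveRight, moveLeft, MAXSPEED)
    playery = damp(playery, moveDown, moveUp, MAXSPEED)

Keeping the accelerate and decay cases in one if/elif chain guarantees exactly one adjustment per frame, which the original pair of independent if statements violates.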

    Read the article

  • Registering InputListener in libGDX

    - by JPRO
I'm just getting started with libGDX and have run into a snag registering an InputListener for a button. I've gone through many examples, and this code appears correct to me, but the associated callback never triggers ("touched" is never printed to the console). I'm posting just the code for the abstract game screen and the implementing screen. The application starts successfully with a label of "Exit" in the bottom-left corner, but clicking the button/label does nothing. I'm guessing the fix is something simple. What am I overlooking?

    public abstract class GameScreen<T> implements Screen {
        protected final T game;
        protected final SpriteBatch batch;
        protected final Stage stage;

        public GameScreen(T game) {
            this.game = game;
            this.batch = new SpriteBatch();
            this.stage = new Stage(0, 0, true);
        }

        @Override
        public final void render(float delta) {
            update(delta);
            // Clear the screen with the given RGB color (black)
            Gdx.gl.glClearColor(0f, 0f, 0f, 1f);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            stage.act(delta);
            stage.draw();
        }

        public abstract void update(float delta);

        @Override
        public void resize(int width, int height) {
            stage.setViewport(width, height, true);
        }

        @Override
        public void show() {
            Gdx.input.setInputProcessor(stage);
        }

        // hide, pause, resume, dispose
    }

    public class ExampleScreen extends GameScreen<MyGame> {
        private TextButton exitButton;

        public ExampleScreen(MyGame game) {
            super(game);
        }

        @Override
        public void show() {
            super.show();
            TextButton.TextButtonStyle buttonStyle = new TextButton.TextButtonStyle();
            buttonStyle.font = Font.getFont("Origicide", 32);
            buttonStyle.fontColor = Color.WHITE;
            exitButton = new TextButton("Exit", buttonStyle);
            exitButton.addListener(new InputListener() {
                @Override
                public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                    System.out.println("touched");
                }
            });
            stage.addActor(exitButton);
        }

        @Override
        public void update(float delta) {
        }
    }

    Read the article

  • Cannot compute wNear and wFar from projection matrix

    - by DeadMG
I've got the following error from Direct3D when attempting to render in 3D:

    Direct3D9: (WARN) :Cannot compute WNear and WFar from the supplied projection matrix
    Direct3D9: (WARN) :Setting wNear to 0.0 and wFar to 1.0

My projection matrix is as follows:

    D3DXMatrixPerspectiveFovLH(
        &Projection,
        D3DXToRadian(90),
        (float)GetDimensions().x / (float)GetDimensions().y,
        NearPlane,
        FarPlane
    );
    D3DCALL(device->SetTransform(D3DTS_PROJECTION, &Projection));

NearPlane is 0.1f, FarPlane is 40.0f, and the dimensions are 1920x1018. This code was working earlier, but I appear to have broken it, and I'm not sure where the fault is. Previously I've only encountered this warning when NearPlane was 0, and Google hasn't suggested any other causes either. Any suggestions?

    Read the article

  • Early Z culling - Ogre

    - by teodron
This question is concerned with how one can enable this "pixel filter" to work within an Ogre-based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

    lighting off
    colour_write off
    shading flat

The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way. From one side they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side they exhibit the desired behaviour. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would need to be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix and move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) to perform directional lighting in my vertex shader. The problem is that a light direction of (0, 1, 0), for example, is relative to where the camera is looking and not to the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis, the ground is lit correctly; however, if I point the camera straight at the ground, it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

    pMatrix = new Matrix();
    pMatrix = makePerspective(...)

    mvMatrix = new Matrix()
    camera.apply(mvMatrix); // Calls gluLookAt

    // Move the object into position.
    mvMatrix.translatev(position);
    mvMatrix.rotatef(rotation.x, 1, 0, 0);
    mvMatrix.rotatef(rotation.y, 0, 1, 0);
    mvMatrix.rotatef(rotation.z, 0, 0, 1);

    var nMatrix = new Matrix();
    nMatrix.set(mvMatrix.get().getInverse().getTranspose());

    // Set vertex shader uniforms.
    gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

    // ...
    gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

And the corresponding vertex shader:

    // Attributes
    attribute vec3 aVertexPosition;
    attribute vec4 aVertexColor;
    attribute vec3 aVertexNormal;

    // Uniforms
    uniform mat4 uMVMatrix;
    uniform mat4 uNMatrix;
    uniform mat4 uPMatrix;

    // Varyings
    varying vec4 vColor;

    // Constants
    const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
    const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

    float ComputeLighting() {
        vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
        transformedNormal = uNMatrix * transformedNormal;
        float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
        return max(base, 0.0);
    }

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        float lightWeight = ComputeLighting();
        vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
    }

Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL, if it's missing, would be appreciated.
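The underlying issue in setups like this is usually a frame mismatch: a normal matrix built from the full model-view maps normals into eye space, while a constant like LIGHT_DIRECTION = (0, 1, 0) is stated in world space, so dotting the two compares vectors in different coordinate systems and the light appears to ride along with the camera. A small numpy sketch of the usual bookkeeping (column-vector convention assumed; this is the standard approach, not the poster's exact code):

    import numpy as np

    def normal_matrix(model_view):
        # Inverse-transpose of the upper-left 3x3: carries normals from
        # model space into EYE space, correct even under non-uniform scale.
        return np.linalg.inv(model_view[:3, :3]).T

    def eye_space_light(view, light_dir_world):
        # Rotate a fixed WORLD-space light direction into eye space so it
        # can be dotted against eye-space normals.
        d = view[:3, :3] @ np.asarray(light_dir_world, dtype=float)
        return d / np.linalg.norm(d)

Either transform the light into eye space like this (and pass it as a uniform rather than a constant), or do the lighting in world space and build the normal matrix from the model matrix alone; what breaks is mixing the two.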

    Read the article

  • Game Center: Leaderboard score inconsistencies

    - by Hasyimi Bahrudin
Background: I'm currently developing a simple library that mirrors Game Center's functionality locally. Basically, this library is a system that manages achievements and leaderboards and optionally syncs them with Game Center. So, if the game is not GC enabled, the game will still have achievements and leaderboards (stored inside a plist), but of course the leaderboards will then only contain the local player's scores (which is kind of useless, I know :P). Problem: I have now coded both the achievements and leaderboards subsystems. The achievements subsystem has already been tested and works. I'm currently testing the leaderboards subsystem using multiple test user accounts. I loaded the test app on a device and on the simulator, logged in with two different user accounts, and performed these steps: I first used the device to upload a score. Then I ran the simulator, and the score submitted by the user on the device was shown. Which is cool. Then I used the simulator to upload a score. But on the device, still only one score is listed. I checked the Game Center app (to see if the bug lies in my code), and I got the same thing: under "All Players" there is only one score on the device, but there are two scores on the simulator. I wanted to make sure the simulator wasn't causing this, so I swapped the users on the device and the simulator, and the result was the same. In other words, the first user is oblivious to the second user's score, but the second user can see the first user's score. Then I tried with a third user. The result: the third user can only see the scores of the first user and himself. The second user still sees the scores of the first user and himself. The first user only sees his own score. Now here comes the weird part. I then made the first user and the second user befriend each other. The result: under "Friends", the first user can see the second user's score, but under "All Players", the first user's score is still the only one listed. Screenshots: The first user sees this: The second user sees this: So, is this a normal thing when using sandboxed GC accounts? Is this behavior documented somewhere by Apple?

    Read the article

  • Applying effects to an existing program that uses BasicEffect

    - by Fibericon
Using the finished product from the tutorial here: is it possible to apply the grayscale effect from "Making entire scene fade to grayscale", or would you basically have to rewrite everything? EDIT: It's doing something now, but the whole grayscale seems extremely blue; it's like I'm looking at it through dark blue sunglasses. Here's my draw function:

    protected override void Draw(GameTime gameTime)
    {
        device.SetRenderTarget(renderTarget);
        graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

        // Drawing models, bullets, etc.

        device.SetRenderTarget(null);

        spriteBatch.Begin(0, BlendState.Additive, SamplerState.PointWrap,
                          DepthStencilState.Default, RasterizerState.CullNone, grayScale);
        Texture2D temp = (Texture2D)renderTarget;
        grayScale.Parameters["coloredTexture"].SetValue(temp);
        grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"];
        foreach (EffectPass pass in grayScale.CurrentTechnique.Passes)
        {
            pass.Apply();
        }
        spriteBatch.Draw(temp,
            new Vector2(GraphicsDevice.PresentationParameters.BackBufferWidth / 2,
                        GraphicsDevice.PresentationParameters.BackBufferHeight / 2),
            null, Color.White, 0f,
            new Vector2(renderTarget.Width / 2, renderTarget.Height / 2),
            1.0f, SpriteEffects.None, 0f);
        spriteBatch.End();

        base.Draw(gameTime);
    }

Another edit: I figured out what I was doing wrong. I had BlendState.Additive in the spriteBatch.Begin() call. It should be BlendState.Opaque; otherwise it literally tries to add the blank blue image to the grayscale image.

    Read the article

  • How do I efficiently code both the client and server at the same time?

    - by liamzebedee
I'm coding my game using a client-server model. When playing singleplayer, the game starts a local server and interacts with it just like a remote server (multiplayer). I have done this to avoid writing separate singleplayer and multiplayer code. I have just started coding and have encountered a major problem. Currently I'm developing the game in Eclipse, with all the game classes organized into packages; in my server code, I just use all the classes in the client packages. The problem is that these client classes have variables that are specific to rendering, which obviously wouldn't be performed on a server. Should I create modified versions of the client classes to use in the server? Or should I just modify the client classes with a boolean to indicate whether it's the client or the server using them? Are there any other options? I just had a thought about maybe using the server class as the core class and then extending it with rendering stuff?
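One way to avoid both duplicated classes and boolean flags, sketched in Python for brevity (the class and function names are illustrative, not from any engine): keep pure simulation state in modules that both sides import, and attach rendering as a wrapper that only client code ever constructs, so the server never loads rendering classes at all.

    # shared/simulation.py -- imported by BOTH client and server.
    class Player:
        def __init__(self, x, y):
            self.x, self.y = x, y          # game state only, nothing visual

        def tick(self, dt):
            pass                            # movement rules, physics, etc.

    # client/views.py -- imported by the client ONLY.
    def draw_sprite(sprite, x, y):          # stand-in for the engine's draw call
        print(f"draw {sprite} at ({x}, {y})")

    class PlayerView:
        def __init__(self, player, sprite):
            self.player = player            # reference to the shared state
            self.sprite = sprite            # textures, animations, render-only data

        def draw(self):
            # Read-only view over simulation state; the server never sees this class.
            draw_sprite(self.sprite, self.player.x, self.player.y)

In Java terms this maps to separate packages (shared vs. client), with the server depending only on the shared package. It also answers the "extend the server class" idea: the client aggregates rendering around the shared entities instead of inheriting from them.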

    Read the article

  • How are these bullets done?

    - by Mike
I really want to know how the bullets in Radiangames Inferno are done. The bullets seem to be just billboarded particles, but I am curious about how their tails are implemented. They can curve, so they are not single billboards, and they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
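I can't say what Inferno actually does, but the standard trick for smooth, curving tails is a ribbon trail: store a short history of each bullet's recent positions and expand that polyline into a triangle strip each frame, tapering the width toward the tail. The ribbon is one continuous mesh rather than many particles, yet it follows any curve the bullet flies. A bookkeeping sketch in Python (2D for simplicity):

    import math
    from collections import deque

    class RibbonTrail:
        # Keeps a short position history and expands it into triangle-strip
        # vertices; width tapers toward the oldest point so the tail fades.
        def __init__(self, max_points=16):
            self.points = deque(maxlen=max_points)  # index 0 = oldest

        def record(self, x, y):
            self.points.append((x, y))

        def strip_vertices(self, half_width):
            pts = list(self.points)
            verts = []
            for i, (x, y) in enumerate(pts):
                # Direction estimated from the neighbouring points.
                ax, ay = pts[min(i + 1, len(pts) - 1)]
                bx, by = pts[max(i - 1, 0)]
                dx, dy = ax - bx, ay - by
                length = math.hypot(dx, dy) or 1.0
                nx, ny = -dy / length, dx / length   # perpendicular
                w = half_width * (i + 1) / len(pts)  # taper: oldest = thinnest
                verts.append((x + nx * w, y + ny * w))
                verts.append((x - nx * w, y - ny * w))
            return verts  # render as a triangle strip

Fading the alpha along the strip in the same way as the width gives the continuous glow-tail look.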

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise culled objects that are supposed to look hollow and display their inside faces only. The image below shows a CounterClockwise culled object (left) and a Clockwise culled object (right). The Clockwise culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces of Clockwise culled objects while continuing to light the outer CounterClockwise faces as normal? My lighting method is below:

    private void DeferredLighting(GameTime gameTime)
    {
        // Set the render target for the lights
        game.GraphicsDevice.SetRenderTarget(lightMap);

        // Clear the render target to (0, 0, 0, 0)
        game.GraphicsDevice.Clear(Color.Transparent);

        // Set the render states
        game.GraphicsDevice.BlendState = BlendState.Additive;
        game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
        game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

        // Set sampler state to Point as the Surface type requires it in XNA 4.0
        game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

        // Set the camera properties for all lights
        BaseLight.SetCameraProperties(game.ActiveCamera);

        // Draw the lights
        int numLights = lights.Count;
        for (int i = 0; i < numLights; ++i)
        {
            if (lights[i].Diffuse.W > 0f)
            {
                lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
            }
        }

        // Resolve the render target
        game.GraphicsDevice.SetRenderTarget(null);
    }

I have tried adjusting the render states, but no combination works for both objects.

    Read the article

  • Fog shader camera problem

    - by MaT
I'm having some difficulty with my vertex/fragment fog shader in Unity. I get a good visual result, but the problem is that the gradient is based on the camera's position: it moves as the camera moves. I don't know how to fix it. Here is the shader code:

    struct v2f
    {
        float4 pos : SV_POSITION;
        float4 grabUV : TEXCOORD0;
        float2 uv_depth : TEXCOORD1;
        float4 interpolatedRay : TEXCOORD2;
        float4 screenPos : TEXCOORD3;
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.uv_depth = v.texcoord.xy;
        o.grabUV = ComputeGrabScreenPos(o.pos);
        half index = v.vertex.z;
        o.screenPos = ComputeScreenPos(o.pos);
        o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex);
        return o;
    }

    sampler2D _GrabTexture;

    float4 frag(v2f IN) : COLOR
    {
        float3 uv = UNITY_PROJ_COORD(IN.grabUV);
        float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
        dpth = LinearEyeDepth(dpth);
        float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay);
        // Here is the problem, but how to fix it?
        float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f));
        fogVert *= fogVert;
        fogVert = (exp(-fogVert));
        return fogVert;
    }

Thanks a lot!

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask; there are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid). The size of the square's side is close to the radius at which I make objects transparent; I found 6x6 to be a good value, so that's 36 visibility points total.
2. For every visibility point on the grid, check if that point is in the player's line of sight.
3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

This algorithm works (not perfectly, but it only requires some tuning); however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
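Two engine-agnostic reductions, sketched below with a raycast(a, b) stand-in for whatever the engine provides: re-fire the rays only when the player moves onto a different tile (for a tile-based grid the answer usually cannot change in between), and stagger the remaining casts over several frames so no single frame pays for all of them. A rough Python sketch of the bookkeeping:

    class TransparencyProbe:
        # Re-runs the visibility rays only when the player changes tile,
        # and spreads them over several frames. raycast(a, b) is assumed
        # to return the set of objects hit between two points.
        def __init__(self, points_per_frame=9):
            self.last_tile = None
            self.pending = []            # visibility points still to test
            self.transparent = set()     # result being accumulated
            self.points_per_frame = points_per_frame

        def update(self, player_tile, visibility_points, camera_pos, raycast):
            if player_tile != self.last_tile:
                self.last_tile = player_tile
                self.pending = list(visibility_points)
                self.transparent = set()
            # Budget: at 9 points per frame, a 36-point grid refreshes in 4 frames.
            for point in self.pending[:self.points_per_frame]:
                self.transparent.update(raycast(camera_pos, point))
            del self.pending[:self.points_per_frame]
            return self.transparent

The one-or-two-frame lag in the transparency set is rarely visible, while the per-frame raycast cost drops by the staggering factor.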

    Read the article

  • Convert rotation from Right handed System to left handed

    - by Hector Llanos
I have Euler angles from a right-handed system that I am trying to convert to a left-handed system. All the information that I have read online says that to convert, you simply multiply the axis rotations in the correct order and it should work; in other words, Z * Y * X. When I do this, what I see in Maya and what I see in the engine still do not match up. This is what I have so far:

    static Quaternion ConvertToRightHand(Vector3 Euler)
    {
        Quaternion x = Quaternion.AngleAxis(-Euler.x, Vector3.right);
        Quaternion y = Quaternion.AngleAxis(Euler.y, Vector3.up);
        Quaternion z = Quaternion.AngleAxis(Euler.z, Vector3.forward);
        return (z * y * x);
    }

Keeping the -Euler.x helps keep the object pointing up correctly, but when I pass (0, 0, 0) to face in the -z direction, it faces in +z. Help :/

    Read the article
