Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Sprite transparency not affected in libGDX

    - by Aon GoltzCrank
    I am making a game using libGDX and the Universal Tween Engine. My problem is as follows: I have two screens so far, a splash screen with the logo and a second one which is the main menu. In the splash screen I use a SpriteBatch and a Sprite with the Texture of the image I want (which goes through some scaling). I use the Tween engine, along with a SpriteAccessor I wrote, to control the alpha of the sprite: I fade the picture in, fade it out, then change to the next screen. In the next screen I have a single sprite and a single, three-slot sprite array. In this screen I also use the Tween engine: I fade the single sprite into the screen (it's the background image), then I try, using the same method (Tween.to), to change the alpha of the sprite array (each sprite by itself). I first set it to 0 using Tween.set, then tween it from there. This didn't work, and after some tests I tried setting the alpha of a single sprite from the array to 0 directly, which didn't work either. It's as if the program is ignoring the alpha value; I even printed out the alpha value, and it says 0, but the sprite is still visible. How can I fix this, and what might be causing it?

    Read the article

  • How to model inter-entity membership in an entity-component architecture?

    - by croxis
    I'm falling in love with the simple grace of entity-component design, although I still have issues breaking from MVC and OOP practices. Some of my game entities have membership relationships with each other (e.g. a player is a member of a city, a city is a member of a nation), and I am unsure of the best way to implement this. My initial reaction is to have a MemberOfCity component that points to the appropriate city component, but components are supposed to have no references to each other. My other option is to have a System do it, but that would require the system to persist data outside of a component. Is there a clean way to do this in an entity-component design, or am I trying to use a hammer on a screw and should I use a hybrid/another approach?
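    A common compromise, sketched below in C# (the component type and ID scheme are illustrative, not from the question): components stay plain data but store an entity ID rather than an object reference, and a system resolves the link only when it needs it.

        public struct MemberOfCity
        {
            public int CityEntityId;   // plain data: an ID, not a pointer to another component
        }

        public class MembershipSystem
        {
            // The component store persists the data; the system only operates on it.
            private readonly System.Collections.Generic.Dictionary<int, MemberOfCity> members =
                new System.Collections.Generic.Dictionary<int, MemberOfCity>();

            public void Join(int entityId, int cityEntityId)
            {
                members[entityId] = new MemberOfCity { CityEntityId = cityEntityId };
            }

            public int? CityOf(int entityId)
            {
                return members.TryGetValue(entityId, out MemberOfCity m) ? m.CityEntityId : (int?)null;
            }
        }

    Because the link is just an integer, serialization and lookup stay trivial, and no component ever holds a live reference to another.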

    Read the article

  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z-axis (forward), the model appears to sink into the terrain (the original post showed screenshots), yet the Y value doesn't change. And if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);

                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);

                // Looking at the terrain (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }

            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }

        base.Draw(gameTime);
    }

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.
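    One thing that stands out in the code above (an observation, not a confirmed diagnosis): the model pass and the terrain pass each build their own view matrix aimed at a different moving target, so the two objects are effectively seen by two different cameras, and a camera that always looks at modelPosition will keep the model centered no matter how it translates. A sketch of a single shared camera with a fixed look-at point, which makes translation along Z visible as actual on-screen movement:

        // Sketch: build the camera once per frame and reuse it for every effect.
        // The fixed target (here the world origin) is an assumption for illustration.
        Matrix BuildView(Vector3 cameraPosition)
        {
            Vector3 cameraTarget = Vector3.Zero;   // a fixed scene point, not modelPosition
            return Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        }

    Assigning the same result to effect.View in both loops keeps the model and the terrain in one consistent space.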

    Read the article

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example, we have Direct3D (graphics) and its open counterpart OpenGL, and DirectSound (audio) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as DirectInput, DXGI, Direct2D and DirectWrite. Does anyone have any lists or comparisons of the DirectX components and their open counterparts?

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d web player. I have developed a project and successfully deployed it in a web app; it works with absolutely no problem on my PC. The app is to be installed on two identical machines. I have installed it on both, and it only works properly on one. On the other it fails to properly load the models and textures, so the game runs, but instead of the models I can only see black rectangles on a blue background. It has the same problem with all browsers, and I get no errors either from the player or from JavaScript. The only difference between these computers is that the one with the problem is running Windows 8.1 and the other one Windows 8. Could this be the cause of the issue? It works fine on my computer with Windows 8.1, although both of the other computers have specs that are significantly lower than mine. I have already searched everywhere, and it seems this usually has to do with the individual game, but I think it may have to do with the computer itself, because the app runs properly on the other two. The specs of the computers I'm installing the app on are: Intel Celeron 1.40 GHz, 2 GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention: I'm running Unity web player 4.3.5, and the version on the other two computers is 4.5.0.

    Read the article

  • Shooter on iOS with a visible aim line before shooting

    - by London2423
    I have two questions. I am developing a game for iOS, but I built it first on my computer so I could test it there. I was able to do most of it for PC, but I am having a very hard time with the iOS port. The problem is that I don't know how to shoot in iOS; to be more specific, how to do line rendering in iOS. This is the script I use on my computer:

        using UnityEngine;
        using System.Collections;

        public class NewBehaviourScript : MonoBehaviour
        {
            LineRenderer line;

            void Start()
            {
                line = gameObject.GetComponent<LineRenderer>();
                line.enabled = false;
            }

            void Update()
            {
                if (Input.GetButtonDown("Fire1"))
                {
                    StopCoroutine("FireLaser");
                    StartCoroutine("FireLaser");
                }
            }

            IEnumerator FireLaser()
            {
                line.enabled = true;
                while (Input.GetButton("Fire1"))
                {
                    Ray ray = new Ray(transform.position, transform.forward);
                    RaycastHit hit;
                    line.SetPosition(0, ray.origin);
                    if (Physics.Raycast(ray, out hit, 100))
                    {
                        line.SetPosition(1, hit.point);
                        if (hit.rigidbody)
                        {
                            hit.rigidbody.AddForceAtPosition(transform.forward * 5, hit.point);
                        }
                    }
                    else
                        line.SetPosition(1, ray.GetPoint(100));
                    yield return null;
                }
                line.enabled = false;
            }
        }

    Which part do I have to change for iOS? I have already wired up the touch GUI events so my player moves around in Xcode/on the iPhone, but I need some help with the shooting part. The second part of the question is: where do I have to insert or change the script so that the player first aims, sees the aim line, and then shoots? Right now the player can only shoot; it cannot aim at a game object, see the line coming out of the gun aiming at the object, and then shoot. How can I do that? Everyone tells me "line renderer", but that's what I did. Thank you.
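    For what it's worth, here is a minimal sketch of the aim-then-shoot flow using Unity's touch API (the class name and force value are illustrative; this swaps the coroutine for a per-frame update so the line is visible while the finger is down and the shot fires on release):

        using UnityEngine;

        public class TouchAimShoot : MonoBehaviour
        {
            LineRenderer line;

            void Start()
            {
                line = GetComponent<LineRenderer>();
                line.enabled = false;
            }

            void Update()
            {
                if (Input.touchCount == 0)
                {
                    line.enabled = false;   // no finger down: hide the aim line
                    return;
                }

                Touch touch = Input.GetTouch(0);
                Ray ray = new Ray(transform.position, transform.forward);
                RaycastHit hit;
                bool didHit = Physics.Raycast(ray, out hit, 100f);

                // While the finger is held, only show where the shot would go.
                line.enabled = true;
                line.SetPosition(0, ray.origin);
                line.SetPosition(1, didHit ? hit.point : ray.GetPoint(100f));

                // Fire only when the finger lifts, so the player aims first.
                if (touch.phase == TouchPhase.Ended && didHit && hit.rigidbody != null)
                    hit.rigidbody.AddForceAtPosition(transform.forward * 5f, hit.point);
            }
        }

    The same pattern works in the desktop build by substituting mouse button state for the touch phases.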

    Read the article

  • How does gluLookAt work?

    - by Chan
    From my understanding,

        gluLookAt(eye_x, eye_y, eye_z,
                  center_x, center_y, center_z,
                  up_x, up_y, up_z);

    is equivalent to:

        glRotatef(B, 0.0, 0.0, 1.0);
        glRotatef(A, wx, wy, wz);
        glTranslatef(-eye_x, -eye_y, -eye_z);

    But when I print out the modelview matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet:

        #include <stdlib.h>
        #include <stdio.h>
        #include <GL/glut.h>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        static const int Rx = 0;
        static const int Ry = 1;
        static const int Rz = 2;
        static const int Ux = 4;
        static const int Uy = 5;
        static const int Uz = 6;
        static const int Ax = 8;
        static const int Ay = 9;
        static const int Az = 10;
        static const int Tx = 12;
        static const int Ty = 13;
        static const int Tz = 14;

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 };
            glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
        }

        void displayModelviewMatrix(float MV[16])
        {
            int SPACING = 12;
            cout << left;
            cout << "\tMODELVIEW MATRIX\n";
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl;
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl;
            cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl;
            cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl;
            cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl;
            cout << "--------------------------------------------------" << endl;
            cout << endl;
        }

        void reshape(int w, int h)
        {
            float ratio = static_cast<float>(w)/h;
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, ratio, 1.0, 425.0);
        }

        void draw()
        {
            float m[16];
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            gluLookAt(
                300.0f, 0.0f, 0.0f,
                0.0f, 0.0f, 0.0f,
                0.0f, 1.0f, 0.0f
            );
            glColor3f(1.0, 0.0, 0.0);
            glutSolidCube(100.0);
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            displayModelviewMatrix(m);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(400, 400);
            glutInitWindowPosition(100, 100);
            glutCreateWindow("Demo");
            glutReshapeFunc(reshape);
            glutDisplayFunc(draw);
            init();
            glutMainLoop();
            return 0;
        }

    No matter what value I use for the eye vector (300, 0, 0 or 0, 300, 0 or 0, 0, 300), the translation vector is the same, which doesn't make any sense, because the order of the code is backwards, so glTranslatef should run first, then the two rotations. Plus, the rotation matrix is completely independent of the translation column (in the modelview matrix), so what could cause this weird behavior? Here is the output with the eye vector at (0.0f, 300.0f, 0.0f):

                MODELVIEW MATRIX
        --------------------------------------------------
        R           U           A           T
        --------------------------------------------------
        0           0           0           0
        0           0           0           0
        0           1           0           -300
        0           0           0           1
        --------------------------------------------------

    I would expect the T column to be (0, -300, 0)!
    So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org:

        void GLAPIENTRY
        gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez,
                  GLdouble centerx, GLdouble centery, GLdouble centerz,
                  GLdouble upx, GLdouble upy, GLdouble upz)
        {
            float forward[3], side[3], up[3];
            GLfloat m[4][4];

            forward[0] = centerx - eyex;
            forward[1] = centery - eyey;
            forward[2] = centerz - eyez;

            up[0] = upx;
            up[1] = upy;
            up[2] = upz;

            normalize(forward);

            /* Side = forward x up */
            cross(forward, up, side);
            normalize(side);

            /* Recompute up as: up = side x forward */
            cross(side, forward, up);

            __gluMakeIdentityf(&m[0][0]);
            m[0][0] = side[0];
            m[1][0] = side[1];
            m[2][0] = side[2];

            m[0][1] = up[0];
            m[1][1] = up[1];
            m[2][1] = up[2];

            m[0][2] = -forward[0];
            m[1][2] = -forward[1];
            m[2][2] = -forward[2];

            glMultMatrixf(&m[0][0]);
            glTranslated(-eyex, -eyey, -eyez);
        }
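    A note on what the printout actually shows (ordinary matrix algebra, not specific to this code): the Mesa implementation multiplies the rotation first and then appends glTranslated(-eye), so the modelview is the product of the two, and the translation column is the rotated eye position, not the raw eye position:

        M = R \, T(-\mathbf{e})
          = \begin{pmatrix} R_{3\times3} & -R_{3\times3}\,\mathbf{e} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}

    When the eye looks at the origin, the third row of R is e/|e|, so the T column is always (0, 0, -|e|) = (0, 0, -300), whichever axis the eye sits on; that is why the column never changes. Separately, eye = (0, 300, 0) with up = (0, 1, 0) is a degenerate input: forward is parallel to up, the cross product for side is the zero vector, and that is why the printed matrix is full of zeros.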

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2D game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector, and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only one right now), so I would expect an elliptical orbit. Instead, the orbit itself rotates (the original post included a screenshot of the precessing path). I've tried nearly circular orbits, and I've tried making the masses vastly different (by a factor of a million), but I always get this rotated orbit. Here's some D code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f = (m1 * m2) / (r ** 2)
            // a = f / m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass / rSquared;
            accelerate(delta * force * mass);
        }
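    One observation about the code above (the arithmetic is certain, though whether it is the only issue here is a guess): delta is multiplied in unnormalized, so the applied acceleration has magnitude |delta| * M / r^2 = M / r rather than M / r^2, and a non-inverse-square central force is exactly the kind that produces precessing (rotating) ellipses. The inverse-square form, written with the unnormalized offset, is:

        \mathbf{a} = \frac{GM}{r^2}\,\hat{\boldsymbol{\Delta}} = \frac{GM}{r^3}\,\boldsymbol{\Delta},
        \qquad \boldsymbol{\Delta} = \mathbf{p}_{\text{well}} - \mathbf{p}_{\text{ship}},
        \quad r = \lvert\boldsymbol{\Delta}\rvert

    In the code this amounts to dividing the applied force by one more factor of r, e.g. well.mass / (rSquared * sqrt(rSquared)).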

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are many more techniques (like Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. So far I have adapted a simple shadow mapping approach which works with spotlights, with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture (screenshot in the original post: glitchy shadow mapping). So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every use of framebufferTexture2D slows down execution, which is nicely described elsewhere) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                          this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                    gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                                       gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera. (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or if there is a way to do it with just one view matrix.)

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know if I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 range.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?
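    For reference, the standard cubemap lookup for point-light shadows (the general technique, not code from this post) uses the light-to-fragment vector both as the lookup direction and as the distance to compare, scaled by the same linear-depth constant used when the map was written:

        \mathbf{d} = \mathbf{p}_{\text{world}} - \mathbf{p}_{\text{light}},
        \qquad \text{in shadow} \iff
        k\,\lvert\mathbf{d}\rvert > \text{depth}_{\text{cube}}(\mathbf{d})

    So the vector passed to textureCube should be vWorldVertex.xyz - uLightPosition rather than the raw world position, and it does not need to be normalized, since cubemap lookups only use the direction.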

    Read the article

  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long), which can very quickly escalate the size of my final game. What's the optimal way to encode/compress speech files to keep the size minimal without introducing audio artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together in your answer, because I'm not sure which factor (or combination of the two) will give me the best results. Edit: I need this to run in Silverlight and on Android, so I'm presumably stuck with MP3 as my only option (other than uncompressed wave files).

    Read the article

  • How to create a reasonably sized urban area manually but efficiently

    - by Overv
    I have a game concept that only really works in an urban area of reasonable scale and diversity. In terms of what it should look like, think GTA; in terms of size, think more like a small neighbourhood with residents and a few local shops, perhaps a supermarket. I'm mostly experienced in programming and not at all in modelling, texturing or drawing, but I've found that SketchUp allows me to design interesting-looking buildings that I model after real-world buildings in my own neighbourhood. Designing these buildings and other objects can take from a few tens of minutes to a few hours each. My question is: what is the best approach for a one-man army like me, who can at least model buildings, to create an interesting city environment in a reasonable amount of time? My game will not be based on procedural generation; the environment will actually be modelled, like GTA cities.

    Read the article

  • Implementing Circle Physics in Java

    - by Shijima
    I am working on a simple physics-based game where two balls bounce off each other. I am following a tutorial, "2-Dimensional Elastic Collisions Without Trigonometry", for the collision reactions, and I am using Vector2 from the libGDX library to handle vectors. I am a bit confused about how to implement step 6 of the tutorial in Java. Below is my current code; please note that the code strictly follows the tutorial and there are redundant pieces that I plan to refactor later. Note: references to this refer to ball 1, and ball refers to ball 2.

        /*
         * Step 1
         *
         * Find the normal, unit normal and unit tangential vectors.
         */
        Vector2 n = new Vector2(this.position[0] - ball.position[0],
                                this.position[1] - ball.position[1]);
        Vector2 un = n.normalize();
        Vector2 ut = new Vector2(-un.y, un.x);

        /*
         * Step 2
         *
         * Create the initial (before collision) velocity vectors.
         */
        Vector2 v1 = this.velocity;
        Vector2 v2 = ball.velocity;

        /*
         * Step 3
         *
         * Resolve the velocity vectors into normal and tangential components.
         */
        float v1n = un.dot(v1);
        float v1t = ut.dot(v1);
        float v2n = un.dot(v2);
        float v2t = ut.dot(v2);

        /*
         * Step 4
         *
         * Find the new tangential velocities after the collision.
         */
        float v1tPrime = v1t;
        float v2tPrime = v2t;

        /*
         * Step 5
         *
         * Find the new normal velocities:
         * v1n' = (v1n * (m1 - m2) + 2 * m2 * v2n) / (m1 + m2)
         */
        float v1nPrime = (v1n * (this.mass - ball.mass) + 2 * ball.mass * v2n) / (this.mass + ball.mass);
        float v2nPrime = (v2n * (ball.mass - this.mass) + 2 * this.mass * v1n) / (this.mass + ball.mass);

        /*
         * Step 6
         *
         * Convert the scalar normal and tangential velocities into vectors???
         */
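    For context, the step the last comment asks about is just a scale: each primed scalar multiplies its unit vector, and the tutorial's final step sums the two to get each ball's new velocity:

        \mathbf{v}_{1n}' = v_{1n}'\,\hat{\mathbf{u}}_n,\quad
        \mathbf{v}_{1t}' = v_{1t}'\,\hat{\mathbf{u}}_t,
        \qquad
        \mathbf{v}_1' = \mathbf{v}_{1n}' + \mathbf{v}_{1t}'
        \quad\text{(and likewise for ball 2)}

    In libGDX terms that is a scale-and-add on copies of un and ut, e.g. new Vector2(un).scl(v1nPrime).add(new Vector2(ut).scl(v1tPrime)); one caveat is that Vector2 operations mutate in place, so the unit vectors need to be copied before scaling.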

    Read the article

  • Annoying flickering of vertices and edges (possible z-fighting)

    - by Belgin
    I'm trying to make a software z-buffer implementation. However, after I generate the z-buffer and proceed with the vertex culling, I get pretty severe discrepancies between the vertex depth and the depth of the buffer at the vertex's projected coordinates on the screen (i.e. zbuffer[v.xp][v.yp] != v.z, where xp and yp are the projected x and y coordinates of the vertex v), sometimes by a small fraction of a unit and sometimes by 2 or 3 units. Here's what I think is happening: each triangle's data structure holds the coefficients (a, b, c, d) of the plane defined by the triangle, computed from its three vertices via their normal:

        void computeNormal(Vertex *v1, Vertex *v2, Vertex *v3,
                           double *a, double *b, double *c)
        {
            double a1 = v1->x - v2->x;
            double a2 = v1->y - v2->y;
            double a3 = v1->z - v2->z;
            double b1 = v3->x - v2->x;
            double b2 = v3->y - v2->y;
            double b3 = v3->z - v2->z;
            *a = a2*b3 - a3*b2;
            *b = -(a1*b3 - a3*b1);
            *c = a1*b2 - a2*b1;
        }

        void computePlane(Poly *p)
        {
            double x = p->verts[0]->x;
            double y = p->verts[0]->y;
            double z = p->verts[0]->z;
            computeNormal(p->verts[0], p->verts[1], p->verts[2],
                          &p->a, &p->b, &p->c);
            p->d = p->a * x + p->b * y + p->c * z;
        }

    The z-buffer just holds the smallest depth at each xy coordinate, found by somewhat casting rays to the polygon (I haven't quite got interpolation right yet, so I'm using this slower method until I do) and determining the z coordinate from the reversed perspective projection formulas (which I got from here):

        double z = -(b*Ez*y + a*Ez*x - d*Ez) / (b*y + a*x + c*Ez - b*Ey - a*Ex);

    where x and y are the pixel's coordinates on the screen; a, b, c and d are the plane's coefficients; and Ex, Ey, Ez are the eye's (camera's) coordinates. This last formula does not accurately give the exact vertices' z coordinates at their projected x and y coordinates on the screen, probably because of some floating point inaccuracy (i.e. I've seen it return something like 3.001 when the vertex's z coordinate was actually 2.998). Here is the portion of code that hides the vertices that shouldn't be visible:

        for (i = 0; i < shape.nverts; ++i) {
            double dist = shape.verts[i].z;
            if (z_buffer[shape.verts[i].yp][shape.verts[i].xp].z < dist)
                shape.verts[i].visible = 0;
            else
                shape.verts[i].visible = 1;
        }

    How do I solve this issue?

    EDIT: I've implemented the near and far planes of the frustum, with 24-bit accuracy, and now I have some questions: Is this what I have to do in order to resolve the flickering? When I compare the z value of the vertex with the z value in the buffer, do I have to convert the vertex's z value to z' using the formula, or do I convert the value in the buffer back to the original z, and how do I do that? What are some decent values for near and far? Thanks in advance.
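    One piece of standard background that bears on the comparison question (general rasterization practice, not specific to this code): depth from a perspective projection interpolates linearly in screen space only as 1/z, so many software rasterizers store and compare 1/z (or the near/far-remapped z') consistently on both sides of the test:

        \frac{1}{z} = \lambda\,\frac{1}{z_0} + (1 - \lambda)\,\frac{1}{z_1},
        \qquad 0 \le \lambda \le 1

    Whatever space is chosen, the vertex depth and the buffer depth must be converted into the same space before the comparison; mixing raw eye-space z with values recovered through the plane formula is one place where small mismatches like 3.001 vs 2.998 can creep in.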

    Read the article

  • Creating a level editor event system

    - by Vaughan Hilts
    I'm designing a level editor for a game, and I'm trying to create a sort of 'event' system so I can chain things together. If anyone has used RPG Maker, I'm trying to do something similar to its system. Right now, I have an 'EventTemplate' class and a bunch of sub-classed 'EventNodes' which basically just contain properties of their data. Originally, the IAction and IExecute interfaces performed logic, but they were moved into a DLL to share between the two projects. Question: How can I abstract logic from data in this case? Is my model wrong? Isn't all the type casting expensive when parsing these actions all the time? Should I write a 'Processor' class to execute them all? But then these actions that can do all sorts of things need to interact with all sorts of sub-systems.
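    One shape this separation can take (a sketch with invented node types, not the poster's actual classes): nodes stay pure data, and a processor dispatches on each node's runtime type exactly once, so casts are confined to a single registration point instead of being scattered through the parsing code.

        using System;
        using System.Collections.Generic;

        abstract class EventNode { }                          // data only, no logic

        class ShowTextNode : EventNode { public string Text; }
        class GiveItemNode : EventNode { public int ItemId; }

        class EventProcessor
        {
            readonly Dictionary<Type, Action<EventNode>> handlers =
                new Dictionary<Type, Action<EventNode>>();

            // The single place a cast happens; handlers elsewhere stay typed.
            public void Register<T>(Action<T> handler) where T : EventNode
            {
                handlers[typeof(T)] = node => handler((T)node);
            }

            public void Run(IEnumerable<EventNode> chain)
            {
                foreach (EventNode node in chain)
                    handlers[node.GetType()](node);
            }
        }

    Game-side code registers handlers that talk to its own sub-systems, e.g. processor.Register<ShowTextNode>(n => dialogue.Show(n.Text)) with a hypothetical dialogue sub-system, while the editor only ever creates and serializes the node data.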

    Read the article

  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by setting the D3D_FEATURE_LEVEL_ value to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine, as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or so and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, D3DX11CompileFromFile always fails to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the shader files themselves being incompatible with the lower shader versions, but I'm not expert enough to say. My shader files are listed below.

    Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }
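    One convention worth knowing here (this is how Direct3D 11 handles 9-level targets, though whether it is the only problem in this case is a guess): the D3D11 API does not accept vs_3_0/ps_2_0 targets; shaders meant for feature level 9_x are compiled with the dedicated level-9 profiles such as "vs_4_0_level_9_3" and "ps_4_0_level_9_3". A sketch of the compile call using the managed SharpDX wrapper (an assumed dependency, shown purely for illustration):

        using SharpDX.D3DCompiler;

        static class ShaderBuild
        {
            static void Main()
            {
                // Level-9_3 profiles: the HLSL source itself stays unchanged.
                var vs = ShaderBytecode.CompileFromFile(
                    "light.hlsl", "LightVertexShader", "vs_4_0_level_9_3");
                var ps = ShaderBytecode.CompileFromFile(
                    "light.hlsl", "LightPixelShader", "ps_4_0_level_9_3");
            }
        }

    With the native D3DX11CompileFromFile call, the same profile strings go in the pProfile parameter.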

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games: something the player downloads and runs on their local computer. There are many memory editors that allow players to detect and freeze values, like the player's health. How do you prevent cheating? What strategies are effective in combating this kind of cheating? I'm looking for some good ones. Two that I use, which are mediocre, are:

    - Displaying values as a percentage instead of the raw number (e.g. 46/50 = 92% health)
    - A low-level class that holds values in an array and moves them with each change
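    A third common variant along the same lines (an illustrative sketch, and like the others only a speed bump for a determined attacker): never keep the plain value in memory at all, storing it XOR-ed against a mask that changes on every write, so scanning for the displayed number finds nothing stable:

        using System;

        public struct ObfuscatedInt
        {
            private static readonly Random Rng = new Random();

            private int mask;
            private int masked;

            public int Value
            {
                get { return masked ^ mask; }
                set
                {
                    mask = Rng.Next();     // fresh mask per write makes diff-based scans harder
                    masked = value ^ mask;
                }
            }
        }

    Usage: var health = new ObfuscatedInt { Value = 50 }; health.Value -= 4; the bytes 46 and 50 never sit in memory as-is.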

    Read the article

  • Designing rules to fight smallpox in Civ-style TBS games

    - by Williham Totland
    TL;DR: How do you design a ruleset for a Civ-style TBS game that prevents city smallpox from being a profitable or viable strategy?

    Long version: Civ-style games are pretty great. Bringing a civilization from cradle to grave is a great endeavor, and practicing diplomacy with hard-line human players is fun and challenging. In theory. In practice, however, many of these games have, especially in multiplayer, exactly one viable strategy: city smallpox, a.k.a. infinite city spread, a.k.a. covering all available space with 1-citizen cities, packed as tightly as they will go. I suppose this could count as emergent gameplay, but still, it could hardly be considered in the spirit of this class of game. The Civilization series, of course, is stuck with its more or less fixed rule sets, established with the original Civilization. Yes, there have been major changes in some respects, but the rules pertaining to city building and maintenance have stayed pretty similar. So the question, then: if you built a ruleset for a TBS from the ground up, what rules should be in place to prevent infinite city sprawl from being a viable strategy? Or should ICS be a viable strategy?

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (Pi), i = 0..N, where each Pi is linked to its direct neighbours (Pi-1 and Pi+1). The goal: perform efficient collision detection between any two non-adjacent links, (Pi, Pi+1) vs. (Pj, Pj+1). The question: every treatment of this subject highly recommends using a broad phase for collision detection and implementing it via a bounding volume hierarchy. For a chain made of Pi nodes it can look like the figure in the original post: imagine the big blue sphere containing all the links, the green spheres half of them, the red ones a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if one has to update it every frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of a collision detection broad phase in this particular case, and how can it work when computing the bounding spheres is itself a time-consuming task that has to be redone each frame update (since the geometry changes)? I think I am missing a key point: if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or do intersect them; it would be odd if this is the way it should work.
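    The usual answer to the update cost (standard BVH practice, sketched here in C# with System.Numerics, not code from the post): the tree is built once, and each frame it is only refitted bottom-up, a cheap O(n) pass, because only the sphere bounds change while the tree topology stays fixed.

        using System;
        using System.Numerics;

        class BvhNode
        {
            public BvhNode Left, Right;    // both null for a leaf
            public Vector2 Center;
            public float Radius;

            // Recompute bounds from the children; leaves are assumed to have been
            // updated directly from their segment endpoints before this call.
            public void Refit()
            {
                if (Left == null) return;
                Left.Refit();
                Right.Refit();

                Vector2 d = Right.Center - Left.Center;
                float dist = d.Length();

                if (dist + Math.Min(Left.Radius, Right.Radius)
                        <= Math.Max(Left.Radius, Right.Radius))
                {
                    // One child's sphere already contains the other's.
                    BvhNode big = Left.Radius > Right.Radius ? Left : Right;
                    Center = big.Center;
                    Radius = big.Radius;
                    return;
                }

                // Smallest sphere enclosing both children.
                Radius = (dist + Left.Radius + Right.Radius) * 0.5f;
                Center = Left.Center + d * ((Radius - Left.Radius) / dist);
            }
        }

    Overlap between sibling spheres (as in the spiral pose) is normal; the hierarchy still pays off because a whole subtree of segment pairs is skipped whenever two internal spheres fail to intersect.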

    Read the article

  • Alternatives to multiple sprite batches for achieving 2D particle system depth

    - by Ergwun
    In my 2D XNA game, I render all my sprites with a single sprite batch using SpriteSortMode.BackToFront and BlendState.AlphaBlend. I'm adding a particle system based on the App Hub particles sample. Since this uses SpriteSortMode.Deferred and BlendState.Additive, I will need two SpriteBatch.Begin/SpriteBatch.End pairs: one for 'regular' sprites and one for particles. In my top-down shooter, if I want explosions to appear under planes but above the ground, then I believe I will have to use three Begin/End pairs: first to draw everything under the explosions, then to draw the explosions, then to draw everything above the explosions. If I want particle effects at multiple different depths, then I'm going to need even more Begin/End pairs. This is all easy to code, but I'm wondering if there is an alternative way to handle this?
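    For concreteness, the three-pass version described above looks roughly like this (the Draw* helpers are placeholders for the poster's own drawing code):

        void DrawScene(SpriteBatch spriteBatch)
        {
            // Pass 1: everything below the explosions (terrain, ground detail).
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            DrawGround(spriteBatch);
            spriteBatch.End();

            // Pass 2: additive particles sandwiched between the layers.
            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
            DrawExplosions(spriteBatch);
            spriteBatch.End();

            // Pass 3: everything above the explosions (planes, HUD).
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            DrawPlanes(spriteBatch);
            spriteBatch.End();
        }

    Each extra particle depth adds one more additive pass in the same pattern, which is exactly the cost the question is asking how to avoid.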

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now I want to make the rope properly collide with the rectangles, such that the rope will not pass through a rectangle, will wrap around the rectangle, and all that good stuff. Currently, I have it set up so that no rope node can pass through a rect (successfully); however, this means a rope segment can still pass through a block (the original post illustrated this with a screenshot). So the question is: what can I do to fix this? What I have tried: Creating a rectangle between two nodes of the rope, calculating the rotation between the nodes, and getting myself a transformed rectangle; I can successfully detect a collision between rope segments and a (non-transformed) rectangle. Creating a new node or pivot point around the corner of the block, and rearranging nodes to point to the corner node; the trouble is determining which corner the rope segment is passing through. And then the current rope setup goes wonky (it's based on Verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude-8 earthquake). Among other issues that might be solvable, it's turning into a case-by-case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any; most lead to Box2D or Farseer, but I want to at least learn how it works before I hide behind an engine).

    Read the article

  • Best way to do large XNA animations?

    - by Harold
    What's the best way to have large animations in XNA 4.0? I have created a spritesheet with each sprite being 250x400 (more of an image than a sprite, but hey ho), and there are approximately 45 frames in the animation. This causes problems for XNA, as the maximum texture size for the Reach profile is 2048. I'd rather not change to HiDef, as I've heard that makes your game less compatible with some computers and systems, so does anyone have any idea what the best thing to do is? The only thing I could come up with is to have a list of textures to flick through, but that's not ideal.
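    The list-of-textures idea can be made fairly painless; here is a sketch (class and frame layout invented for illustration) that spreads the 45 frames over several Reach-sized sheets and picks the right sheet and source rectangle per frame:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class BigAnimation
        {
            const int FrameW = 250, FrameH = 400;

            readonly Texture2D[] sheets;     // each sheet stays within 2048x2048
            readonly int framesPerSheet;

            public BigAnimation(Texture2D[] sheets)
            {
                this.sheets = sheets;
                framesPerSheet = (sheets[0].Width / FrameW) * (sheets[0].Height / FrameH);
            }

            public void DrawFrame(SpriteBatch batch, int frame, Vector2 position)
            {
                Texture2D sheet = sheets[frame / framesPerSheet];
                int local = frame % framesPerSheet;
                int cols = sheet.Width / FrameW;
                Rectangle src = new Rectangle((local % cols) * FrameW,
                                              (local / cols) * FrameH,
                                              FrameW, FrameH);
                batch.Draw(sheet, position, src, Color.White);
            }
        }

    Since only one sheet is sampled per draw call, the cost over a single giant texture is just the extra content entries.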

    Read the article

  • Fastest bit-blit in C#?

    - by AttackingHobo
    I know there are Unity and XNA, which both use C#, but I don't know what else I could use. The reason I say C# is that the syntax and style are similar to AS3, which I am familiar with, and I want to choose the correct framework to start learning with. What should I use to be able to do the most bit-blit (direct pixel copy) operations per frame? EDIT: I shouldn't need to add this, but I am looking for the most objects per frame possible because I am making a few bullet-hell shmups. I need thousands and thousands of bullets, particles, and hundreds of enemies on the screen at once. I am looking for a solution that can do as many bit-blit operations per frame as possible; I am not looking for a general-purpose engine. EDIT 2: I want bit-blitting because I do not want to exclude people who have low-end video cards but fast processors from playing my games.

    Read the article

  • Getting isometric grid coordinates from standard X,Y coordinates

    - by RoryHarvey
    I'm currently trying to add sprites to an isometric Tiled TMX map using objects in cocos2d. The problem is that the X and Y metadata from a TMX object are in standard 2D format (pixels x, pixels y) instead of isometric grid X and Y format. Usually you would just divide them by the tile size, but isometric needs some sort of transform. For example, on a 64x32 isometric tile map of size 40 tiles by 40 tiles, an object at grid coordinates (20, 21) comes out as (640, 584). So the question really is: what formula gets (20, 21) from (640, 584)?
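    As general background, the standard 2:1 isometric screen-to-grid transform is the usual starting point (Tiled's exact object-placement origin and offsets may shift it, so these constants are not guaranteed to reproduce the numbers above):

        g_x = \frac{1}{2}\left(\frac{x}{w/2} + \frac{y}{h/2}\right),
        \qquad
        g_y = \frac{1}{2}\left(\frac{y}{h/2} - \frac{x}{w/2}\right),
        \qquad (w, h) = (64, 32)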

    Read the article

  • Smooth Camera Zoom Factor Change

    - by Siddharth
    In my gameplay scene the user can zoom in and out, for which I use a SmoothCamera set up in the following manner:

        public static final int CAMERA_WIDTH = 1024;
        public static final int CAMERA_HEIGHT = 600;
        public static final float MAXIMUM_VELOCITY_X = 400f;
        public static final float MAXIMUM_VELOCITY_Y = 400f;
        public static final float ZOOM_FACTOR_CHANGE = 1f;

        mSmoothCamera = new SmoothCamera(0, 0, Constants.CAMERA_WIDTH,
                Constants.CAMERA_HEIGHT, Constants.MAXIMUM_VELOCITY_X,
                Constants.MAXIMUM_VELOCITY_Y, Constants.ZOOM_FACTOR_CHANGE);
        mSmoothCamera.setBounds(0f, 0f, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT);

    But this creates a problem for me. When the user zooms in and then leaves the gameplay scene, the other scenes don't look right. I already set the zoom factor back to 1 for this purpose, but now the camera translation shows in the other scenes: the scene switch is so quick that the player can easily see the camera translating, which I don't want to show. For example, my loading text moves from bottom to top or vice versa depending on the camera movement. After the camera finishes repositioning, everything works perfectly, but how do I set the camera to its proper position immediately? If you need more detail, I can provide it.

    Read the article

  • Best way to load rigid bodies from file

    - by Mel
    I'm trying to switch to Bullet for physics simulation. Let me just say first that I am very pleased with Bullet's accuracy and performance. After messing around with it for a bit, I'm now trying to load rigid bodies from files. Most of my models are in Blender, and with some searching I was able to export them in .bullet format. However, loading the files into Bullet doesn't look like an easy task. I've come across a page that points me to a sample application that loads .bullet files, but it goes on to say that this loader is just a starting point. Is there any open-source library out there that will allow me to load rigid bodies from a file? I don't really want to spend that much time trying to create my own loader.

    Read the article
