Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • Can't get LWJGL lighting to work

    - by Zarkonnen
    I'm trying to enable lighting in lwjgl according to the method described by NeHe and this post. However, no matter what I try, all faces of my shapes always receive the same amount of light, or, in the case of a spinning shape, the amount of lighting seems to oscillate. All faces are lit up by the same amount, which changes as the pyramid rotates. Concrete example (apologies for the length): Note how all panels are always the same brightness, but the brightness varies with the pyramid's rotation. This is using lwjgl 2.8.3 on Mac OS X. package com; import com.zarkonnen.lwjgltest.Main; import org.lwjgl.opengl.Display; import org.lwjgl.opengl.DisplayMode; import org.lwjgl.opengl.GL11; import org.newdawn.slick.opengl.Texture; import org.newdawn.slick.opengl.TextureLoader; import org.lwjgl.util.glu.*; import org.lwjgl.input.Keyboard; import java.nio.FloatBuffer; import java.nio.ByteBuffer; import java.nio.ByteOrder; /** * * @author penguin */ public class main { public static void main(String[] args) { try { Display.setDisplayMode(new DisplayMode(800, 600)); Display.setTitle("3D Pyramid"); Display.create(); } catch (Exception e) { } initGL(); float rtri = 0.0f; Texture texture = null; try { texture = TextureLoader.getTexture("png", Main.class.getResourceAsStream("tex.png")); } catch (Exception ex) { ex.printStackTrace(); } while (!Display.isCloseRequested()) { // Draw a Triangle :D GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); GL11.glLoadIdentity(); GL11.glTranslatef(0.0f, 0.0f, -10.0f); GL11.glRotatef(rtri, 0.0f, 1.0f, 0.0f); texture.bind(); GL11.glBegin(GL11.GL_TRIANGLES); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, 1.0f, 0.0f); GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glTexCoord2f(1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); GL11.glBegin(GL11.GL_QUADS); GL11.glVertex3f(1.0f, -1.0f, 1.0f); GL11.glVertex3f(1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); GL11.glEnd(); Display.update(); rtri += 0.05f; // Exit-Key = ESC boolean exitPressed = Keyboard.isKeyDown(Keyboard.KEY_ESCAPE); if (exitPressed) { System.out.println("Escape was pressed!"); Display.destroy(); } } Display.destroy(); } private static void initGL() { GL11.glEnable(GL11.GL_LIGHTING); GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); GLU.gluPerspective(45.0f, ((float) 800) / ((float) 600), 0.1f, 100.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glShadeModel(GL11.GL_SMOOTH); GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); GL11.glClearDepth(1.0f); GL11.glEnable(GL11.GL_DEPTH_TEST); GL11.glDepthFunc(GL11.GL_LEQUAL); GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST); float lightAmbient[] = {0.5f, 0.5f, 0.5f, 1.0f}; // Ambient Light Values float lightDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f}; // Diffuse Light Values float 
lightPosition[] = {0.0f, 0.0f, 2.0f, 1.0f}; // Light Position ByteBuffer temp = ByteBuffer.allocateDirect(16); temp.order(ByteOrder.nativeOrder()); GL11.glLight(GL11.GL_LIGHT1, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip()); // Setup The Ambient Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip()); // Setup The Diffuse Light GL11.glLight(GL11.GL_LIGHT1, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition).flip()); // Position The Light GL11.glEnable(GL11.GL_LIGHT1); // Enable Light One } }
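    For reference, a minimal sketch (an addition, not part of the original post) of the kind of change usually needed here: fixed-function lighting shades by vertex normals, and with no glNormal3f calls every vertex keeps the default normal (0, 0, 1), which is transformed along with the modelview matrix, so every face receives the same, rotation-dependent amount of light. Supplying a normal per face, for example:
    GL11.glEnable(GL11.GL_NORMALIZE); // keep normals unit length under transforms
    GL11.glBegin(GL11.GL_TRIANGLES);
    // Front face of the pyramid: (0,1,0), (-1,-1,1), (1,-1,1); its unit normal is roughly (0, 0.447, 0.894).
    GL11.glNormal3f(0.0f, 0.4472f, 0.8944f);
    GL11.glTexCoord2f(0.0f, 1.0f);  GL11.glVertex3f( 0.0f,  1.0f, 0.0f);
    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f( 1.0f, -1.0f, 1.0f);
    // ...repeat with the matching normal before each remaining face...
    GL11.glEnd();
    lets each face be lit according to its own orientation relative to the light.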

    Read the article

  • Transformation matrix that maps a window

    - by gbhall
    I'm currently learning OpenGL at uni, and they give us questions to help us learn (these are not worth anything). However, I'm stuck on this one question and would have to travel over an hour and a half to uni for an answer. How do I do this question? Please include as many steps as you can; I want to be able to follow exactly how to do this. Find the transformation that maps a window whose lower left corner is at (1,1) and upper right corner is at (3,5) onto:
    1. The entire device screen, whose dimensions are (600, 500)
    2. A viewport with lower left corner at (100,100) and upper right corner at (400,400)
    Edit: Damn, sorry, I should have added that I am meant to find the matrix, so no code.
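    A worked sketch of the standard window-to-viewport derivation (added for illustration; only the corner values above are taken from the question): translate the window's lower-left corner to the origin, scale the window extents to the viewport extents, then translate to the viewport's lower-left corner:
    $$ M = T(v_{x\min}, v_{y\min}) \cdot S\!\left(\frac{v_{x\max}-v_{x\min}}{w_{x\max}-w_{x\min}},\ \frac{v_{y\max}-v_{y\min}}{w_{y\max}-w_{y\min}}\right) \cdot T(-w_{x\min}, -w_{y\min}) $$
    For target 1, $s_x = 600/2 = 300$ and $s_y = 500/4 = 125$; for target 2, $s_x = 300/2 = 150$ and $s_y = 300/4 = 75$, giving
    $$ M_1 = \begin{pmatrix} 300 & 0 & -300 \\ 0 & 125 & -125 \\ 0 & 0 & 1 \end{pmatrix}, \qquad M_2 = \begin{pmatrix} 150 & 0 & -50 \\ 0 & 75 & 25 \\ 0 & 0 & 1 \end{pmatrix}. $$
    As a check, $M_2$ sends $(1,1)$ to $(100,100)$ and $(3,5)$ to $(400,400)$.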

    Read the article

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    Through help on this site I was able to draw the triangles of an unrotated, unscaled, untransformed mesh created in Blender and exported to OBJ, accurately imported through Assimp and rendered in XNA Graphics. However, after applying rotation on a single axis (Z) in Blender and adding materials (I wanted to test loading of materials through Assimp), the same mesh appears incomplete. Is something wrong with my view matrix, or is it something else? This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html Camera, view and projection are defined as follows:
    cameraPos = new Vector3(0, 5, 9);
    viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0));
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f);
    Rendering is done through this code:
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);
    effect = new BasicEffect(GraphicsDevice);
    effect.VertexColorEnabled = true;
    effect.View = viewMatrix;
    effect.Projection = projectionMatrix;
    effect.World = Matrix.Identity;
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        device.SetVertexBuffer(vertexBuffer);
        device.Indices = indexBuffer;
        device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList, 0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount);
    }
    base.Draw(gameTime);

    Read the article

  • Cocos2d copied actions not responding?

    - by Stephen
    I am running an animation on two sprites like so:
    -(void) startFootballAnimation {
        CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football" frameCount:60 delay:0.005f];
        spiral = [CCAnimate actionWithAnimation:footballAnim];
        CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral];
        [self runAction:repeat];
        [secondFootball runAction:[[repeat copy] autorelease]];
    }
    The problem I am having is that when I call this method:
    - (void) slowAnimation {
        [spiral setDuration:[spiral duration] + 0.01];
    }
    it only slows down the first sprite's animation and not the second one. Do I need to do something different with copied actions to get them to react to the slowing of the animation?

    Read the article

  • XNA Skinned Animated Mesh Rendering Exported from Maya

    - by Devin Garner
    I am working on translating an old RTS game engine I wrote from DirectX9 to XNA. My old models didn't have animation and are in an old format, so I'm trying with an FBX file. I temporarily "borrowed" a model from League of Legends just to test if my rendering is working correctly. I imported the mesh/bones/skin/animation into Maya 2012 using an "unnamed" 3rd-party import tool. (Obviously I'll have to get legit models later, but I just want to test if my programming is correct.) Everything looks correct in Maya and it renders the animations flawlessly. I exported everything into a single FBX file (with only a single animation). I then tried to load this model using the example at the following site: http://create.msdn.com/en-US/education/catalog/sample/skinned_model With my exported FBX, the animation looks correct for most of the frames; however, at random times it screws up for a split second. Basically, the body/arms/head will look right, but the leg/foot will shoot out to a random point in space for a second and then go back to the normal position. The original FBX from the sample looks correct in my program. It seems odd that my model was imported into Maya wrong, since it displays fine in Maya. So I'm thinking either I'm exporting it wrong, or the sample code is bad and the model from the sample caters to the sample's bad code. I'm new to 3D programming and Maya, so chances are I'm doing something wrong in the export. I'm using mostly the defaults, but I've tried all three interpolation modes (quaternion, Euler, resample). Thanks

    Read the article

  • A*, Tile costs and heuristic; How to approach

    - by Kevin Toet
    I'm doing exercises in tile games and AI to improve my programming. I've written a highly unoptimised pathfinder that does the trick and a simple tile class. The first problem i ran into was that the heuristic was rounded to int's which resulted in very straight paths. Resorting a Euclidian Heuristic seemed to fixed it as opposed to use the Manhattan approach. The 2nd problem I ran into was when i tried added tile costs. I was hoping to use the value's of the flags that i set on the tiles but the value's were too small to make the pathfinder consider them a huge obstacle so i increased their value's but that breaks the flags a certain way and no paths were found anymore. So my questions, before posting the code, are: What am I doing wrong that the Manhatten heuristic isnt working? What ways can I store the tile costs? I was hoping to (ab)use the enum flags for this The path finder isnt considering the chance that no path is available, how do i check this? Any code optimisations are welcome as I'd love to improve my coding. public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map ) { return FindPath( startTile, endTile, map, TileFlags.WALKABLE ); } public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags ) { List<Tile> open = new List<Tile>(); List<Tile> closed = new List<Tile>(); open.Add( startTile ); Tile tileToCheck; do { tileToCheck = open[0]; closed.Add( tileToCheck ); open.Remove( tileToCheck ); for( int i = 0; i < tileToCheck.neighbors.Count; i++ ) { Tile tile = tileToCheck.neighbors[ i ]; //has the node been processed if( !closed.Contains( tile ) && ( tile.flags & acceptedFlags ) != 0 ) { //Not in the open list? if( !open.Contains( tile ) ) { //Set G int G = 10; G += tileToCheck.G; //Set Parent tile.parentX = tileToCheck.x; tile.parentY = tileToCheck.y; tile.G = G; //tile.H = Math.Abs(endTile.x - tile.x ) + Math.Abs( endTile.y - tile.y ) * 10; //TODO omg wtf and other incredible stories tile.H = Vector2.Distance( new Vector2( tile.x, tile.y ), new Vector2(endTile.x, endTile.y) ); tile.Cost = tile.G + tile.H + (int)tile.flags; //Calculate H; Manhattan style open.Add( tile ); } //Update the cost if it is else { int G = 10;//cost of going to non-diagonal tiles G += map[ tile.parentX, tile.parentY ].G; //If this path is shorter (G cost is lower) then change //the parent cell, G cost and F cost. if ( G < tile.G ) //if G cost is less, { tile.parentX = tileToCheck.x; //change the square's parent tile.parentY = tileToCheck.y; tile.G = G;//change the G cost tile.Cost = tile.G + tile.H + (int)tile.flags; // add terrain cost } } } } //Sort costs open = open.OrderBy( o => o.Cost).ToList(); } while( tileToCheck != endTile ); closed.Reverse(); List<Tile> validRoute = new List<Tile>(); Tile currentTile = closed[ 0 ]; validRoute.Add( currentTile ); do { //Look up the parent of the current cell. 
currentTile = map[ currentTile.parentX, currentTile.parentY ]; currentTile.renderer.material.color = Color.green; //Add tile to list validRoute.Add( currentTile ); } while ( currentTile != startTile ); validRoute.Reverse(); return validRoute; } And my Tile class: [Flags] public enum TileFlags: int { NONE = 0, DIRT = 1, STONE = 2, WATER = 4, BUILDING = 8, //handy WALKABLE = DIRT | STONE | NONE, endofenum } public class Tile : MonoBehaviour { //Tile Properties public int x, y; public TileFlags flags = TileFlags.DIRT; public Transform cachedTransform; //A* properties public int parentX, parentY; public int G; public float Cost; public float H; public List<Tile> neighbors = new List<Tile>(); void Awake() { cachedTransform = transform; } }

    Read the article

  • Trying to detect collision between two polygons using Separating Axis Theorem

    - by Holly
    The only collision experience i've had was with simple rectangles, i wanted to find something that would allow me to define polygonal areas for collision and have been trying to make sense of SAT using these two links Though i'm a bit iffy with the math for the most part i feel like i understand the theory! Except my implementation somewhere down the line must be off as: (excuse the hideous font) As mentioned above i have defined a CollisionPolygon class where most of my theory is implemented and then have a helper class called Vect which was meant to be for Vectors but has also been used to contain a vertex given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things but given so many axes and vectors and new math to work out as i go i'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(vertices[0]); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(vertices[i]); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if( other.x <= y || other.y >= x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
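    One calculation worth a second look (an observation offered as a sketch, not a verified fix): minMaxProjection() returns a Vect used as a (min, max) interval, and two intervals [a_min, a_max] and [b_min, b_max] fail to overlap only when one ends before the other begins, so the test needs both conditions joined with && rather than ||:
    // Sketch: interval-overlap test for the (min, max) pairs produced by minMaxProjection().
    // 'this' holds (x = min, y = max); 'other' holds (other.x = min, other.y = max).
    public boolean overlap(Vect other) {
        // The projections overlap only when each interval starts before the other one ends.
        return other.x <= y && x <= other.y;
    }
    With the original ||, at least one of the two conditions is true for any pair of intervals, so overlap() always returns true and intersects() can never report a separating axis.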

    Read the article

  • Learning to optimize with Assembly

    - by niktehpui
    I am a second-year student of Computer Games Technology. I recently finished the first prototype of my own "kind of" pathfinder. It doesn't use A*; instead it takes a geometrical/pattern-recognition approach, where the pathfinder only needs knowledge of the terrain in its view to make decisions, because I wanted an AI that could actually explore. If the terrain is already known, it will walk the shortest way easily, because the pathfinder has a memory of nodes. Anyway, my question is more general: how do I start optimizing algorithms/loops/for_each/etc. using assembly? General tips are welcome too. I am specifically looking for good books, because it is really hard to find good books on this topic. There are some small articles out there like this one, but they still aren't enough to learn how to optimize an algorithm or a game... I hope there is a good, modern book out there that I just couldn't find...

    Read the article

  • Marketing: Angry Birds - How it's done

    - by John
    Why do some apps, like Angry Birds, dominate the market while other cool/fun/addicting apps are never heard of? I'm trying to figure out the best marketing strategy, or the best way to sell an app to the mass market. Does anybody have any ideas, or things they have noticed about the marketing of major blockbuster apps like Angry Birds, that explain why they get so popular and stay at the top of the charts? Thanks for any ideas or comments...

    Read the article

  • what tool should I use for drawing 2d OpenGL shapes?

    - by Kenny Winker
    I'm working on a very simple OpenGL ES 2.0 game, and I'm not sure what tool to use to create the vertex data I need. My first thought was Adobe Illustrator, but I can't seem to google up any info on how to convert an .ai file to vertices. I'm only going to be using very simple 2D shapes, so I wonder if I need to use a 3D modelling program at all? How is this typically done when you are working with 2D non-sprite shapes?

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for unity3d web project. I want to implement something like in the great answer by DMGregory in this question. in order to achieve a final look something like this.. Its metaballs with specular and shading. The steps to make this shader are. 1. Convert the feathered blobs into a heightmap. 2. Generate a normalmap from the heightmap 3. Feed the normal map and height map into a standard unity shader, for instance transparent parallax specular. I pretty much have all the pieces I need assembled but I am new to shaders and need help putting them together I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here cus i dont know if you can access the brightness) half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } turns these blobs.. into this heightmap Also I've found some CG code that generates a normal map from a height map. The bit of code that makes the normal map from finite differences is here void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also here is the built-in transparent parallax specular shader for unity. 
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection.
    - Broadphase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    - Narrowphase: determines whether two objects collide, and sometimes how; no false positives.
    - Resolution: resolves the collision.
    The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy.
    - Hitbox intersection: an a-posteriori algorithm with the lowest complexity, but not very accurate.
    - Color intersection: hitbox intersection for each pixel; a-posteriori and pixel-perfect, but not accurate with regard to time, and higher complexity.
    - Separating axis theorem: used more often and accurate for triangles, but a-posteriori; since it can't find the contact edge, taking the last frame into account makes it more stable.
    - Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; it finds the intersection point and is even more accurate than SAT, but with more complexity.
    - Spline interpolation: a-priori, even more accurate than linear rays, and with even more complexity.
    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, when splines, and is there anything better?

    Read the article

  • What's the proper way to calculate probability for a card game?

    - by Milan Babuškov
    I'm creating AI for a card game, and I ran into a problem calculating the probability of passing/failing the hand when the AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and the AI needs to play so as not to take the hand. Assuming there are 4 cards of the suit left in the game and one is in the AI's hand, I need to calculate the probability that one of the other players will take the hand. Here's an example:
    AI player has: J
    Other 2 players have: A, K, 7
    If a single opponent has AK7 then the AI would lose. However, if one of the players has A or K without 7, the AI would survive. Now, looking at the possible distributions, I have:
    P1    P2    AI
    ---   ---   ---
    AK7         loses
    AK    7     survives
    A7    K     survives
    K7    A     survives
    A     7K    survives
    K     7A    survives
    7     KA    survives
          AK7   loses
    Looking at this, it seems that there is a 75% chance of survival. However, I skipped the permutations that mirror the ones above. It should be the same, but somehow when I write them all down, it seems that the chance is only 50%:
    P1    P2    AI
    ---   ---   ---
    AK7         loses
    A7K         loses
    K7A         loses
    KA7         loses
    7AK         loses
    7KA         loses
    AK    7     survives
    A7    K     survives
    K7    A     survives
    KA    7     survives
    7A    K     survives
    7K    A     survives
    A     K7    survives
    A     7K    survives
    K     7A    survives
    K     A7    survives
    7     AK    survives
    7     KA    survives
          AK7   loses
          A7K   loses
          K7A   loses
          KA7   loses
          7AK   loses
          7KA   loses
    12 loses, 12 survivals = 50% chance. Obviously, it should be the same (shouldn't it?) and I'm missing something in one of the ways to calculate. Which one is correct?
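    A worked check of the arithmetic (added for illustration, assuming the three outstanding cards are dealt independently and uniformly between the two opponents): each of the 3 cards goes to P1 or P2 with probability 1/2, so there are $2^3 = 8$ equally likely deals, and the AI loses only when all three land with the same opponent:
    $$ P(\text{lose}) = 2 \cdot \left(\tfrac{1}{2}\right)^3 = \tfrac{1}{4}, \qquad P(\text{survive}) = \tfrac{3}{4}. $$
    The 24-row listing weights the deals unevenly: ordering the cards within a hand does not produce a new deal, and each 3-0 split is repeated $3! = 6$ times while each 2-1 split is repeated only twice, which is what drags the apparent survival rate down to 50%. Under this model the first listing's 75% is the consistent figure.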

    Read the article

  • Handling Players, enemies and attacks in HTML5

    - by Chris Morris
    I'm building a simple (currently) game with a free-roaming player and monsters on a map built from a 2D grid. I've been looking at methods for rendering characters and enemies to the screen, and I've seen two separate methods for doing this online:
    1. Drawing the player onto the screen canvas directly and refreshing the entire screen every FPS tick.
    2. Having a separate canvas to handle the player and moving the player canvas on top of the screen canvas via absolute positioning.
    I can see some pros and cons of both methods, but what is generally the best method for doing this? I assume the second, due to not having to drain resources by refreshing the map when the user is not moving, but this type of game will generally have constant movement.

    Read the article

  • How to handle a player's level and its consequent privileges?

    - by Songo
    I'm building a game similar to Mafia Wars where a player can do tasks for his gang and gain experience, thus advancing his level. The game is built using PHP and a MySQL database. In the game I want to limit the resources allowed to a player based on his level. For example:
            | Max gold | Max army size | Max moves | ...
    Level 1 | 1000     | 100           | 10        | ...
    Level 2 | 1500     | 200           | 20        | ...
    Level 3 | 3000     | 300           | 25        | ...
    ...
    In addition, certain features of the game won't be allowed until a certain level is reached: for example, players under Level 10 can't trade in the game market, players under Level 20 can't create alliances, etc. The way I have modeled it is by implementing a very loooong ACL (Access Control List) with about 100 entries (an entry for each level). However, I think there may be a simpler approach to this, seeing that this feature has been implemented in many games before.

    Read the article

  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error: Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method: // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture : all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed ! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement #0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE){ System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth*scale, scaleHeight*scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) { callback.frameRendering(); } // update window contents Display.update(); It is crashing on this line: glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.
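    For reference, a hedged sketch of the call that usually avoids this exception in LWJGL 2: the glTexImage2D overload that takes a trailing long treats it as an offset into a bound pixel unpack buffer (PBO), so with no PBO bound the call is rejected; passing a null ByteBuffer requests an empty texture allocation instead:
    // Sketch: allocate an empty 1024x768 RGB texture without a pixel unpack buffer.
    // The (ByteBuffer) null overload means "no client data", whereas a plain 0 is read
    // as a byte offset into a (here non-existent) bound GL_PIXEL_UNPACK_BUFFER.
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 1024, 768, 0,
                      GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);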

    Read the article

  • How do I attach a model to another model on a specific bone?

    - by Mehdi Bugnard
    I'm having difficulty attaching a model to a specific "bone" of another model. I searched several forums but found nothing. I saw that many people have asked the same question, but with no real answers. Threads found:
    How to attach two XNA models together?
    How can I attach a model to the bone of another model?
    http://stackoverflow.com/questions/11391852/attach-model-xna
    But I think it is possible. Here is my example code that attaches a "cube" to the hand of my player:
    private void draw_itemActionAttached(Model modelInUse)
    {
        Matrix[] Model1TransfoMatrix = new Matrix[this.player.Model.Bones.Count];
        this.player.Model.CopyAbsoluteBoneTransformsTo(Model1TransfoMatrix);
        foreach (ModelMesh mesh in modelInUse.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                Matrix model2Transform = Matrix.CreateScale(1f) * Matrix.CreateFromYawPitchRoll(0, 0, 0);
                effect.World = model2Transform * Model1TransfoMatrix[0]; //root bone index
                effect.View = arcadia.camera.View;
                effect.Projection = arcadia.camera.Projection;
            }
            mesh.Draw();
        }
    }

    Read the article

  • Can I run into legal issues with random names?

    - by Nathan Sabruka
    I'm currently building a game whose NPCs are going to be assigned a random gender and a random name for that gender. To do this I will be using a "database" of names (actually a text file with tuples). There would also be a list of last names, which will also be added to the first name at random. My question is the following. Suppose one such random name is "George Bush", and this person has been randomly assigned the job of president. As you can see, this could easily be seen as having been "copied" from a real-life person. The main issue is this. Names will be randomly generated, yes, but the seed for random-number generation will be constant. In other words, the name of an NPC would be randomly generated, i.e. I wouldn't choose it, but it would be the same for every player. Could this get me in trouble? We cannot verify all possible names, since the number of generated NPCs could be potentially limitless (new NPCs are created whenever needed).

    Read the article

  • loading 3d model data into buffers

    - by mulletdevil
    I am using Assimp to load 3D model data. I have noticed that each loaded model is made up of different meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? From looking through the index data that is loaded, it seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX9. Thank you, Mark

    Read the article

  • Are there any alternative JS ports of Box2D?

    - by Petteri Hietavirta
    I have been thinking about creating a top-down 2D car game for HTML5. For my first game I wrote the physics and collisions myself, but for this one I would like to use a ready-made library. I found Box2D and its JS port: http://box2d-js.sourceforge.net It seems to be quite an old port, made in 2008. Is it lacking many features of the current Box2D, or does it have major issues? And are there any alternatives to it?

    Read the article

  • Trouble with touch events on iPhone

    - by MrDatabase
    I'm making a simple 2D game for iPhone. Think of the game as a ball on the screen that goes up while the user is touching the screen and falls down when the user stops touching the screen. The ball starts moving up in touchesBegan:withEvent: and starts moving down in touchesEnded:withEvent:. This works fine almost all the time. However, on occasion the ball will keep moving up after the user stops touching... or the ball will keep moving down while the user is touching. Why is this happening? Just FYI, the ball is drawn on a UIWindow. The taps are handled by a UIImageView subclass that's clearColor and takes up the entire screen. This "touchLayer" is also moved to the front of the window in the game loop. Any idea why this control scheme occasionally fails? Perhaps the touch events just aren't firing? Or they're fired out of order? Cheers!

    Read the article

  • Tiled/TMX C++ Library/Parser

    - by Ben
    Where can I find an easy-to-use and up-to-date C++ parser/library for the .tmx map format (used by the Tiled Map Editor)? EDIT: David's comment, 'Unless you want to build your game around the format of the parser..', got me thinking... So I have downloaded pugixml, which is an easy-to-use XML parser with very straightforward documentation. Together with the spec for the TMX Map Format, I think I'll give it a try myself. I'll probably compare with Cocos2d-x's CCTMXTiledMap at some point.

    Read the article

  • How can I cull non-visible isometric tiles?

    - by james
    I have a problem which I am struggling to solve. I have a large map of around 100x100 tiles which form an isometric map. The user is able to move the map around by dragging the mouse. I am trying to optimize my game to draw only the visible tiles. So far my code is like this; it appears to be OK in the x direction, but as soon as one tile goes completely above the top of the screen, the entire column disappears. I am not sure how to detect that all of the tiles in a particular column are outside the visible region.
    double maxTilesX = widthOfScreen / halfTileWidth + 4;
    double maxTilesY = heightOfScreen / halfTileHeight + 4;
    int rowStart = Math.max(0, (-xOffset / halfTileWidth));
    int colStart = Math.max(0, (-yOffset / halfTileHeight));
    rowEnd = (int) Math.min(mapSize, rowStart + maxTilesX);
    colEnd = (int) Math.min(mapSize, colStart + maxTilesY);
    EDIT - I think I have solved my problem, but perhaps not in a very efficient way. I have taken the coordinates of the center of the screen, determined which tile this corresponds to by converting the coordinates into cartesian format, and I then update the entire box around the screen.
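    For what it's worth, here is a minimal sketch, under the assumption of a diamond layout where a tile at (col, row) is drawn at screenX = (col - row) * halfTileWidth + xOffset and screenY = (col + row) * halfTileHeight + yOffset, of inverting that projection so the visible row/column range can be read off the four screen corners instead of being estimated per axis:
    static int[] screenToTile(int screenX, int screenY, int xOffset, int yOffset,
                              int halfTileWidth, int halfTileHeight) {
        // Undo the scroll offset, then undo the diamond projection.
        double dx = (screenX - xOffset) / (double) halfTileWidth;
        double dy = (screenY - yOffset) / (double) halfTileHeight;
        int col = (int) Math.floor((dx + dy) / 2); // (dx + dy) = 2 * col under the assumed projection
        int row = (int) Math.floor((dy - dx) / 2); // (dy - dx) = 2 * row under the assumed projection
        return new int[] { col, row };
    }
    Converting all four screen corners this way and clamping the min/max col and row to [0, mapSize - 1] gives a tile rectangle that covers the view, including the case where a whole column has scrolled above the top of the screen.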

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong: Here is the problematic code: std::vector<vec2> axesB = polygonB->GetAxes(); //loop over axes B for(int i = 0; i < axesB.size(); i++) { float minA,minB,maxA,maxB; polygonA->Project(axesB[i],&minA,&maxA); polygonB->Project(axesB[i],&minB,&maxB); float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB); if(intervalDistance >= 0) return false; //Collision not occurring } This function retrieves axes from the polygon: std::vector<vec2> Polygon::GetAxes() { std::vector<vec2> axes; for(int i = 0; i < verts.size(); i++) { vec2 a = verts[i]; vec2 b = verts[(i+1)%verts.size()]; vec2 edge = b-a; axes.push_back(vec2(-edge.y,edge.x).GetNormailzed()); } return axes; } This function returns the normalized vector: vec2 vec2::GetNormailzed() { float mag = sqrt( x*x + y*y ); return *this/mag; } This function projects a polygon onto an axis: void Polygon::Project(vec2* axis, float* min, float* max) { float d = axis->DotProduct(&verts[0]); float _min = d; float _max = d; for(int i = 1; i < verts.size(); i++) { d = axis->DotProduct(&verts[i]); _min = std::min(_min,d); _max = std::max(_max,d); } *min = _min; *max = _max; } This function returns the dot product of the vector with another vector. float vec2::DotProduct(vec2* other) { return (x*other->x + y*other->y); } Could anyone give me a pointer in the right direction to what could be causing this bug?

    Read the article

  • UDK ParticleSystem Problem

    - by EmAdpres
    I use the code below to create my particles (in fact, I don't know any other way):
    spawnedParticleComponents = WorldInfo.MyEmitterPool.SpawnEmitter(ParticleSystem'ParticleName', Location, Rotator);
    spawnedParticleComponents.setTranslation(newLocation);
    ...
    Unfortunately, I spawn many particles in game. When I play my game, after some time, I see the warning "Exceeded max active pooled emitters!" in the console. To solve the problem, I first tried spawnedParticleComponents.DeactivateSystem(), but it doesn't help. Then I tried WorldInfo.MyEmitterPool.ClearPoolComponents(false), but that doesn't help either. :( How can I destroy many spawned particles and avoid this warning?

    Read the article
