Search Results

  • GLSL: How Do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly:

        ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int'

    I am trying to do this:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = indexf;

    I (vainly) tried to raise the precision of the int above the float to appease the GL Gods, but no joy. Could someone please school me here? Thanks, Doug
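
    For the record, a likely one-line fix: GLSL has no implicit float-to-int conversion, but the constructor-style cast performs it explicitly (truncating toward zero). A minimal sketch:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = int(indexf); // explicit constructor-style conversion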

  • How can I implement collision detection for these tiles?

    - by Fiona
    I am wondering how this would be possible, if at all. In the image below (http://i.stack.imgur.com/d8cO3.png), the light brown tiles are ground, while the dark brown is background, so the player can pass over those tiles. Here's the loop that draws the level:

        float scale = 1f;
        for (row = 0; row < currentLevel.Rows; row++)
        {
            for (column = 0; column < currentLevel.Columns; column++)
            {
                Tile tile = (Tile)currentLevel.GetTile(row, column);
                if (tile == null) { continue; }
                Texture2D texture = tile.Texture;
                spriteBatch.Draw(texture, new Microsoft.Xna.Framework.Rectangle(
                    (int)(column * currentLevel.CellSize.X * scale),
                    (int)(row * currentLevel.CellSize.Y * scale),
                    (int)(currentLevel.CellSize.X * scale),
                    (int)(currentLevel.CellSize.Y * scale)),
                    Microsoft.Xna.Framework.Color.White);
            }
        }

    Here's what I have so far to determine where to create a Rectangle:

        Microsoft.Xna.Framework.Rectangle[,,,] groundBounds = new Microsoft.Xna.Framework.Rectangle[?, ?, ?, ?];
        int tileSize = 20;
        int screenSizeInTiles = 30;
        var tilePositions = new System.Drawing.Point[screenSizeInTiles, screenSizeInTiles];
        for (int x = 0; x < screenSizeInTiles; x++)
        {
            for (int y = 0; y < screenSizeInTiles; y++)
            {
                tilePositions[x, y] = new System.Drawing.Point(x * tileSize, y * tileSize);
                groundBounds[x, y, tileSize, tileSize] = new Microsoft.Xna.Framework.Rectangle(x, y, 20, 20);
            }
        }

    First off, I'm not sure how to initialize the array groundBounds (I don't know how big to make it). Also, I'm not entirely sure how to go about adding information to groundBounds. I want to add a Rectangle for each tile in the level. Preferably I'd only make a Rectangle for those tiles accessible by the player, and not background tiles, but that's for a different day. FYI, the map was made with a freeware program called Realm Factory.
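
    A hedged pointer: a four-dimensional array is probably not what's wanted here; one rectangle per tile needs at most a 2D array, and if only solid tiles matter, a flat list of just those tiles is simpler. A minimal sketch, assuming a hypothetical IsGround flag on Tile (not in the original code):

        // One bounds rectangle per solid tile; background tiles get no entry.
        var groundBounds = new List<Microsoft.Xna.Framework.Rectangle>();
        for (int row = 0; row < currentLevel.Rows; row++)
        {
            for (int column = 0; column < currentLevel.Columns; column++)
            {
                Tile tile = (Tile)currentLevel.GetTile(row, column);
                if (tile == null || !tile.IsGround) continue; // IsGround is assumed metadata
                groundBounds.Add(new Microsoft.Xna.Framework.Rectangle(
                    (int)(column * currentLevel.CellSize.X),
                    (int)(row * currentLevel.CellSize.Y),
                    (int)currentLevel.CellSize.X,
                    (int)currentLevel.CellSize.Y));
            }
        }
        // Collision test: playerBounds.Intersects(bounds) for each entry.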

  • How to design a replay system

    - by daddz
    So how would I design a replay system? You may know the feature from games like Warcraft 3 or StarCraft, where you can watch the game again after it has been played. You end up with a relatively small replay file. So my questions are: How do I save the data (a custom format?) while keeping the file size small? What exactly should be saved? How do I make it generic enough to be used in other games, to record a time period (and not necessarily a complete match)? How do I make forwarding and rewinding possible (WC3 couldn't rewind, as far as I remember)?
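
    One common answer, sketched for discussion: with a deterministic, fixed-timestep simulation you only need to store the initial state plus the timestamped player commands, which is why RTS replays stay so small. A minimal illustration with made-up types:

        // Hypothetical record: one entry per player command, replayed through
        // the same deterministic simulation that ran the live game.
        struct ReplayEvent
        {
            public int Tick;        // fixed-timestep frame number
            public int PlayerId;
            public byte CommandId;  // move, attack, build...
            public int TargetX, TargetY;
        }
        // Record: append an event whenever a command is issued.
        // Playback: feed events whose Tick matches the current simulation tick.
        // Fast-forward: run simulation ticks without rendering.
        // Rewind: re-simulate from the start (or from periodic state snapshots).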

  • Literature for Inverse Kinematics: Joint Limits and beyond

    - by Jeff
    Recently I've been playing around with inverse kinematics and have been pretty impressed with the results. Naturally I want to take it further, but have no clue where to start. In particular, I would like to introduce joint limits (i.e. for a prismatic joint, how far it can move; for a hinge joint, what angles it must stay between; and so on). Currently I understand how to produce the Jacobian matrix for the various joint types. I am particularly looking for literature (preferably free, and preferably easy to understand) on the various ways to implement joint limits. I would also like to find out about other ways inverse kinematics can be used.
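
    Not literature, but for orientation: the simplest scheme most introductory IK texts describe is to project each joint parameter back into its legal range after every iteration of the Jacobian solver. A sketch of that idea, with an assumed Joint type carrying value/minLimit/maxLimit fields:

        #include <algorithm>

        // After each IK iteration, clamp every joint back into its limits.
        for (Joint& j : chain)
            j.value = std::clamp(j.value, j.minLimit, j.maxLimit);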

  • Diamond-square terrain generation problem

    - by kafka
    I've implemented a diamond-square algorithm according to this article: http://www.lighthouse3d.com/opengl/terrain/index.php?mpd2 The problem is that I get these steep cliffs all over the map. It happens on the edges, when the terrain is recursively subdivided. Here is the source:

        void DiamondSquare(unsigned x1, unsigned y1, unsigned x2, unsigned y2, float range)
        {
            int c1 = (int)x2 - (int)x1;
            int c2 = (int)y2 - (int)y1;
            unsigned hx = (x2 - x1) / 2;
            unsigned hy = (y2 - y1) / 2;
            if ((c1 <= 1) || (c2 <= 1))
                return;

            // Diamond stage
            float a = m_heightmap[x1][y1];
            float b = m_heightmap[x2][y1];
            float c = m_heightmap[x1][y2];
            float d = m_heightmap[x2][y2];
            float e = (a + b + c + d) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y1 + hy] = e;

            // Square stage
            float f = (a + c + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1][y1 + hy] = f;
            float g = (a + b + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y1] = g;
            float h = (b + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x2][y1 + hy] = h;
            float i = (c + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y2] = i;

            DiamondSquare(x1, y1, x1 + hx, y1 + hy, range / 2.0); // Upper left
            DiamondSquare(x1 + hx, y1, x2, y1 + hy, range / 2.0); // Upper right
            DiamondSquare(x1, y1 + hy, x1 + hx, y2, range / 2.0); // Lower left
            DiamondSquare(x1 + hx, y1 + hy, x2, y2, range / 2.0); // Lower right
        }

    Parameters: (x1,y1),(x2,y2) are coordinates that define a region on the heightmap (default (0,0)-(128,128)); range is basically the maximum height (default 32). Help would be greatly appreciated.
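
    A hedged observation on the likely cause: because the recursion handles each quadrant independently, midpoints on a shared edge are written twice with different random offsets, and every square-stage average substitutes e for the diamond value that actually lies across the edge in the neighbouring quadrant, which produces exactly these seams. The usual remedy is to run the algorithm iteratively, one subdivision level at a time over the whole map, so each square-stage point can average its true four neighbours. A sketch of that loop structure, where diamond() and square() are hypothetical helpers that average the available neighbours (clamping at the map border):

        // Iterative diamond-square: process the whole (power-of-two) map level
        // by level, so square-stage points can read across region borders.
        for (unsigned step = mapSize; step > 1; step /= 2, range /= 2.0f) {
            unsigned half = step / 2;
            // Diamond stage for every square at this level.
            for (unsigned y = half; y < mapSize; y += step)
                for (unsigned x = half; x < mapSize; x += step)
                    diamond(x, y, half, range);   // average 4 corners + noise
            // Square stage for every diamond at this level.
            for (unsigned y = 0; y <= mapSize; y += half)
                for (unsigned x = (y / half % 2 == 0) ? half : 0; x <= mapSize; x += step)
                    square(x, y, half, range);    // average up/down/left/right + noise
        }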

  • Sprite animation in OpenGL - Some frames are being skipped

    - by Sid
    Earlier, I was facing problems implementing sprite animation in OpenGL ES. That is now sorted out. But the problem I am facing now is that some of my frames are being skipped when a bullet (a circle) strikes the square. What I need: the sprite animation should stop at the last frame without skipping any frames. What I did: a collision detection function, which works properly. PS: Everything else is working fine, but I want to implement the animation in OpenGL ONLY; Canvas won't work in my case.

    EDIT: My sprite sheet (consider the animation running from left to right, then from top to bottom):

        class FragileSquare {
            FloatBuffer fVertexBuffer, mTextureBuffer;
            ByteBuffer mColorBuff;
            ByteBuffer mIndexBuff;
            int[] textures = new int[1];
            public boolean beingHitFromBall = false;
            int numberSprites = 20;
            int columnInt = 4;        // number of columns as int
            float columnFloat = 4.0f; // number of columns as float
            float rowFloat = 5.0f;
            int oldIdx;

            public FragileSquare() {
                float vertices[] = { -1.0f,  1.0f,   // byte index 0
                                      1.0f,  1.0f,   // byte index 1
                                     -1.0f, -1.0f,   // byte index 2
                                      1.0f, -1.0f }; // byte index 3
                float textureCoord[] = { 0.0f,  0.0f,
                                         0.25f, 0.0f,
                                         0.0f,  0.20f,
                                         0.25f, 0.20f };
                byte indices[] = { 0, 1, 2,
                                   1, 2, 3 };
                ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 2 * 4); // 4 vertices, 2 coordinates (x,y), 4 bytes per float
                byteBuffer.order(ByteOrder.nativeOrder());
                fVertexBuffer = byteBuffer.asFloatBuffer();
                fVertexBuffer.put(vertices);
                fVertexBuffer.position(0);
                ByteBuffer byteBuffer2 = ByteBuffer.allocateDirect(textureCoord.length * 4);
                byteBuffer2.order(ByteOrder.nativeOrder());
                mTextureBuffer = byteBuffer2.asFloatBuffer();
                mTextureBuffer.put(textureCoord);
                mTextureBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL11.GL_CW);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(1, GL10.GL_FLOAT, 0, fVertexBuffer);
                gl.glEnable(GL10.GL_TEXTURE_2D);
                if (MyRender.flag2 == 1) { /* Collision has taken place */
                    int idx = oldIdx == (numberSprites - 1) ? (numberSprites - 1)
                            : (int)((System.currentTimeMillis() % (200 * numberSprites)) / 200);
                    gl.glMatrixMode(GL10.GL_TEXTURE);
                    gl.glTranslatef((idx % columnInt) / columnFloat, (idx / columnInt) / rowFloat, 0);
                    gl.glMatrixMode(GL10.GL_MODELVIEW);
                    oldIdx = idx;
                }
                gl.glEnable(GL10.GL_BLEND);
                gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
                gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
                gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
                gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
                gl.glFrontFace(GL11.GL_CCW);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glMatrixMode(GL10.GL_TEXTURE);
                gl.glLoadIdentity();
                gl.glMatrixMode(GL10.GL_MODELVIEW);
            }

            public void loadFragileTexture(GL10 gl, Context context, int resource) {
                Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resource);
                gl.glGenTextures(1, textures, 0);
                gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
                GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
                bitmap.recycle();
            }
        }
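
    A hedged diagnosis from the draw() code above: idx is derived from System.currentTimeMillis() % (200 * numberSprites), i.e. from wall-clock time rather than from when the collision started, so the animation begins on whatever frame the clock happens to land on and appears to skip. Recording the collision time and clamping at the last frame avoids both problems; a sketch (animStartMillis is a new field, set by the collision code):

        // Set once, at the moment the bullet hits:
        long animStartMillis = System.currentTimeMillis();

        // In draw(), derive the frame from elapsed time and clamp at the end:
        int idx = (int)((System.currentTimeMillis() - animStartMillis) / 200); // 200 ms per frame
        if (idx > numberSprites - 1) {
            idx = numberSprites - 1; // hold the last frame instead of wrapping
        }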

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulsed' jump in a platformer. If you don't know what I'm talking about, then think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? Well, the height of your jump is determined by how long you keep the jump button pressed. These characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air: you can release the button midway to the max height and the rise will stop, though with some deceleration between release and the full stop; that is why you can simply tap for a hop or hold for a long jump. What mesmerizes me is how they keep their trajectories as arcs.

    My current implementation works as follows: While the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it rises at Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that makes it cover X units before its speed reaches 0; once it does, it accelerates until its speed matches gravity again, covering another X units on the way down.

    This implementation, however, makes jumps too diagonal, and unless the avatar's horizontal speed is faster than the gravity (which would make it way too fast in my current project, where it moves at about 4 pixels per tick against a gravity of 10 pixels per tick, at a framerate of 40 FPS), it also makes them more vertical than horizontal. Those familiar with platformers will notice that an arced jump almost always lets the character jump further even when it is slower than the game's gravity, and when it doesn't, it feels very counter-intuitive if not played just right. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this and want to give it a go, please don't try to correct or enhance my implementation unless I was on the right track: build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, please keep the presentation language-agnostic: as much as I'm using the XNA framework and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers working in other languages will be interested in what we achieve here, so pseudo-code is fine. Thank you beforehand.
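
    For discussion's sake, the way most platformers handle this is to treat the jump as an instantaneous upward velocity, keep gravity on at all times, and cut the upward velocity when the button is released (or stop boosting after a time cap); the horizontal speed is untouched, which is what keeps the arc parabolic. A minimal C#-flavoured pseudo-code sketch with made-up constants:

        // Called every tick. velocityY is positive downward; constants are illustrative.
        const float Gravity = 0.5f, JumpSpeed = -10f, ReleaseDamping = 0.5f;

        void UpdateJump(bool jumpPressed, bool jumpReleasedThisTick)
        {
            if (jumpPressed && onGround)
                velocityY = JumpSpeed;            // one-off impulse at takeoff

            if (jumpReleasedThisTick && velocityY < 0)
                velocityY *= ReleaseDamping;      // cut the rise short for a hop

            velocityY += Gravity;                 // gravity is never switched off
            positionY += velocityY;               // horizontal motion handled elsewhere
        }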

  • Making a game preloader (Flash) [closed]

    - by Artemix
    Possible duplicate: How do you create a single/internal pre-loader for a Flash game written using Flex?

    Hi guys, I'm trying to make a preloader for a Flash game. Thing is, I need some advice since I've never made one. I have the game almost complete, but when I upload it to a website, for example, I get a white screen for a few seconds before I see the game. Is there a simple way, maybe using an API or something like that, to make a preloader screen? I'm using Flash Builder, FYI. Thanks!
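
    A hedged sketch of the usual approach (metadata and names from memory, so verify against the linked duplicate): compile a tiny class onto frame 1 via the [Frame(factoryClass="Preloader")] metadata tag on the main class, and have that class watch its own loaderInfo, drawing a loading bar until every byte is in before instantiating the real game:

        package {
            import flash.display.MovieClip;
            import flash.events.Event;
            import flash.events.ProgressEvent;
            import flash.utils.getDefinitionByName;

            public class Preloader extends MovieClip {
                public function Preloader() {
                    loaderInfo.addEventListener(ProgressEvent.PROGRESS, onProgress);
                    loaderInfo.addEventListener(Event.COMPLETE, onComplete);
                }
                private function onProgress(e:ProgressEvent):void {
                    var ratio:Number = e.bytesLoaded / e.bytesTotal; // drive a loading bar here
                }
                private function onComplete(e:Event):void {
                    // "Main" is the game's real document class (assumed name);
                    // looked up by name so it isn't compiled onto frame 1.
                    var MainClass:Class = Class(getDefinitionByName("Main"));
                    addChild(new MainClass());
                }
            }
        }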

  • Simultaneous Animation for a GameObject - Unity3D

    - by Fahim Ali Zain
    It's my second week with Unity. I am making a 2D game, and I have a small GameObject which should change its sprite while also following a definite path defined in animation curves. I made these two separate .anim files: since the transform animation has many keyframes, I thought it would be bad to repeat the two sprite keyframes alongside the transform keyframes. But the problem is, I can't get both working together at the same time. I don't want any blending, because the animation is already timed well. I also tried deleting the sprite-change animation and changing the SpriteRenderer.sprite property from a script in Update(), but that works only when the Animator component on the GameObject is disabled. Any solutions? :)

  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation but am having some issues. Each joint is represented by a quaternion, and there is no translation component. When I try to slerp between the orientations at two keyframes, I get a very wacky animation. I know my skinning equation is right, because the animation is perfectly fine whenever it lands directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there are any problems with the actual slerp implementation. There's really only one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind-pose matrix and the world-space transformation matrix at my disposal if those are needed.
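
    One hedged guess worth checking before blaming the space: joint-local space is in fact the right place to slerp, but quaternions double-cover rotations (q and -q encode the same orientation), and interpolating between keyframes whose quaternions lie on opposite hemispheres produces exactly this kind of wild spin. Negating one side when the dot product is negative fixes it; a sketch with glm:

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Joint-local space is fine; the classic gotcha is the sign.
        glm::quat interpolateKeyframes(glm::quat a, glm::quat b, float t)
        {
            if (glm::dot(a, b) < 0.0f) // q and -q are the same rotation...
                b = -b;                // ...so take the shorter arc explicitly
            return glm::normalize(glm::mix(a, b, t)); // mix() slerps quaternions
        }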

  • Collision detection when pathfinding with pathnodes, UDK

    - by Dave Voyles
    I'm trying to create a class that allows my AIController to pathfind using pathnodes (NOT NavMeshes). It does a swell job of going from point to point in a set order (although I would like it to be a random patrol at some point), but it gets caught up on collision from time to time. I.e. he'll walk the same set path, and when he runs into the blocks in the middle of the map he continues to rub against them until they end, then continues on his merry way to the next path node. How can I prevent this from happening, or at least have him move away from the wall if he does a trace and detects that it is there? It looks like I need to use MoveToward() instead of MoveTo(), as MoveToward allows the pawn to adjust its course during movement. I'm just not sure how to use its parameters. Mougli has a decent tutorial on it, but I can't seem to get it to work correctly with my pathnode array.

        class PathfindingAIController extends UDKBot;

        var array<PathNode> Waypoints;
        var int _PathNode; // declare it at the start so you can use it throughout the script
        var int CloseEnough;

        simulated function PostBeginPlay()
        {
            local PathNode Current;

            super.PostBeginPlay();
            // Add the pathnodes to the array.
            foreach WorldInfo.AllActors(class'Pathnode', Current)
            {
                Waypoints.AddItem(Current);
            }
        }

        simulated function Tick(float DeltaTime)
        {
            local int Distance;
            local Rotator DesiredRotation;

            super.Tick(DeltaTime);
            if (Pawn != None)
            {
                // Smoothly rotate the pawn towards the focal point.
                DesiredRotation = Rotator(GetFocalPoint() - Pawn.Location);
                Pawn.FaceRotation(RLerp(Pawn.Rotation, DesiredRotation, 3.125f * DeltaTime, true), DeltaTime);
            }
            Distance = VSize2D(Pawn.Location - Waypoints[_PathNode].Location);
            if (Distance <= CloseEnough)
            {
                _PathNode++;
            }
            if (_PathNode >= Waypoints.Length)
            {
                _PathNode = 0;
            }
            GoToState('Pathfinding');
        }

        auto state Pathfinding
        {
        Begin:
            if (Waypoints[_PathNode] != None) // make sure there is a pathnode to move to
            {
                MoveTo(Waypoints[_PathNode].Location); // move to it
                `log("STATE: Pathfinding");
            }
        }

        DefaultProperties
        {
            CloseEnough=400
            bIsplayer=True
        }
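
    On the MoveToward question, a hedged sketch (signature from memory, so treat it as an assumption): MoveToward is a latent function, so it must run from state code, and it takes the target Actor itself rather than a location, which is what lets the pawn adjust course mid-move. The Pathfinding state might look like:

        state Pathfinding
        {
        Begin:
            if (Waypoints[_PathNode] != None)
            {
                // Latent: re-evaluates the route while moving, unlike MoveTo,
                // which heads for a fixed point. Extra args (ViewFocus,
                // DestinationOffset, ...) are optional.
                MoveToward(Waypoints[_PathNode]);
            }
            Goto('Begin');
        }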

  • Pixel Shader Giving Black output

    - by Yashwinder
    I am coding in C# using Windows Forms and the SlimDX API to show the effect of a pixel shader. When I set the pixel shader, I get a black output screen, but if I do not use the pixel shader, I get my image rendered on the screen. I have the following C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Forms;
        using System.Runtime.InteropServices;
        using SlimDX.Direct3D9;
        using SlimDX;
        using SlimDX.Windows;
        using System.Drawing;
        using System.Threading;

        namespace WindowsFormsApplication1
        {
            // Vertex structure.
            [StructLayout(LayoutKind.Sequential)]
            struct Vertex
            {
                public Vector3 Position;
                public float Tu;
                public float Tv;

                public static int SizeBytes
                {
                    get { return Marshal.SizeOf(typeof(Vertex)); }
                }

                public static VertexFormat Format
                {
                    get { return VertexFormat.Position | VertexFormat.Texture1; }
                }
            }

            static class Program
            {
                public static Device D3DDevice;      // Direct3D device.
                public static VertexBuffer Vertices; // Vertex buffer object used to hold vertices.
                public static Texture Image;         // Texture object to hold the image loaded from a file.
                public static int time;              // Used for rotation calculations.
                public static float angle;           // Angle of rotation.
                public static Form1 Window = new Form1();
                public static string filepath;
                static VertexShader vertexShader = null;
                static ConstantTable constantTable = null;
                static ImageInformation info;

                [STAThread]
                static void Main()
                {
                    filepath = "C:\\Users\\Public\\Pictures\\Sample Pictures\\Garden.jpg";
                    info = new ImageInformation();
                    info = ImageInformation.FromFile(filepath);
                    PresentParameters presentParams = new PresentParameters();

                    // Below are the required bare minimum, needed to initialize the D3D device.
                    presentParams.BackBufferHeight = info.Height;     // BackBufferHeight, set to the Window's height.
                    presentParams.BackBufferWidth = info.Width + 200; // BackBufferWidth, set to the Window's width.
                    presentParams.Windowed = true;
                    presentParams.DeviceWindowHandle = Window.panel2.Handle; // DeviceWindowHandle, set to the Window's handle.

                    // Create the device.
                    D3DDevice = new Device(new Direct3D(), 0, DeviceType.Hardware, Window.Handle,
                        CreateFlags.HardwareVertexProcessing, presentParams);

                    // Create the vertex buffer and fill with the triangle vertices. (Non-indexed)
                    // Remember 3 vertices for a triangle, 2 tris per quad = 6.
                    Vertices = new VertexBuffer(D3DDevice, 6 * Vertex.SizeBytes, Usage.WriteOnly,
                        VertexFormat.None, Pool.Managed);
                    DataStream stream = Vertices.Lock(0, 0, LockFlags.None);
                    stream.WriteRange(BuildVertexData());
                    Vertices.Unlock();

                    // Create the texture.
                    Image = Texture.FromFile(D3DDevice, filepath);

                    // Turn off culling, so we see the front and back of the triangle
                    D3DDevice.SetRenderState(RenderState.CullMode, Cull.None);
                    // Turn off lighting
                    D3DDevice.SetRenderState(RenderState.Lighting, false);

                    ShaderBytecode sbcv = ShaderBytecode.CompileFromFile(
                        "C:\\Users\\yashwinder singh\\Desktop\\vertexShader.vs",
                        "vs_main", "vs_1_1", ShaderFlags.None);
                    constantTable = sbcv.ConstantTable;
                    vertexShader = new VertexShader(D3DDevice, sbcv);

                    ShaderBytecode sbc = ShaderBytecode.CompileFromFile(
                        "C:\\Users\\yashwinder singh\\Desktop\\pixelShader.txt",
                        "ps_main", "ps_3_0", ShaderFlags.None);
                    PixelShader ps = new PixelShader(D3DDevice, sbc);

                    VertexDeclaration vertexDecl = new VertexDeclaration(D3DDevice, new[] {
                        new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default,
                            DeclarationUsage.PositionTransformed, 0),
                        new VertexElement(0, 12, DeclarationType.Float2, DeclarationMethod.Default,
                            DeclarationUsage.TextureCoordinate, 0),
                        VertexElement.VertexDeclarationEnd
                    });

                    Application.EnableVisualStyles();
                    MessagePump.Run(Window, () =>
                    {
                        // Clear the backbuffer to a black color.
                        D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);

                        // Begin the scene.
                        D3DDevice.BeginScene();

                        // Setup the world, view and projection matrices.
                        //D3DDevice.VertexShader = vertexShader;
                        //D3DDevice.PixelShader = ps;

                        // Render the vertex buffer.
                        D3DDevice.SetStreamSource(0, Vertices, 0, Vertex.SizeBytes);
                        D3DDevice.VertexFormat = Vertex.Format;

                        // Setup our texture. Using Textures introduces the texture stage states,
                        // which govern how Textures get blended together (in the case of multiple
                        // Textures) and lighting information.
                        D3DDevice.SetTexture(0, Image);

                        // Now drawing 2 triangles, for a quad.
                        D3DDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);

                        // End the scene.
                        D3DDevice.EndScene();

                        // Present the backbuffer contents to the screen.
                        D3DDevice.Present();
                    });

                    if (Image != null) Image.Dispose();
                    if (Vertices != null) Vertices.Dispose();
                    if (D3DDevice != null) D3DDevice.Dispose();
                }

                private static Vertex[] BuildVertexData()
                {
                    Vertex[] vertexData = new Vertex[6];
                    vertexData[0].Position = new Vector3(-1.0f,  1.0f, 0.0f); vertexData[0].Tu = 0.0f; vertexData[0].Tv = 0.0f;
                    vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].Tu = 0.0f; vertexData[1].Tv = 1.0f;
                    vertexData[2].Position = new Vector3( 1.0f,  1.0f, 0.0f); vertexData[2].Tu = 1.0f; vertexData[2].Tv = 0.0f;
                    vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].Tu = 0.0f; vertexData[3].Tv = 1.0f;
                    vertexData[4].Position = new Vector3( 1.0f, -1.0f, 0.0f); vertexData[4].Tu = 1.0f; vertexData[4].Tv = 1.0f;
                    vertexData[5].Position = new Vector3( 1.0f,  1.0f, 0.0f); vertexData[5].Tu = 1.0f; vertexData[5].Tv = 0.0f;
                    return vertexData;
                }
            }
        }

    And my pixel shader and vertex shader code are as follows:

        // Pixel shader input structure
        struct PS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Pixel shader output structure
        struct PS_OUTPUT
        {
            float4 Color : COLOR0;
        };

        // Global variables
        sampler2D Tex0;

        // Name: Simple Pixel Shader
        // Type: Pixel shader
        // Desc: Fetch texture and blend with constant color
        PS_OUTPUT ps_main( in PS_INPUT In )
        {
            PS_OUTPUT Out;                            // create an output pixel
            Out.Color = tex2D(Tex0, In.Texture);      // do a texture lookup
            Out.Color *= float4(0.9f, 0.8f, 0.0f, 1); // do a simple effect
            return Out;                               // return output pixel
        }

        // Vertex shader input structure
        struct VS_INPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Vertex shader output structure
        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float2 Texture  : TEXCOORD0;
        };

        // Global variables
        float4x4 WorldViewProj;

        // Name: Simple Vertex Shader
        // Type: Vertex shader
        // Desc: Vertex transformation and texture coord pass-through
        VS_OUTPUT vs_main( in VS_INPUT In )
        {
            VS_OUTPUT Out;                                  // create an output vertex
            Out.Position = mul(In.Position, WorldViewProj); // apply vertex transformation
            Out.Texture = In.Texture;                       // copy original texcoords
            return Out;                                     // return output vertex
        }
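
    Some hedged observations from the listing: the shader assignments in the render loop are commented out, the vertex declaration is created but never bound (and declares PositionTransformed while the shader input expects POSITION), and vs_main multiplies by WorldViewProj, which is never set through the constant table; an uninitialised constant yields degenerate positions, i.e. a black screen. A sketch of the missing pieces, with SlimDX member names from memory (verify locally):

        // Assumed SlimDX API names -- treat as a sketch, not gospel.
        D3DDevice.VertexShader = vertexShader;
        D3DDevice.PixelShader = ps;
        D3DDevice.VertexDeclaration = vertexDecl; // bind the declaration too

        // Initialise the shader constant the vertex shader actually reads.
        EffectHandle h = constantTable.GetConstant(null, "WorldViewProj");
        constantTable.SetValue(D3DDevice, h, Matrix.Identity); // or a real WVP matrix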

  • Is dynamic casting Entities A good design?

    - by Milo
    For my game, everything inherits from Entity; then other things like Player, PhysicsObject, etc. inherit from Entity. The physics engine sends collision callbacks, which pass an Entity* for the B that A collided with. Then, let's say A is a Bullet: A tries to cast the entity to a Player, and if the cast succeeds, it reduces the player's health. Is this a good design? The problem I have with a message system is that I'd need messages for everything, like:

        entity.sendMessage(SET_PLAYER_HEALTH, 16);

    So that's why I think casting is cleaner.
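
    For comparison, the usual alternative to both downcasting and a message enum is a virtual hook on Entity, so each type reacts to a hit in its own override; a minimal sketch:

        class Entity {
        public:
            virtual ~Entity() = default;
            virtual void onHit(int damage) {} // default: ignore
        };

        class Player : public Entity {
        public:
            void onHit(int damage) override { health -= damage; }
        private:
            int health = 100;
        };

        // In the bullet's collision callback -- no cast, no message constants:
        void onCollision(Entity* other) { other->onHit(16); }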

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map back with something like

        glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    write it to a 2D texture as usual,

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure this is the proper way to do it. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
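
    One hedged pointer: the glReadPixels/glTexImage2D round trip forces a GPU-to-CPU-to-GPU copy every frame. The standard approach is to make the FBO's depth attachment a texture in the first place, so the raycasting pass writes straight into something the second shader can sample:

        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);   // no CPU data: stays on the GPU
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0); // pass 1 writes depth here

        // Pass 2: bind depthTex to a texture unit and read it via a sampler2D.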

  • 2.5D action RPG game

    - by Phorden
    I want to make a 2.5D action RPG in the next, say, five years. I need to learn a language first, and I have started with C#. I haven't gotten too far into learning it, and I would like advice on the best way to approach making a game like this in the long run. Work with XNA Game Studio, or stop and learn C++ and the UDK? Or maybe there is another good way to approach this. I want to learn programming, so just using a visual editor without learning to code is not the way I want to go. I also don't want to write my game engine from scratch. I'm all ears for advice.

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointer to vertex & texture array
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x=0; x<[_map getWidth]; x++) {
                for (int y=0; y<[_map getHeight]; y++) {
                    GLint v[] = {
                        16*x,    16*y,
                        16*x+16, 16*y,
                        16*x+16, 16*y+16,
                        16*x,    16*y+16
                    };
                    t = [_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton, only loading a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = {
                16*xpos,    16*ypos,
                16*xpos+16, 16*ypos,
                16*xpos+16, 16*ypos+16,
                16*xpos,    16*ypos+16
            };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t = [_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is drawn), but the following call to all objects ONLY draws the objects; the rest of the scene gets black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks. PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.
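
    A hedged observation, which may not be the whole story: the init code enables GL_BLEND but never chooses a blend function, and OpenGL's default (GL_ONE, GL_ZERO) simply replaces the destination, so whatever the second draw call touches wipes out the map underneath it. If the object textures carry alpha, the conventional fix is one extra line at setup:

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // composite sprites over the map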

  • Allocating Entities within an Entity System

    - by miguel.martin
    I'm quite unsure how I should allocate/represent my entities within my entity system. I have various options, but most of them seem to have cons associated with them. In all cases entities are represented by an ID (integer), and possibly have a wrapper class associated with them. This wrapper class has methods to add/remove components to/from the entity.

    Before I mention the options, here is the basic structure of my entity system: an Entity is an object that describes an object within the game; a Component is used to store data for the entity; a System contains entities with specific components and is used to update them; a World contains the entities and systems, can create/destroy entities, and can have systems added to and removed from it.

    Here are the options I have thought of.

    Option 1: Do not store the Entity wrapper classes, and just store the next ID / deleted IDs. In other words, entities are returned by value, like so:

        Entity entity = world.createEntity();

    This is much like entityx, except I see some flaws in this design. Cons: there can be duplicate entity wrapper objects (as the copy constructor has to be implemented, and systems need to contain entities); if an Entity is destroyed, the duplicate wrapper objects will not have an updated value.

    Option 2: Store the entity wrapper classes within an object pool, i.e. entities are returned by pointer/reference, like so:

        Entity& e = world.createEntity();

    Cons: if there are duplicate entities, then when an entity is destroyed, the same entity object may be reused to allocate another entity.

    Option 3: Use raw IDs, and forget about the wrapper entity classes. The downfall to this, I think, is the syntax that will be required. I'm thinking about doing this as it seems the simplest and easiest to implement, but I'm quite unsure about it because of the syntax. I.e. to add a component with this design, it would look like:

        Entity e = world.createEntity();
        world.addComponent<Position>(e, 0, 3);

    as opposed to this:

        Entity e = world.createEntity();
        e.addComponent<Position>(0, 3);

    Cons: syntax; duplicate IDs.
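
    One technique worth mentioning, since it addresses the cons of both Option 1 and Option 2: generational indices. The ID packs an array slot plus a generation counter that is bumped on destruction, so stale handles (including copies of wrapper objects) can be detected instead of silently pointing at a recycled entity. A sketch:

        #include <cstdint>
        #include <vector>

        struct EntityId { uint32_t index; uint32_t generation; };

        class World {
        public:
            EntityId create() {
                uint32_t i = nextFreeSlot();      // reuse a freed slot or append
                return { i, generations[i] };
            }
            void destroy(EntityId id) {
                if (isAlive(id)) { ++generations[id.index]; freeList.push_back(id.index); }
            }
            bool isAlive(EntityId id) const {     // stale copies fail this check
                return generations[id.index] == id.generation;
            }
        private:
            uint32_t nextFreeSlot() {
                if (!freeList.empty()) { uint32_t i = freeList.back(); freeList.pop_back(); return i; }
                generations.push_back(0);
                return (uint32_t)generations.size() - 1;
            }
            std::vector<uint32_t> generations;
            std::vector<uint32_t> freeList;
        };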

  • LWJGL Text Rendering

    - by Trixmix
    Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

        Font f = new Font("Times New Roman", Font.BOLD, 18);
        font = new UnicodeFont(f);
        font.getEffects().add(new ColorEffect(Color.white));
        font.addAsciiGlyphs();
        try {
            font.loadGlyphs();
        } catch (SlickException e) {
            e.printStackTrace();
        }

    Then I use font.drawString to write onto the screen. This is a quick, easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time, 1-3 seconds, so whenever I want to change a color or font type I have to wait 1-3 seconds. That means I cannot do it while rendering (i.e. I can't have differently colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I could drop the library entirely and draw text some other way... If you have an answer, please leave a quick, short example. Thanks!
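
    A hedged suggestion, from memory of the Slick API: loadGlyphs() is the expensive part, so do it once per font at load time, never per color change; Slick's drawString takes a color argument at draw time, so one loaded font (with white glyphs, which tint cleanly) can render any number of colors. A sketch (note the two Color types: java.awt.Color for the effect, org.newdawn.slick.Color for drawing):

        // Load once, at startup (the slow part):
        UnicodeFont font = new UnicodeFont(new Font("Times New Roman", Font.BOLD, 18));
        font.getEffects().add(new ColorEffect(java.awt.Color.white));
        font.addAsciiGlyphs();
        font.loadGlyphs();

        // Per frame: no reloads, color chosen per call.
        font.drawString(20, 20, "Health", org.newdawn.slick.Color.red);
        font.drawString(20, 40, "Mana", org.newdawn.slick.Color.blue);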

  • How to optimise AndEngine's PathModifier (with singleton or pool)?

    - by Casla
    I am trying to build a game where the character finds and follows a new path when a new destination is issued by the player, kind of like how units in RTS games work. This is done on a TMX map, and I am using the A* path finder utilities in AndEngine to do this. David helped me with that part: "How can I change the path a sprite is following in real time?"

    At the moment, every time a new path is issued, I have to abandon the existing PathModifier and Path instances and create new ones, and from what I have read so far, creating new objects when you could reuse existing ones is a big waste for mobile applications. This is how I have coded it at the moment:

        private void loadPathFound() {
            if (mAStarPath != null) {
                modifierPath = new org.andengine.entity.modifier.PathModifier.Path(mAStarPath.getLength());
                /* Replace the first node in the path with the player's current position. */
                modifierPath.to(player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_X] - 12,
                                player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_Y] - 31);
                for (int i = 1; i < mAStarPath.getLength(); i++) {
                    modifierPath.to(mAStarPath.getX(i) * TILE_WIDTH, mAStarPath.getY(i) * TILE_HEIGHT);
                    /* Pass in a duration that depends on the length of the path,
                       so the animation has a constant speed for every step. */
                    player.registerEntityModifier(new PathModifier(modifierPath.getLength() / 100,
                            modifierPath, null, mIPathModifierListener));
                }
            }
        }

    The ideal implementation would be to always have just one PathModifier object and simply reset the destination of the path, but I don't know how to apply the singleton pattern to AndEngine's PathModifier: there is no method to reset the attributes of the Path or the PathModifier. So, without rewriting the PathModifier and Path classes, or using reflection, is there any other way to implement a singleton PathModifier? Thanks for your help.
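
    A hedged observation before any pooling: in the listing above, player.registerEntityModifier(...) sits inside the for loop, so one PathModifier is allocated and registered per path node rather than one per path, and the previous path's modifier is never unregistered. Hoisting the registration out of the loop and dropping the old modifier removes most of the garbage on its own; a sketch (currentModifier is a new field):

        // Build the whole path first, register exactly one modifier afterwards.
        for (int i = 1; i < mAStarPath.getLength(); i++) {
            modifierPath.to(mAStarPath.getX(i) * TILE_WIDTH, mAStarPath.getY(i) * TILE_HEIGHT);
        }
        if (currentModifier != null) {
            player.unregisterEntityModifier(currentModifier); // drop the abandoned path
        }
        currentModifier = new PathModifier(modifierPath.getLength() / 100f, modifierPath,
                null, mIPathModifierListener);
        player.registerEntityModifier(currentModifier);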

  • Animating Tile with Blitting taking up Memory.

    - by Kid
    I am trying to animate a specific tile in my 2D array, using blitting. The animation consists of three different 16x16 sprites in a tile sheet, and that works perfectly with the code below. BUT it's causing a memory leak: every second, the Flash player takes up another ~140 KB of memory. What part of the following code could be causing the leak?

        // The Rectangle finds where on the 2D array we should clear the pixels;
        // fillRect follows up by setting alpha 0 at that spot before we copy in the next sprite.
        // tileType holds what kind of tile the next tile in the animation is (from the tile sheet).
        // drawTile() gets a sprite from the tile sheet and copyPixels it into the right position on the canvas.
        public function animateSprite():void {
            tileGround.bitmapData.lock();
            if (anmArray[0].tileType > 42) {
                anmArray[0].tileType = 40;
                frameCount = 0;
            }
            var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts);
            tileGround.bitmapData.fillRect(rect, 0);
            anmArray[0].tileType = 40 + frameCount;
            drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile);
            frameCount++;
            tileGround.bitmapData.unlock();
        }

        public function drawTile(spriteType:int, xt:int, yt:int):void {
            var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
            var rec:Rectangle = new Rectangle(0, 0, ts, ts);
            var pt:Point = new Point(xt * ts, yt * ts);
            tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
        }

        public function getImageFromSheet(spriteType:int, size:int):Bitmap {
            var sheetColumns:int = tSheet.width / ts;
            var col:int = spriteType % sheetColumns;
            var row:int = Math.floor(spriteType / sheetColumns);
            var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size);
            var pt:Point = new Point(0, 0);
            var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0));
            correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true);
            return correctTile;
        }
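
    A hedged reading of the leak: getImageFromSheet() allocates a fresh Bitmap and BitmapData on every animation tick, and BitmapData holds its pixel memory until dispose() is called. Extracting the three frames once, up front, and reusing them makes the per-frame path allocation-free; a sketch (frame types 40-42, as the tileType logic above implies):

        // Build once at startup: one BitmapData per animation frame.
        private var frameCache:Vector.<BitmapData> = new Vector.<BitmapData>();

        private function cacheFrames():void {
            for (var type:int = 40; type <= 42; type++) {
                frameCache.push(getImageFromSheet(type, ts).bitmapData);
            }
        }

        // Per frame: copy from the cache; nothing new is allocated.
        public function drawTile(spriteType:int, xt:int, yt:int):void {
            var rec:Rectangle = new Rectangle(0, 0, ts, ts);
            var pt:Point = new Point(xt * ts, yt * ts);
            tileGround.bitmapData.copyPixels(frameCache[spriteType - 40], rec, pt, null, null, true);
        }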

  • Facebook Game Rejected: "Your app icon must not overlap with content in your cover image"

    - by peterwilli
    Sorry if this isn't the right Stack Exchange site to ask this; it was really hard to determine. My Facebook game just got rejected for two reasons. The first I fixed nicely, and it is irrelevant here, but the second I just can't figure out what they mean, and I was hoping someone else has hit the same issue and knows. The error is the one in the title: "Your app icon must not overlap with content in your cover image" (you can ignore the error under "Banners"). All I know is that the rejection has something to do with the cover image, not the icons or the screenshots. Please let me know what to do to get approved. Thanks a lot!

  • What math should all game programmers know?

    - by Tetrad
    Simple enough question: What math should all game programmers have a firm grasp of in order to be successful? I'm not specifically talking about rendering math or anything in the niche areas of game programming; I mean just the things that even general game programmers should know about, and if they don't, they'll probably find useful. Note: as there is no one correct answer, this question (and its answers) is a community wiki. Also, if you would like fancy LaTeX math equations, feel free to use http://mathurl.com/.

  • For normal mapping, why can we not simply add the tangent normal to the surface normal?

    - by sebf
    I am looking at implementing bump mapping (which, in all implementations I have seen, is really normal mapping), and so far everything I have read says that to do this, we create a matrix to convert from world space to tangent space, in order to transform the light and eye direction vectors into tangent space, so that the vectors from the normal map may be used directly in place of those passed through from the vertex shader.

    What I do not understand, though, is why we cannot just use the normalised sum of the sampled normal vector and the surface normal (assuming we already transform and pass through the surface normal for the existing lighting functions). Take the diagram below: the normal is simply the deviation from the 'reference normal' for any given coordinate system, correct? And transforming the surface normal of a mapped surface from world space to tangent space makes it equivalent to the tangent-space 'reference normal', no? If so, why do we transform all lighting vectors into tangent space, instead of simply transforming the sampled normal once in the pixel shader?
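
    For concreteness, the reason summing fails: the sampled normal is expressed in the surface's own tangent-space basis, so it must be rotated by that basis (the TBN matrix), not added to a world-space vector; addition treats two vectors from different coordinate systems as if they shared one. The alternative the question is reaching for does exist, though: transform the sampled normal once per fragment and light in world space. A sketch:

        // Fragment shader: rotate the tangent-space sample into world space once,
        // then do all lighting in world space.
        vec3 n_t = normalize(texture2D(normalMap, uv).xyz * 2.0 - 1.0); // [0,1] -> [-1,1]
        mat3 TBN = mat3(normalize(tangent), normalize(bitangent), normalize(normal));
        vec3 n_w = normalize(TBN * n_t); // NOT normalize(normal + n_t)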

  • 3D picking lwjgl

    - by Wirde
    I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.) I posted this at Stack Overflow at first, but after researching my problem some more I found this neat site and thought that you guys might be more qualified to answer this question. This is what the code looks like:

        if (Mouse.getEventButton() == 1) {
            if (!Mouse.getEventButtonState()) {
                Camera.get().generateViewMatrix();
                float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
                float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
                float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
                screenSpaceX *= displacementRate;
                screenSpaceY *= displacementRate;

                Vector4f cameraSpaceNear = new Vector4f(
                        (float)(screenSpaceX * Camera.get().getNear()),
                        (float)(screenSpaceY * Camera.get().getNear()),
                        (float)(-Camera.get().getNear()), 1);
                Vector4f cameraSpaceFar = new Vector4f(
                        (float)(screenSpaceX * Camera.get().getFar()),
                        (float)(screenSpaceY * Camera.get().getFar()),
                        (float)(-Camera.get().getFar()), 1);

                Matrix4f tmpView = new Matrix4f();
                Camera.get().getViewMatrix().transpose(tmpView);
                Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
                Vector4f worldSpaceNear = new Vector4f();
                Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
                Vector4f worldSpaceFar = new Vector4f();
                Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);

                Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
                Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x,
                        worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
                rayDirection.normalise();
                Ray clickRay = new Ray(rayPosition, rayDirection);

                Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
                float largestEnteringValue, smallestExitingValue, temp,
                      closestEnteringValue = Camera.get().getFar()+0.1f;
                Drawable closestDrawableHit = null;

                for (Drawable d : this.worldModel.getDrawableThings()) {
                    // Calculate AABB for each object... needs to be moved later...
                    firstVertex = true;
                    for (Surface surface : d.getSurfaces()) {
                        for (Vertex v : surface.getVertices()) {
                            worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
                            worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
                            worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
                            worldPosition = worldPosition.rotate(d.getRotation());
                            if (firstVertex) {
                                maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
                                minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
                                firstVertex = false;
                            } else {
                                if (worldPosition.x > maxX) { maxX = worldPosition.x; }
                                if (worldPosition.x < minX) { minX = worldPosition.x; }
                                if (worldPosition.y > maxY) { maxY = worldPosition.y; }
                                if (worldPosition.y < minY) { minY = worldPosition.y; }
                                if (worldPosition.z > maxZ) { maxZ = worldPosition.z; }
                                if (worldPosition.z < minZ) { minZ = worldPosition.z; }
                            }
                        }
                    }

                    // Ray/slabs intersection test...
                    // clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
                    // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f
                    // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f
                    // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f
                    largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x
                            + minX/clickRay.getDirection().x;
                    temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
                    if (largestEnteringValue < temp) { largestEnteringValue = temp; }
                    temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
                    if (largestEnteringValue < temp) { largestEnteringValue = temp; }

                    smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x
                            + maxX/clickRay.getDirection().x;
                    temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
                    if (smallestExitingValue > temp) { smallestExitingValue = temp; }
                    temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
                    if (smallestExitingValue < temp) { smallestExitingValue = temp; }

                    if (largestEnteringValue > smallestExitingValue) {
                        //System.out.println("Miss!");
                    } else {
                        if (largestEnteringValue < closestEnteringValue) {
                            closestEnteringValue = largestEnteringValue;
                            closestDrawableHit = d;
                        }
                    }
                }

                if (closestDrawableHit != null) {
                    System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", "
                            + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z);
                    this.worldModel.removeDrawableThing(closestDrawableHit);
                }
            }
        }

    I just don't understand what's wrong. The rays are shooting, and I do hit stuff that gets removed, but the results are very strange: sometimes it removes the thing I'm clicking at, sometimes it removes things that aren't even close to what I'm clicking at, and sometimes it removes nothing at all.

    Edit: Okay, so I have continued searching for errors, and by debugging the ray (painting small dots where it travels) I can now see that there is something obviously wrong with the ray I'm sending out... It has its origin near the world center (nearer or further away depending on where on the screen I'm clicking) and always shoots to the same position no matter where I direct my camera... My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is).

    Edit 2: This is how I calculate it currently:

        private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};

        public Vector getCameraDirectionVector() {
            Vector actualEye = this.getActualEyePosition();
            return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
        }

        public Vector getActualEyePosition() {
            return eye.rotate(this.getRotation());
        }

        public void generateViewMatrix() {
            Vector cameraDirectionVector = getCameraDirectionVector().normalize();
            Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
            Vector up = Vector.cross(side, cameraDirectionVector);
            viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
            viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
            viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
            /*
            Vector actualEyePosition = this.getActualEyePosition();
            Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x,
                    this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
            Vector xaxis = Vector.cross(upVector, zaxis).normalize();
            Vector yaxis = Vector.cross(zaxis, xaxis);
            viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
            viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
            viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
            viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition);
            viewMatrixDouble[3][1] = -Vector.dot(yaxis, actualEyePosition);
            viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
            */
            viewMatrix = new Matrix4f();
            viewMatrix.load(getViewMatrixAsFloatBuffer());
        }

    I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it... I have read a lot of threads and documentation about this, but I can't seem to wrap my head around it...

    Edit 3: Okay, with the help of Byte56 (thanks a lot for the help) I have now concluded that it's not the view matrix that is the problem... I still get the same messed-up result; anyone who thinks they can find the error in my code, I certainly can't; I have been working on this for three days now :(
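
    Two hedged observations from reading the code above, independent of the view matrix: first, ((Mouse.getX()/800f/2f)-1.0f) divides by 800 and then by 2 before subtracting 1, which maps the mouse to [-1, -0.5] instead of [-1, 1] and would explain a ray that barely responds to the cursor; second, in the slab test the z exit comparison uses < where the other two axes use >, so the smallest exit value is computed incorrectly. The intended lines would look like:

        // NDC mapping: scale to [0, 2] first, then shift to [-1, 1].
        float screenSpaceX = ((Mouse.getX() / 800f) * 2f - 1.0f) * Camera.get().getAspectRatio();

        // Slab test, exit side: take the MINIMUM across axes (note the > on z as well).
        temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
        if (smallestExitingValue > temp) { smallestExitingValue = temp; }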

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script with two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to instantiate the Blender prefab, set the terrain as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                    new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    and first thought it didn't work. However, I later noticed that it did in fact work, in the sense that the scale was applied right at the start, but right afterwards the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is: Why doesn't it take the scale transform when I do it at the beginning, in the Start() method — why do I have to keep scaling it in Update()? And why does it take so long for the scale to "apply/show up"?
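
    A hedged guess at the cause: models imported from .blend files often come with an Animator (or legacy Animation) component whose clips key the transform, and anything keyed is re-applied every frame after Update(), which would both undo a Start()-time scale and force the Update()-time scale to keep fighting it. Stripping the component on the instance (or scaling an empty parent instead) sidesteps that; a sketch:

        void Start()
        {
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                    new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;

            // If the imported prefab animates its transform, the Animator wins every frame.
            var animator = aStarCellHighlight.GetComponent<Animator>();
            if (animator != null) Destroy(animator); // or check for a legacy Animation component

            // With the animation out of the way, a one-off scale in Start() should stick.
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }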
