Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 543/1016 | < Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >

  • Voronoi regions of a (convex) polygon.

    - by Xavura
    I'm looking to add circle-polygon collisions to my Separating Axis Theorem collision detection. The Metanet Software tutorial (http://www.metanetsoftware.com/technique/tutorialA.html#section3) on SAT, which I discovered in the answer to a question I found when searching, talks about Voronoi regions. I'm having trouble finding material on how to calculate these regions for an arbitrary convex polygon, and also on how to determine whether a point is in one of them and, if so, which one. The tutorial does contain source code, but it's a .fla and I don't have Flash, unfortunately.

    Read the article
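    A minimal sketch of one way to do this (not taken from the tutorial's source): assuming the polygon's vertices are in counter-clockwise order, every exterior point falls either into the Voronoi region of a vertex or of an edge, and a couple of dot products per feature are enough to classify it. Vec2 and Region below are made-up helpers.

        // Minimal sketch: classify a point against the Voronoi regions of a convex
        // polygon with vertices in counter-clockwise (CCW) order. Vec2 and Region
        // are hypothetical helpers, not from the Metanet source.
        final class Vec2 {
            final float x, y;
            Vec2(float x, float y) { this.x = x; this.y = y; }
            Vec2 sub(Vec2 o)  { return new Vec2(x - o.x, y - o.y); }
            float dot(Vec2 o) { return x * o.x + y * o.y; }
        }

        final class Region {
            final boolean isVertex; // true -> nearest feature is vertex `index`
            final int index;        // vertex index, or start index of the edge
            Region(boolean isVertex, int index) { this.isVertex = isVertex; this.index = index; }
        }

        final class VoronoiRegions {
            // Returns the region containing p, or null if p is inside the polygon.
            static Region classify(Vec2[] v, Vec2 p) {
                int n = v.length;
                for (int i = 0; i < n; i++) {
                    Vec2 a = v[i];
                    Vec2 b = v[(i + 1) % n];
                    Vec2 prev = v[(i - 1 + n) % n];
                    Vec2 ab = b.sub(a);
                    Vec2 ap = p.sub(a);
                    // > 0 means p lies on the interior side of edge a->b (CCW winding).
                    float cross = ab.x * ap.y - ab.y * ap.x;
                    float t = ap.dot(ab) / ab.dot(ab);
                    // Edge region: the projection lands on the edge and p is outside it.
                    if (cross < 0f && t >= 0f && t <= 1f) return new Region(false, i);
                    // Vertex region of v[i]: "before" this edge and "past" the previous one.
                    Vec2 prevEdge = a.sub(prev);
                    if (ap.dot(ab) < 0f && ap.dot(prevEdge) > 0f) return new Region(true, i);
                }
                return null; // no exterior region matched: p is inside the polygon
            }
        }

    For the circle test, classify the circle's centre: null means it is inside the polygon (definite overlap); a vertex region means comparing the radius against the distance to that vertex; an edge region means comparing it against the perpendicular distance to that edge.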

  • How do you ensure consistent experience across multiple graphics cards (or even driver versions)?

    - by Grigory Javadyan
    So I was writing a simple 2D game with OpenGL and SDL and hit a problem with awful tearing when running in windowed mode (even though I explicitly asked SDL_SetVideoMode to use double buffering). I didn't worry about it too much because most of the time the game grabs the entire screen; windowed mode is just for debugging. Anyway, yesterday I updated my nVidia drivers and the tearing disappeared; the game runs smoothly and looks nice in windowed mode too. I can see how the problem may be in the graphics driver, but this leads to a question. Obviously, professional game developers have to deal with a lot of different hardware/software configurations. What techniques do they use to make sure the game looks roughly the same on different graphics cards, or even on the same model of graphics card with different driver versions?

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard causes a performance drain. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both objects first to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth testing and depth writing. I draw all objects sorted by their depth and that's all; there's no need for the GPU to do anything fancy. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • How to decide which way a sprite should face

    - by user22135
    My first question here :] I am just starting game development with Slick2D and the MarteEngine. When I move my sprite left and right I play a walk animation, but when the key is released, how can I decide which way the sprite should face? Here's my Player.java: http://pastebin.com/WjQ09Fij Am I doing things right? Here's the NetBeans project without libs: http://uppit.com/84vdufs35aas/SSheet.7z [< 45 KB]. Please help, thanks in advance.

    Read the article
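    A minimal sketch (Slick2D-style, not based on the linked Player.java) of the usual approach: keep a facing field that is updated whenever a movement key is held, and when nothing is pressed pick the idle animation for that stored facing. The field and animation names below are made up.

        import org.newdawn.slick.Animation;
        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Input;

        public class Player {
            private enum Facing { LEFT, RIGHT }

            private Facing facing = Facing.RIGHT;      // survives key release
            private Animation walkLeft, walkRight;     // assumed to be set up elsewhere
            private Animation idleLeft, idleRight;
            private Animation current;
            private float x, y;

            public void update(GameContainer container, int delta) {
                Input input = container.getInput();
                if (input.isKeyDown(Input.KEY_LEFT)) {
                    facing = Facing.LEFT;
                    current = walkLeft;
                    x -= 0.15f * delta;
                } else if (input.isKeyDown(Input.KEY_RIGHT)) {
                    facing = Facing.RIGHT;
                    current = walkRight;
                    x += 0.15f * delta;
                } else {
                    // No key held: keep facing the way we last moved.
                    current = (facing == Facing.LEFT) ? idleLeft : idleRight;
                }
                current.update(delta);
            }

            public void render() {
                current.draw(x, y);
            }
        }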

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture all I get is black. If I use GLUtils.texImage2D() to load a png file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128*128*4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for(int row = 0; row != 128; row++) {
            for(int i = 0; i != 128; i++) {
                bb.put((byte)0x80);
                bb.put((byte)0xFF);
                bb.put((byte)0xFF);
                bb.put((byte)i);
            }
        }
        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];

    Read the article
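    One thing worth checking (a guess, not verified against the full project): in OpenGL ES 2.0 the default minification filter expects mipmaps, so a texture with only level 0 is "incomplete" and samples as black; also, glEnable(GL_TEXTURE_2D) is fixed-function state and not valid in ES 2.0. A sketch of the parameters that are often missing:

        // Sketch of a likely fix: set the filters explicitly so the texture is
        // complete without mipmaps, and skip the invalid glEnable(GL_TEXTURE_2D).
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);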

  • How does your team handle support requests

    - by Skeep
    Hi all, I have just taken over as manager at a company, and at the moment they are very rigid in how they approach development: everyone gets a list of what they are doing each week. My question is, how does your company balance support with development, and if an important support request comes in, how is it processed without disturbing the flow of the developers? Lastly, do you use any software to log support requests and development tasks? Thanks.

    Read the article

  • Resume Button error

    - by user3178359
    I have two classes. If I press the pause button, it shows the resume, retry and menu buttons and the game time is paused. But when I press resume, the game time is still paused. Please help me: how do I make the game time continue?

    Code for the pause button:

        using UnityEngine;
        using System.Collections;

        public class pause : MonoBehaviour {
            public GUITexture showMenu;
            public GUITexture btnResume;
            public bool gamePaused = false;

            void OnMouseDown() {
                gamePaused = true;
                Time.timeScale = 0;
                showMenu.pixelInset = new Rect(220, 200, showMenu.pixelInset.width, showMenu.pixelInset.height);
                btnResume.pixelInset = new Rect(300, 300, btnResume.pixelInset.width, btnResume.pixelInset.height);

    Code for the resume button:

        using UnityEngine;
        using System.Collections;

        public class btResume : pause {
            //public GUITexture shoe;
            void onMouseDown() {
                base.gamePaused = false;
                Time.timeScale = 1;
                btnResume.pixelInset = new Rect(300, -300, btnResume.pixelInset.width, btnResume.pixelInset.height);
                showMenu.pixelInset = new Rect(220, -200, showMenu.pixelInset.width, showMenu.pixelInset.height);
            }
        }

    Read the article

  • loading a heightmap as texture in shader

    - by wtherapy
    I have a height map of 256x256, containing, for each cell, not only a height as a plain float value (not 0-1) but also 2 gradient values (for X and Y), also as plain float values (not 0-1). I have uploaded the texture via normal texture loading:

        glEnable( GL_TEXTURE_2D );
        glGenTextures( 1, &m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glBindTexture( GL_TEXTURE_2D , m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB32F, unW + 1, unH + 1, 0, GL_RGB, GL_FLOAT, pvBytes );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());

    As an aside, the debug output is: Err 500, Err 0, Err 0, Err 0, Err 500, Err 500, Err 0, Err 0. pvBytes is a 256x256 array of

        typedef struct _tGradientHeightCell {
            float v;
            float px;
            float py;
        } TGradientHeightCell, *LPTGradientHeightCell;

    Then,

        m_ugl_HeightMapTexture = glGetUniformLocation(m_uglProgram, "TexHeightMap");

    and I bind it for drawing via:

        glEnable(GL_TEXTURE_2D );
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D , pTexture->GetID());
        glUniform1i(m_ugl_HeightMapTexture, 0);

    In the shader, I just access it:

        uniform sampler2D TexHeightMap;

        vec4 GetVertCellParameters( uint i, uint j ) {
            return texture( TexHeightMap, vec2( i, j ) );
        }

        vec4 vH00 = GetVertCellParameters( i, j );

    My problem is that when one of the values in TGradientHeightCell (v, px, py) is negative, the texture is corrupted. I need the values to be passed exactly as I have them in memory. Any help appreciated.

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for finding the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video author uses the negative position vector AO as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a normal vector to this edge, lying in the plane that the edge and AO form. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the negative position vector of the newest simplex point would always make a good direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (yet unfinished) implementation, I wanted to ask whether this method of forming the direction vector is usable or not.

    Read the article

  • Isometric Collision Detection

    - by Sleepy Rhino
    I am having some issues trying to detect collisions between two isometric tiles. I have tried plotting the lines between each point on the tile and then checking for line intercepts, but that didn't work (probably due to an incorrect formula). After looking into this for a while today, I believe I am overthinking it and there must be an easier way. I am not looking for code, just some advice on the best way to detect the overlap.

    Read the article
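    If both tiles are the same size and are axis-aligned diamonds, the overlap test collapses to a single inequality, because the Minkowski sum of a diamond with itself is just a diamond with doubled extents. A minimal sketch (names are made up):

        // Two same-sized, axis-aligned isometric tiles (diamonds) centred at
        // (x1, y1) and (x2, y2), each halfW wide and halfH tall from centre to tip.
        static boolean tilesOverlap(float x1, float y1, float x2, float y2,
                                    float halfW, float halfH) {
            float dx = Math.abs(x2 - x1);
            float dy = Math.abs(y2 - y1);
            // Point-in-diamond test against the "grown" diamond (2*halfW by 2*halfH).
            return dx / (2f * halfW) + dy / (2f * halfH) < 1f;
        }

    For tiles of different sizes or other convex shapes, SAT with the diamond's two edge normals (or GJK) is the general fallback.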

  • How does having the Debugger change the game execution on an XBOX 360?

    - by Sebastian Gray
    So I thought my issue was related to the difference between a Debug and a Release build, as per this question: What's the difference between a "Release" Xbox 360 build and a "Debug" one? But I've since found that if I build a Creators Club version of the game using a Debug build and deploy it to the Xbox, I get the same experience I had with the Release version of my game. However, if I run the game from Visual Studio using F5, having set the Xbox as the default platform, then the game runs as expected. If I change from Debug to Release and run with CTRL+F5, the game also works as expected. How would running the game with the debugger attached change the results I am getting in game? Is there any way I can use the same approach, or change the default compilation of the game, so that I can use this approach to release my game?

    Read the article

  • Handling hitboxes

    - by TheBroodian
    So I have an issue that I'm laughing at myself about, because it really seems like something I should be able to figure out pretty quickly. I am designing a 2D action platformer; I have a playable character, and a dummy 'punching bag' character that I've created for testing purposes. I've just gotten enough of both of them done that I can start prototyping and testing them at runtime. Then I realized: neither of them has a reference to the other (intentionally so), so how do I check the hitboxes stored within my playable character from my dummy character? Long story short, how do I make my dummy know when he's been punched by my hero?

    Read the article
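    One common way to keep the two characters ignorant of each other is to let a third object mediate: each frame it asks every entity for its active hitboxes and its hurtbox, and calls back the ones that intersect. A minimal sketch with made-up names:

        import java.awt.Rectangle;
        import java.util.List;

        // Neither character references the other; a combat manager collects
        // hitboxes/hurtboxes each frame and tells entities when they were hit.
        interface Combatant {
            List<Rectangle> activeHitboxes();   // attacks currently out
            Rectangle hurtbox();                // where this entity can be hit
            void onHit(Combatant attacker);     // callback when something connects
        }

        final class CombatManager {
            void resolve(List<Combatant> entities) {
                for (Combatant attacker : entities) {
                    for (Combatant target : entities) {
                        if (attacker == target) continue;
                        for (Rectangle hit : attacker.activeHitboxes()) {
                            if (hit.intersects(target.hurtbox())) {
                                target.onHit(attacker);   // the dummy learns it was punched
                                break;
                            }
                        }
                    }
                }
            }
        }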

  • How would I use JBox2d in Java?

    - by BluFire
    So I did some research and found Box2D. I then proceeded to download it and the testbed. Now that I have it, I don't know how to use it properly. I'm looking for a clear, simple answer on how to use the engine. What I did was put it into a lib folder and reference the JBox2D jar file. After that I got stuck. How can I use this to program games for Android? I'm very confused, since Box2D was intended for C++.

    Read the article
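    For orientation, a minimal JBox2D "hello world" might look like the sketch below (the World constructor varies between versions; some releases take an extra doSleep flag). JBox2D is pure Java, so on Android the same code runs inside your game loop and you draw the bodies yourself; nothing here is Android-specific.

        import org.jbox2d.collision.shapes.PolygonShape;
        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.Body;
        import org.jbox2d.dynamics.BodyDef;
        import org.jbox2d.dynamics.BodyType;
        import org.jbox2d.dynamics.FixtureDef;
        import org.jbox2d.dynamics.World;

        public class HelloJBox2d {
            public static void main(String[] args) {
                // Gravity pointing down. Some JBox2D versions use new World(gravity, doSleep).
                World world = new World(new Vec2(0f, -10f));

                // Static ground box.
                BodyDef groundDef = new BodyDef();
                groundDef.position.set(0f, -10f);
                Body ground = world.createBody(groundDef);
                PolygonShape groundShape = new PolygonShape();
                groundShape.setAsBox(50f, 10f);
                ground.createFixture(groundShape, 0f);

                // Dynamic falling box.
                BodyDef boxDef = new BodyDef();
                boxDef.type = BodyType.DYNAMIC;
                boxDef.position.set(0f, 4f);
                Body box = world.createBody(boxDef);
                PolygonShape boxShape = new PolygonShape();
                boxShape.setAsBox(1f, 1f);
                FixtureDef fixture = new FixtureDef();
                fixture.shape = boxShape;
                fixture.density = 1f;
                fixture.friction = 0.3f;
                box.createFixture(fixture);

                // Step the simulation; in a game this is driven by the game loop.
                for (int i = 0; i < 60; i++) {
                    world.step(1f / 60f, 6, 2);
                    Vec2 pos = box.getPosition();
                    System.out.printf("x=%.2f y=%.2f%n", pos.x, pos.y);
                }
            }
        }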

  • What is causing this behavior with the movement of Pong Ball in 2D? [closed]

    - by thegermanpole
    Edit: after running it through the debugger, it turned out I had the display function called with (x, x)... TIL how to use a debugger. I've been trying to teach myself C++ and SDL with the Lazy Foo tutorial and seem to have run into a roadblock. The code below is the movement function of my Dot class, which controls the ball. The ball seems to ignore yvel and moves with xvel to the bottom right. The code should be pretty readable; the rest of the relevant facts are: variables have meaningful names, constants are in caps, dotrad is the radius of my dot, yvel and xvel are set to 5 in the constructor, and the dot is created at x and y equal to 100. When I comment out the x movement block it doesn't move, but if I comment out the y movement block, it keeps going down to the right.

        void Dot::move()
        {
            if(((y+yvel+dotrad) <= SCREEN_HEIGHT) && (0 <= (y-dotrad+yvel))) {
                y += yvel;
            } else {
                yvel = -1*yvel;
            }
            if(((x+xvel+dotrad) <= SCREEN_WIDTH) && (0 <= (x-dotrad+xvel))) {
                x += xvel;
            } else {
                xvel = -1*xvel;
            }
        }

    Read the article

  • Achieve anisotropic filtering

    - by fedab
    I want to apply anisotropic filtering to my scene. I use SharpDX (DirectX 11) and C#. How do I set up anisotropic filtering in my shader? Currently I try this in the shader:

        Texture2D tex;
        sampler textureSampler = sampler_state
        {
            Texture = (tex);
            MipFilter = Anisotropic;
            MagFilter = Anisotropic;
            MinFilter = Anisotropic;
            MaxAnisotropy = 16;
        };

        float4 PShader(float4 position : SV_POSITION, float4 color : COLOR, float2 tex0 : TEXCOORD0) : SV_TARGET
        {
            float4 textureColor;
            textureColor = tex.Sample(textureSampler, tex0) * color;
            return textureColor;
        }

    I get my object, textured, but it is not filtered anisotropically. I can write anything in the parameters, even invalid things, and I don't get any errors; the result is the same, objects without anisotropic filtering applied. Do I have to set this in the shader? Can I do it with a SamplerState instead? I tested that but didn't get a result either. An outline of the steps I have to take would be helpful.

    Read the article

  • Can a high FPS negatively affect how a program runs?

    - by rphello101
    Yeah, I know this is a broad question and will get downvoted; I'm just hoping for some answers before it gets closed. Anyway, I'm using Slick2D/Java to play around with graphics. I'm having some trouble trying to move an image. The weird thing is, the code works just fine on my laptop, but the image sporadically moves to (0,0) and stops on my desktop. The only difference between the two is that the FPS is about 500 on my laptop and 6600 on my desktop. Can that affect it, or does someone have any ideas of what to check?

    Read the article
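    A likely suspect (only a guess without seeing the code): movement that is not scaled by the frame's delta time, or positions stored as integers, behaves very differently at 500 FPS and at 6600 FPS; at the latter, Slick2D's integer millisecond delta can even be 0 for many frames. A sketch of frame-rate-independent movement:

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Input;

        public class MovingImage {
            private float x = 100f, y = 100f;
            private static final float SPEED = 0.2f; // pixels per millisecond

            // Slick2D passes elapsed milliseconds as "delta"; scaling by it makes
            // movement identical at 500 FPS and 6600 FPS. Keeping x/y as floats
            // avoids losing tiny per-frame steps to rounding.
            public void update(GameContainer container, int delta) {
                Input input = container.getInput();
                if (input.isKeyDown(Input.KEY_RIGHT)) x += SPEED * delta;
                if (input.isKeyDown(Input.KEY_LEFT))  x -= SPEED * delta;
                if (input.isKeyDown(Input.KEY_DOWN))  y += SPEED * delta;
                if (input.isKeyDown(Input.KEY_UP))    y -= SPEED * delta;
            }
        }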

  • When attaching AI to a vehicle should I define all steps or try Line of Sight?

    - by ThorDivDev
    This problem is related to an intersection simulation I am building for university. I will try to make it as general as possible. I am trying to assign AI to a vehicle using the JMonkeyEngine platform. AIGama_JmonkeyEngine explains that if you wish to create a car that follows a path, you can define the path in steps. If there is no physics attached whatsoever, then all you need to do is define the x, y, z values of where you want the object to appear at each subsequent step. I am attaching the VehicleControl that implements jBullet. In this case the author mentions that I would need to define the steering and accelerating behaviours at each step. I was trying to use ghost controls that represented waypoints; on colliding with one, the car would decide what to do next, like stopping at a red light. This didn't work so well. The car doesn't face the right way.

        public void update(float tpf) {
            Vector3f currentPos = aiVehicle.getPhysicsLocation();
            Vector3f baseforwardVector = currentPos.clone();
            Vector3f forwardVector;
            Vector3f subsVector;
            if (currentState == ObjectState.Running) {
                aiVehicle.accelerate(-800);
            } else if (currentState == ObjectState.Seeking) {
                baseforwardVector = baseforwardVector.normalize();
                forwardVector = aiVehicle.getForwardVector(baseforwardVector);
                subsVector = pointToSeek.subtract(currentPos.clone());
                System.out.printf("baseforwardVector: %f, %f, %f\n", baseforwardVector.x, baseforwardVector.y, baseforwardVector.z);
                System.out.printf("subsVector: %f, %f, %f\n", subsVector.x, subsVector.y, subsVector.z);
                System.out.printf("ForwardVector: %f, %f, %f\n", forwardVector.x, forwardVector.y, forwardVector.z);
                if (pointToSeek != null && pointToSeek.x + 3 >= currentPos.x && pointToSeek.x - 3 <= currentPos.x) {
                    aiVehicle.steer(0.0f);
                    aiVehicle.accelerate(-40);
                } else if (pointToSeek != null && pointToSeek.x > currentPos.x) {
                    aiVehicle.steer(-0.5f);
                    aiVehicle.accelerate(-40);
                } else if (pointToSeek != null && pointToSeek.x < currentPos.x) {
                    aiVehicle.steer(0.5f);
                    aiVehicle.accelerate(-40);
                }
            } else if (currentState == ObjectState.Stopped) {
                aiVehicle.accelerate(0);
                aiVehicle.brake(40);
            }
        }

    Read the article

  • Observer Pattern Implementation

    - by user17028
    To teach myself basic game programming, I am going to program a clone of Pong. I will use the Observer design pattern, with an interface between the input and the game engine. However, I'm not sure what the interface should do. One idea I had was for the input interface to tell the game engine that (e.g.) the screen was clicked, and then let the game engine decide what to do with that information (shoot a bullet, for example). Another idea was for the input interface, having caught the mouse click, to tell the game engine to shoot a bullet. Which approach would be better for me to use?

    Read the article
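    A sketch of the first option, which is the more common split: the input layer observes the raw device events, maps them to game-level commands, and the engine only ever hears about intents such as "fire". Either option can work; this one just keeps device details out of the engine. All names below are made up.

        import java.util.ArrayList;
        import java.util.List;

        // The input layer maps raw events to game commands; the engine subscribes
        // to those commands and never sees "mouse clicked" directly.
        interface GameCommandListener {
            void onFire();
            void onMovePaddle(float dy);
        }

        final class InputMapper {
            private final List<GameCommandListener> listeners = new ArrayList<>();

            void addListener(GameCommandListener l) { listeners.add(l); }

            // Called by whatever backend delivers raw events (Swing, LWJGL, ...).
            void mouseClicked(int x, int y) {
                for (GameCommandListener l : listeners) l.onFire();
            }

            void mouseMoved(int dy) {
                for (GameCommandListener l : listeners) l.onMovePaddle(dy);
            }
        }

        final class GameEngine implements GameCommandListener {
            @Override public void onFire() { /* spawn the ball or bullet here */ }
            @Override public void onMovePaddle(float dy) { /* move the player paddle */ }
        }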

  • How to populate a form list with buttons using javascript

    - by StealingMana
    I made a script so that when you press one button (accessories), the selection (mylist) is populated with one array (accessoryData), and when you hit the other button (weapons), the other array (weaponData) populates the selection. However, in the current state of the code, the second button press does not re-populate the selection. What is wrong here? Also, if there is a more efficient way to do this, that might be helpful. Full code:

        function runList(form, test) {
            var html = "";
            var x;
            dataType(test);
            while (x < dataType.length) {
                html += "<option>" + dataType[x];
                x++;
            }
            document.getElementById("mylist").innerHTML = html;
        }

    Read the article

  • Movement on the X an Z axis are combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X, it moves left and forwards!

        setPosition(getPosition().X + 2 /*times deltatime*/, getPosition().Y, getPosition().Z);

    I was astonished that XNA doesn't have its own setPosition function, so I made a parent class for all objects with a setPosition and Draw function. SetPosition simply edits a variable "mPosition" and passes it to the common draw function:

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[block.Bones.Count];
        block.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in block.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index] *
                    Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y)) *
                    Matrix.CreateTranslation(mPosition);
                effect.View = game1.getView();
                effect.Projection = game1.getProjection();
            }
            // Draw the mesh, using the effects set above.
            mesh.Draw();
        }

    I tried to work it out by attempting to increment and decrement the Z axis, but nothing happens?! So using the X axis changes the object's X and Z axes, but changing Z does nothing. Great. So how do I separate the X and Z axis movement?

    Read the article

  • What is the minimum of shader I need to use to run basic calculation on GPU?

    - by Jinxi
    I read that the Hull Shader, Domain Shader, Geometry Shader and Pixel Shader are optional. So, is the Vertex Shader optional too? If not: what does a basic Vertex Shader look like? Just a simple pass-through? Is the Vertex Shader necessary to tell what kind of primitive structure (fans/strips or meshes) is used? What can I do with just the vertex shader? Do the fixed-function parts work without programming any of the programmable stages?

    Read the article

  • Scalability of multi-threading in game server

    - by Taylor Hill
    What is a reasonable number of threads for a simple 2D MMO in Java? Is it reasonable to have two threads per connection, one for the input stream and one for the output stream? The reason I ask is that I use a blocking method on the input stream, and a workaround seems unnecessarily complex if I were to try to get around it without adding threads. This is mostly for my own edification; I don't expect to ever have 5 million people playing it, or even 5, but I'm wondering what a good scalable solution is, and whether this is reasonable for a small server (<30 connections).

    Read the article
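    Two threads per connection is perfectly workable at fewer than 30 connections. The usual scalable alternative is java.nio with a single selector thread (or a small pool), which removes the blocking-read problem without a thread per stream. A minimal sketch (error handling and real packet framing omitted):

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        // One Selector services every connection, so the thread count no longer
        // grows with the player count.
        public class SelectorServer {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(5000));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocate(1024);
                while (true) {
                    selector.select();                       // blocks until something is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            int read = client.read(buffer);
                            if (read == -1) { key.cancel(); client.close(); continue; }
                            buffer.flip();
                            client.write(buffer);            // echo back; replace with game logic
                        }
                    }
                }
            }
        }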

  • Checking validation of entries in a Sudoku game written in Java

    - by Mico0
    I'm building a simple Sudoku game in Java which is based on a matrix (an array[9][9]), and I need to validate my board state according to these rules: all rows have the digits 1-9, all columns have the digits 1-9, and each 3x3 grid has the digits 1-9. This function should be as efficient as possible; for example, if the first check fails, I believe there's no need to check the other cases, and so on (correct me if I'm wrong). When I tried doing this I ran into a conflict: should I do one large for loop and check columns and rows inside it (in two other loops), or should I do each test separately and verify each case on its own? (Please don't suggest too advanced solutions with other class/object helpers.) This is what I thought about. The main validating function (which I want pretty clean):

        public boolean testBoard() {
            boolean isBoardValid = false;
            if (validRows()) {
                if (validColumns()) {
                    if (validCube()) {
                        isBoardValid = true;
                    }
                }
            }
            return isBoardValid;
        }

    and different methods to do each specific test, such as:

        private boolean validRows() {
            int rowsDigitsCount = 0;
            for (int num = 1; num <= 9; num++) {
                boolean foundDigit = false;
                for (int row = 0; (row < board.length) && (!foundDigit); row++) {
                    for (int col = 0; col < board[row].length; col++) {
                        if (board[row][col] == num) {
                            rowsDigitsCount++;
                            foundDigit = true;
                            break;
                        }
                    }
                }
            }
            return rowsDigitsCount == 9 ? true : false;
        }

    I don't know if I should keep doing the tests separately, because it looks like I'm duplicating my code.

    Read the article
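    One way to get the early exit without three near-identical methods is a single pass that tracks which digits have been seen per row, per column and per 3x3 box, returning false at the first duplicate. A sketch, assuming a completely filled 9x9 board passed in as a parameter:

        // One-pass validation: duplicates per row, column and 3x3 box are tracked
        // with boolean "seen" flags; the method returns false at the first problem
        // it finds, so nothing else is checked once the board is known invalid.
        public static boolean isValidBoard(int[][] board) {
            boolean[][] rowSeen = new boolean[9][10];
            boolean[][] colSeen = new boolean[9][10];
            boolean[][] boxSeen = new boolean[9][10];
            for (int row = 0; row < 9; row++) {
                for (int col = 0; col < 9; col++) {
                    int v = board[row][col];
                    if (v < 1 || v > 9) return false;       // out of range (or empty cell)
                    int box = (row / 3) * 3 + col / 3;      // index of the 3x3 grid
                    if (rowSeen[row][v] || colSeen[col][v] || boxSeen[box][v]) {
                        return false;                       // duplicate found, stop early
                    }
                    rowSeen[row][v] = colSeen[col][v] = boxSeen[box][v] = true;
                }
            }
            return true;
        }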

  • XNA: draw a sprite in 3d, is that possible?

    - by Heisenbug
    Until now I have always used sprites to draw in 2D:

        spriteBatch.Draw(myTexture, rectangle, color);

    (I suppose the texture is bound internally to 2 triangles and then scaled.) Now I'm porting my game to 3D and I have to draw several planes (walls, floor, roof, ...). Do I need to manually bind a texture to geometry (for example using VertexPositionColorTexture with a VertexBuffer and IndexBuffer), or is there any simpler way to do it? I'm looking for something like spriteBatch.Draw with the clipping rectangle specified in 3D space:

        spriteBatch.Draw(myTexture, rectangleIn3D, color);

    Read the article

  • Game planning and software design? I feel that UML is not convenient

    - by user1542
    At my university they always emphasize and hype UML design and the like, which I feel is not going to work well with game structure design. Now I just want some professional advice on how I should begin designing my game. The story is: I have some programming skill and have done many minor games, such as getting a 2D platformer working to some extent. The problem I find with my programs is the poor quality of the design. After coding for a while, things start to break down due to poor planning (when I add a new feature, it tends to force me to recode the whole program). However, planning everything out without a single design flaw is a bit too ideal. Therefore, any advice on how I should plan my game? How should I put it into visible pictures, so that my friends and I can get an overview of the design? I plan to start coding a game with a friend. This is going to be my first teamwork, so any professional advice would be appreciated. Are there any alternatives to UML? Another question: what does "prototyping" normally look like?

    Read the article
