Search Results


  • Factors to consider when building an algorithm for gun recoil

    - by Nate Bross
    What would be a good algorithm for calculating the recoil of a shooting gun's cross-hairs? What I've got now is something like this:
    1. Define min/max recoil based on weapon size.
    2. Generate a random "delta" movement.
    3. Apply the random value to the X axis, the Y axis, or both (only "up" on the Y axis).
    4. Scale the new delta based on the time since the previous shot (more recoil for full-auto).
    What I'm worried about is that this feels rather predictable. What other factors should one take into account when building recoil? While I'd like it to be somewhat predictable, I'd also like to keep players on their toes. I'm thinking about increasing the min/max recoil values by a (relatively) large amount and adding a weighting, so large recoils will be rarer, but it seems like a lot of effort to put into something I expected to be simple. Maybe this is just something that needs to be fine-tuned with additional playtesting, and more playtesters? I think it's important to note that recoil will be a large part of the game, and is a key factor in whether the game is fun and challenging or not.
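
    One way to express the weighting idea, as a minimal sketch (names are hypothetical; assumes XNA's Vector2 and a shared System.Random): squaring a uniform sample biases magnitudes toward the minimum, so large kicks stay possible but become rare.

        // Sketch: recoil with rare large kicks and a full-auto multiplier.
        static Vector2 ComputeRecoil(Random rng, float minKick, float maxKick, float timeSinceLastShot)
        {
            float u = (float)rng.NextDouble();
            float magnitude = minKick + (maxKick - minKick) * u * u; // biased toward minKick
            // Shots fired in quick succession kick harder (full-auto).
            float autoFactor = 1f + Math.Max(0f, 0.5f - timeSinceLastShot) * 2f;
            magnitude *= autoFactor;
            float x = ((float)rng.NextDouble() * 2f - 1f) * magnitude * 0.5f; // small sideways drift
            float y = -magnitude;                                             // always kick upward
            return new Vector2(x, y);
        }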

    Read the article

  • Physics engine that can handle multiple attractors?

    - by brice
    I'm putting together a game that will be played mostly under three-dimensional gravity. By that I mean multiple planets/stars/moons behaving realistically, with path plotting and path prediction in the gravity field. I have looked at a variety of physics engines, such as Bullet, Tokamak, or Newton, but none of them seem to be suitable, as I'd essentially have to re-write the gravity engine within their framework. Do you know of a physics engine that is capable of dealing with multiple bodies all attracted to one another? I don't need scenegraph management or rendering, just core physics. (Collision detection would be a bonus, as would rigid body dynamics.) My background is in physics, so I would be able to write an engine that uses Verlet integration or RK4 (or even Euler integration, if I had to), but I'd much rather adapt an off-the-shelf solution. [edit]: There are some great resources for physics simulation of n-body problems online, and on Stack Overflow.
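
    For reference, mutual attraction is compact to hand-roll; a minimal semi-implicit Euler sketch (hypothetical Body type; assumes XNA-style Vector3 and System.Collections.Generic):

        // Sketch: every body pulls on every other body, O(n^2) per step.
        class Body { public Vector3 Position, Velocity; public float Mass; }

        static void Step(List<Body> bodies, float g, float dt, float softening = 0.01f)
        {
            foreach (var b in bodies)
            {
                Vector3 accel = Vector3.Zero;
                foreach (var other in bodies)
                {
                    if (other == b) continue;
                    Vector3 r = other.Position - b.Position;
                    float d2 = r.LengthSquared() + softening * softening; // avoids the r = 0 singularity
                    accel += r * (g * other.Mass / (d2 * (float)Math.Sqrt(d2)));
                }
                b.Velocity += accel * dt; // velocity first: semi-implicit (symplectic) Euler
            }
            foreach (var b in bodies)
                b.Position += b.Velocity * dt;
        }

    Swapping in velocity Verlet or RK4 only changes Step; collision detection and rigid bodies can then be layered on from a conventional engine.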

    Read the article

  • How should I replan A*?

    - by Gregory Weir
    I've got a pathfinding boss enemy that seeks the player using the A* algorithm. It's a pretty complex environment, and I'm doing it in Flash, so the search can get a bit slow when it's searching over long distances. If the player was stationary, I could just search once, but at the moment I'm searching every frame. This takes long enough that my framerate is suffering. What's the usual solution to this? Is there a way to "replan" A* without redoing the entire search? Should I just search a little less often (every half-second or second) and accept that there will be a little inaccuracy in the path?
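
    A common compromise is to reuse the last path until the goal cell actually changes, and to cap how often a full search may run; a minimal sketch (AStar.FindPath and the grid Points are hypothetical here):

        // Sketch: replan only when the target moved and enough time has passed.
        private List<Point> currentPath;
        private Point lastGoal;
        private float timeSinceSearch;

        public void UpdatePath(Point bossCell, Point playerCell, float dt)
        {
            timeSinceSearch += dt;
            bool goalMoved = playerCell != lastGoal;
            if (currentPath == null || (goalMoved && timeSinceSearch > 0.5f))
            {
                currentPath = AStar.FindPath(bossCell, playerCell); // assumed helper
                lastGoal = playerCell;
                timeSinceSearch = 0f;
            }
            // Otherwise keep following currentPath; the slight staleness is rarely visible.
        }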

    Read the article

  • Derive an algorithm to match the best position

    - by Farooq Arshed
    I have pieces in my game which have stats and a cost assigned to them, and each can only be placed at a certain position. Let's say I have 50 pieces, e.g.: Piece1 = 100 stats, 10 cost, Position A. Piece2 = 120 stats, 5 cost, Position B. Piece3 = 500 stats, 50 cost, Position C. Piece4 = 200 stats, 25 cost, Position A. And so on. I have a board on which 12 pieces have to be allocated while staying within the board's cost. E.g. a board has positions A, B, C ... J, K, L and a cost X assigned to it. I have to figure out a way to place the best possible piece in each position while remaining within the cost specified by the board. Any help would be appreciated.
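
    Since each piece is tied to one position, this is a knapsack-style problem: one choice per position, total cost bounded. A dynamic-programming sketch (hypothetical Piece type; assumes using System.Linq):

        // Sketch: dp[c] = best total stats reachable at exactly cost c,
        // processing one board position at a time.
        class Piece { public int Stats, Cost; }

        static int BestScore(List<Piece>[] piecesByPosition, int budget)
        {
            var dp = new int[budget + 1];
            for (int i = 1; i <= budget; i++) dp[i] = int.MinValue; // unreachable
            foreach (var candidates in piecesByPosition)
            {
                var next = new int[budget + 1];
                for (int i = 0; i <= budget; i++) next[i] = int.MinValue;
                for (int c = 0; c <= budget; c++)
                {
                    if (dp[c] == int.MinValue) continue; // cost c not reachable
                    foreach (var p in candidates)
                        if (c + p.Cost <= budget)
                            next[c + p.Cost] = Math.Max(next[c + p.Cost], dp[c] + p.Stats);
                }
                dp = next;
            }
            return dp.Max(); // int.MinValue means no assignment fits the budget
        }

    Recovering which piece goes where just needs a back-pointer per dp cell.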

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to call every frame for multiple checks (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check whether any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles), after first screening out any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency).

        public static bool LOSCheck(Vector2 pos1, Vector2 pos2)
        {
            Vector2 currentPos = pos1;
            Vector2 perMove = (pos2 - pos1);
            perMove.Normalize();
            HashSet<ClutterObject> clutter = new HashSet<ClutterObject>();

            foreach (Room r in map.GetRooms())
            {
                if (r != null)
                {
                    foreach (ClutterObject c in r.GetClutter())
                    {
                        if (c != null && !(c.GetRectangle().X * perMove.X < 0)
                                      && !(c.GetRectangle().Y * perMove.Y < 0))
                        {
                            Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y);
                            if ((cVector - pos1).Length() < 1500)
                                clutter.Add(c);
                        }
                    }
                }
            }

            while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500))
            {
                Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0);
                foreach (ClutterObject c in clutter)
                {
                    if (position.Intersects(c.GetRectangle()))
                        return false;
                }
                currentPos += perMove;
            }
            return true;
        }

    I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient way to determine which objects may be in front of the viewer with greater precision than the rather broad 90-degree window I've given myself?
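
    The per-pixel stepping is the expensive part; a segment-vs-rectangle slab test gives the same yes/no answer with no loop along the ray. A sketch (assumes XNA's Vector2 and Rectangle):

        // Sketch: clip the segment against the rectangle's x- and y-slabs;
        // if a non-empty [tMin, tMax] survives both axes, the segment hits.
        static bool SegmentIntersectsRect(Vector2 p0, Vector2 p1, Rectangle r)
        {
            Vector2 d = p1 - p0;
            float tMin = 0f, tMax = 1f;
            float[] start = { p0.X, p0.Y };
            float[] dir = { d.X, d.Y };
            float[] lo = { r.Left, r.Top };
            float[] hi = { r.Right, r.Bottom };
            for (int axis = 0; axis < 2; axis++)
            {
                if (Math.Abs(dir[axis]) < 1e-6f)
                {
                    // Segment parallel to this slab: must already be inside it.
                    if (start[axis] < lo[axis] || start[axis] > hi[axis]) return false;
                }
                else
                {
                    float t1 = (lo[axis] - start[axis]) / dir[axis];
                    float t2 = (hi[axis] - start[axis]) / dir[axis];
                    if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                    tMin = Math.Max(tMin, t1);
                    tMax = Math.Min(tMax, t2);
                    if (tMin > tMax) return false;
                }
            }
            return true;
        }

    LOSCheck would then call this once per nearby rectangle instead of stepping pixel by pixel.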

    Read the article

  • Quaternion Camera

    - by Alex_Hyzer_Kenoyer
    Can someone help me figure out how to use a Quaternion with the PerspectiveCamera in libGDX, or in general? I am trying to rotate my camera around a sphere that is being drawn at (0,0,0). I am not sure how to go about setting up the quaternion correctly, manipulating it, and then applying it to the camera. Edit: Here is what I have tried to do so far.

        // This is how I set it up
        Quaternion orientation = new Quaternion();
        orientation.setFromAxis(Vector3.Y, 45);

        // This is how I am trying to update the rotations
        public void rotateX(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.X, amount);
            orientation.mul(temp);
        }

        public void rotateY(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.Y, amount);
            orientation.mul(temp);
        }

        public void updateCamera() {
            // This is where I am unsure how to apply the rotations to the camera
            // I think I should update the view and projection matrices?
            camera.view.mul(orientation);
            ...
        }
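
    The usual pattern is to keep one orientation quaternion as the source of truth and rebuild the view from scratch each frame, rather than multiplying into camera.view repeatedly. A general sketch in XNA-style types (libGDX's API differs in detail, and quaternion multiplication order conventions vary by library, so treat the operand order as an assumption to verify):

        Quaternion orientation = Quaternion.Identity;
        float distance = 10f;          // radius of the orbit around the sphere
        Vector3 target = Vector3.Zero; // the sphere's centre

        public void Rotate(Vector3 axis, float degrees)
        {
            Quaternion delta = Quaternion.CreateFromAxisAngle(axis, MathHelper.ToRadians(degrees));
            // Pre-multiplying is intended to turn about the world axis; swap the
            // operands if the rotation accumulates about the camera's local axis.
            orientation = Quaternion.Normalize(delta * orientation);
        }

        public Matrix GetView()
        {
            // Derive eye position and up vector from the quaternion every frame.
            Vector3 forward = Vector3.Transform(Vector3.Forward, orientation);
            Vector3 up = Vector3.Transform(Vector3.Up, orientation);
            Vector3 position = target - forward * distance;
            return Matrix.CreateLookAt(position, target, up);
        }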

    Read the article

  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using a VBO and an orthographic projection. My results: (screenshot omitted) What I expect: a 3x3 rectangle in the top left corner.

        #include <stdio.h>
        #include <GL\glew.h>
        #include <GLFW\glfw3.h>
        #include "lodepng.h"

        static const int FALSE = 0;
        static const int TRUE = 1;

        static const char* VERT_SHADER =
            "#version 330\n"
            "layout(location=0) in vec4 VertexPosition; "
            "layout(location=1) in vec2 UV;"
            "uniform mat4 uProjectionMatrix;"
            /*"out vec2 TexCoords;"*/
            "void main(void) {"
            "    gl_Position = uProjectionMatrix*VertexPosition;"
            /*"    TexCoords = UV;"*/
            "}";

        static const char* FRAG_SHADER =
            "#version 330\n"
            /*"uniform sampler2D uDiffuseTexture;"
            "uniform vec4 uColor;"
            "in vec2 TexCoords;"*/
            "out vec4 FragColor;"
            "void main(void) {"
            /*"    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"
            "    if(texel.a <= 0) {"
            "        discard;"
            "    }"
            "    FragColor = texel;"*/
            "    FragColor = vec4(1.f);"
            "}";

        static int g_running;
        static GLFWwindow *gl_window;
        static float gl_projectionMatrix[16];

        /* Structures */
        typedef struct _Vertex {
            float x, y, z, w;
            float u, v;
        } Vertex;

        typedef struct _Position {
            float x, y;
        } Position;

        typedef struct _Bitmap {
            unsigned char *pixels;
            unsigned int width, height;
        } Bitmap;

        typedef struct _Texture {
            GLuint id;
            unsigned int width, height;
        } Texture;

        typedef struct _VertexBuffer {
            GLuint bufferObj, vertexArray;
        } VertexBuffer;

        typedef struct _ShaderProgram {
            GLuint vertexShader, fragmentShader, program;
        } ShaderProgram;

        /* http://en.wikipedia.org/wiki/Orthographic_projection */
        void createOrthoProjection(float *projection, float width, float height, float far, float near)
        {
            const float left = 0;
            const float right = width;
            const float top = 0;
            const float bottom = height;

            projection[0]  = 2.f / (right - left);
            projection[1]  = 0.f;
            projection[2]  = 0.f;
            projection[3]  = -(right + left) / (right - left);
            projection[4]  = 0.f;
            projection[5]  = 2.f / (top - bottom);
            projection[6]  = 0.f;
            projection[7]  = -(top + bottom) / (top - bottom);
            projection[8]  = 0.f;
            projection[9]  = 0.f;
            projection[10] = -2.f / (far - near);
            projection[11] = (far + near) / (far - near);
            projection[12] = 0.f;
            projection[13] = 0.f;
            projection[14] = 0.f;
            projection[15] = 1.f;
        }

        /* Textures */
        void loadBitmap(const char *filename, Bitmap *bitmap, int *success)
        {
            int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);
            if (error != 0) {
                printf("Failed to load bitmap.\n");
                printf(lodepng_error_text(error));
                success = FALSE;
                return;
            }
        }

        void destroyBitmap(Bitmap *bitmap)
        {
            free(bitmap->pixels);
        }

        void createTexture(Texture *texture, const Bitmap *bitmap)
        {
            texture->id = 0;
            glGenTextures(1, &texture->id);
            glBindTexture(GL_TEXTURE_2D, texture);

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);

            glBindTexture(GL_TEXTURE_2D, 0);
        }

        void destroyTexture(Texture *texture)
        {
            glDeleteTextures(1, &texture->id);
            texture->id = 0;
        }

        /* Vertex Buffer */
        void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices)
        {
            glGenBuffers(1, &vertexBuffer->bufferObj);
            glGenVertexArrays(1, &vertexBuffer->vertexArray);
            glBindVertexArray(vertexBuffer->vertexArray);

            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);

            const unsigned int uvOffset = sizeof(float) * 4;

            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);

            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);
        }

        void destroyVertexBuffer(VertexBuffer *vertexBuffer)
        {
            glDeleteBuffers(1, &vertexBuffer->bufferObj);
            glDeleteVertexArrays(1, &vertexBuffer->vertexArray);
        }

        void bindVertexBuffer(VertexBuffer *vertexBuffer)
        {
            glBindVertexArray(vertexBuffer->vertexArray);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);
        }

        void drawVertexBufferMode(GLenum mode)
        {
            glDrawArrays(mode, 0, 6);
        }

        void drawVertexBuffer()
        {
            drawVertexBufferMode(GL_TRIANGLES);
        }

        void unbindVertexBuffer()
        {
            glBindVertexArray(0);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        /* Shaders */
        void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc)
        {
            GLenum err;
            shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);
            shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

            if (shaderProgram->vertexShader == 0) {
                printf("Failed to create vertex shader.");
                return;
            }

            if (shaderProgram->fragmentShader == 0) {
                printf("Failed to create fragment shader.");
                return;
            }

            glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);
            glCompileShader(shaderProgram->vertexShader);
            glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);
            if (err != GL_TRUE) {
                printf("Failed to compile vertex shader.");
                return;
            }

            glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);
            glCompileShader(shaderProgram->fragmentShader);
            glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);
            if (err != GL_TRUE) {
                printf("Failed to compile fragment shader.");
                return;
            }

            shaderProgram->program = glCreateProgram();
            glAttachShader(shaderProgram->program, shaderProgram->vertexShader);
            glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);
            glLinkProgram(shaderProgram->program);

            glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);
            if (err != GL_TRUE) {
                printf("Failed to link shader.");
                return;
            }
        }

        void destroyShader(ShaderProgram *shaderProgram)
        {
            glDetachShader(shaderProgram->program, shaderProgram->vertexShader);
            glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);

            glDeleteShader(shaderProgram->vertexShader);
            glDeleteShader(shaderProgram->fragmentShader);

            glDeleteProgram(shaderProgram->program);
        }

        GLuint getUniformLocation(const char *name, ShaderProgram *program)
        {
            GLuint result = 0;
            result = glGetUniformLocation(program->program, name);
            return result;
        }

        void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program)
        {
            GLuint loc = getUniformLocation(name, program);
            if (loc == -1) {
                printf("Failed to get uniform location in setUniformMatrix.\n");
                return;
            }
            glUniformMatrix4fv(loc, 1, GL_FALSE, matrix);
        }

        /* General functions */
        static int isRunning()
        {
            return g_running && !glfwWindowShouldClose(gl_window);
        }

        static void initializeGLFW(GLFWwindow **window, int width, int height, int *success)
        {
            if (!glfwInit()) {
                printf("Failed it inialize GLFW.");
                *success = FALSE;
                return;
            }

            glfwWindowHint(GLFW_RESIZABLE, 0);
            *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);
            if (!*window) {
                printf("Failed to create window.");
                glfwTerminate();
                *success = FALSE;
                return;
            }

            glfwMakeContextCurrent(*window);

            GLenum glewErr = glewInit();
            if (glewErr != GLEW_OK) {
                printf("Failed to initialize GLEW.");
                printf(glewGetErrorString(glewErr));
                *success = FALSE;
                return;
            }

            glClearColor(0.f, 0.f, 0.f, 1.f);
            glViewport(0, 0, width, height);
            *success = TRUE;
        }

        int main(int argc, char **argv)
        {
            int err = FALSE;
            initializeGLFW(&gl_window, 480, 320, &err);
            glDisable(GL_DEPTH_TEST);
            if (err == FALSE) {
                return 1;
            }

            createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);

            g_running = TRUE;

            ShaderProgram shader;
            compileShader(&shader, VERT_SHADER, FRAG_SHADER);
            glUseProgram(shader.program);
            setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);

            Vertex rectangle[6];
            VertexBuffer vbo;
            rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left
            rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right
            rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left
            rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left
            rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left
            rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right

            createVertexBuffer(&vbo, &rectangle);

            bindVertexBuffer(&vbo);

            while (isRunning()) {
                glClear(GL_COLOR_BUFFER_BIT);
                glfwPollEvents();

                drawVertexBuffer();

                glfwSwapBuffers(gl_window);
            }

            unbindVertexBuffer(&vbo);

            glUseProgram(0);
            destroyShader(&shader);
            destroyVertexBuffer(&vbo);
            glfwTerminate();
            return 0;
        }

    Read the article

  • How do people get around the Carmack's Reverse patent?

    - by Rei Miyasaka
    Apparently, Creative has a patent on Carmack's Reverse, and they successfully forced Id to modify their techniques for the source drop, as well as to include EAX in Doom 3. But Carmack's Reverse is discussed quite often and apparently it's a good choice for deferred shading, so it's presumably used in a lot of other high-budget productions too. Even though it's unlikely that Creative would go after smaller companies, I'm wondering how the bigger studios get around this problem. Do they just cross their fingers and hope Creative doesn't troll them, or do they just not use Carmack's Reverse at all?

    Read the article

  • Where should I place my reaction code in Per-Pixel Collision Detection?

    - by CJ Cohorst
    I have this collision detection code:

        public bool PerPixelCollision(Player player, Game1 dog)
        {
            Matrix atob = player.Transform * Matrix.Invert(dog.Transform);
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, atob);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, atob);
            Vector2 iBPos = Vector2.Transform(Vector2.Zero, atob);

            for (int deltax = 0; deltax < player.playerTexture.Width; deltax++)
            {
                Vector2 bpos = iBPos;
                for (int deltay = 0; deltay < player.playerTexture.Height; deltay++)
                {
                    int bx = (int)bpos.X;
                    int by = (int)bpos.Y;
                    if (bx >= 0 && bx < dog.dogTexture.Width && by >= 0 && by < dog.dogTexture.Height)
                    {
                        if (player.TextureData[deltax + deltay * player.playerTexture.Width].A > 150
                            && dog.TextureData[bx + by * dog.Texture.Width].A > 150)
                        {
                            return true;
                        }
                    }
                    bpos += stepY;
                }
                iBPos += stepX;
            }
            return false;
        }

    What I want to know is where to put the code that makes something happen. For example, I want to put in player.playerPosition.X -= 200 just as a test, but I don't know where to put it. I tried putting it under the return true and above it; under it, the compiler said unreachable code, and above it nothing happened. I also tried putting it next to bpos += stepY;, but that didn't work either. Where do I put the code?
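
    For what it's worth, the detection method is usually kept side-effect free and the reaction lives in the caller; a sketch of a hypothetical Update override in Game1:

        protected override void Update(GameTime gameTime)
        {
            // React here, not inside PerPixelCollision: the test stays reusable
            // and is never flagged as unreachable code after the return.
            if (PerPixelCollision(player, dog))
            {
                player.playerPosition.X -= 200;
            }
            base.Update(gameTime);
        }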

    Read the article

  • Why does Farseer 2.x store temporaries as members and not on the stack? (.NET)

    - by Andrew Russell
    UPDATE: This question refers to Farseer 2.x. The newer 3.x doesn't seem to do this. I'm using the Farseer Physics Engine quite extensively at the moment, and I've noticed that it seems to store a lot of temporary value types as members of the class, and not on the stack as one might expect. Here is an example from the Body class:

        private Vector2 _worldPositionTemp = Vector2.Zero;
        private Matrix _bodyMatrixTemp = Matrix.Identity;
        private Matrix _rotationMatrixTemp = Matrix.Identity;
        private Matrix _translationMatrixTemp = Matrix.Identity;

        public void GetBodyMatrix(out Matrix bodyMatrix)
        {
            Matrix.CreateTranslation(position.X, position.Y, 0, out _translationMatrixTemp);
            Matrix.CreateRotationZ(rotation, out _rotationMatrixTemp);
            Matrix.Multiply(ref _rotationMatrixTemp, ref _translationMatrixTemp, out bodyMatrix);
        }

        public Vector2 GetWorldPosition(Vector2 localPosition)
        {
            GetBodyMatrix(out _bodyMatrixTemp);
            Vector2.Transform(ref localPosition, ref _bodyMatrixTemp, out _worldPositionTemp);
            return _worldPositionTemp;
        }

    It looks like it's a by-hand performance optimisation. But I don't see how this could possibly help performance? (If anything, I think it would hurt by making objects much larger.)

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's TiledLib pipeline for reading and drawing .tmx maps in XNA, the transparency color I set in Tiled's editor works in the editor, but when I draw the map, the color that's supposed to become transparent still draws. The closest things to a solution that I've found are:
    1) Changing my sprite batch's BlendState to NonPremultiplied (found this in a buried tweet).
    2) Getting the pixels that are supposed to be transparent at some point, then setting them all to transparent.
    Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it as the color key for transparency; just something is going wrong somewhere. At least that's what it looks like the code is doing.

    TileSetContent.cs:

        if (imageNode.Attributes["trans"] != null)
        {
            string color = imageNode.Attributes["trans"].Value;
            string r = color.Substring(0, 2);
            string g = color.Substring(2, 2);
            string b = color.Substring(4, 2);
            this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16));
        }
        ...

    TiledHelpers.cs:

        // build the asset as an external reference
        OpaqueDataDictionary data = new OpaqueDataDictionary();
        data.Add("GenerateMipMaps", false);
        data.Add("ResizetoPowerOfTwo", false);
        data.Add("TextureFormat", TextureProcessorOutputFormat.Color);
        data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue);
        data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta);
        tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>(
            new ExternalReference<Texture2DContent>(path), null, data, null, asset);
        ...

    I can share more code as well if it helps to understand my problem. Thank you.

    Read the article

  • Thread safe double buffering

    - by kdavis8
    I am trying to implement a draw-map method that will draw a tiled image across the surface of the component. I'm having an issue with this code: the double buffering does not seem to be working, because the sprite flickers like crazy. My source code:

        package myPackage;

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Toolkit;
        import java.awt.image.BufferStrategy;
        import java.awt.image.BufferedImage;
        import javax.swing.JFrame;

        public class GameView extends JFrame implements Runnable {
            public BufferedImage backbuffer;
            public Graphics2D g2d;
            public Image img;
            Thread gameloop;
            Scene scene;

            public GameView() {
                super("Game View");
                setSize(600, 600);
                setVisible(true);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                backbuffer = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
                g2d = backbuffer.createGraphics();
                Toolkit tk = Toolkit.getDefaultToolkit();
                img = tk.getImage(this.getClass().getResource("cage.png"));
                scene = new Scene(g2d, this);
                gameloop = new Thread(this);
                gameloop.start();
            }

            public static void main(String args[]) {
                new GameView();
            }

            public void paint(Graphics g) {
                g.drawImage(backbuffer, 0, 0, this);
                repaint();
            }

            @Override
            public void run() {
                Thread t = Thread.currentThread();
                while (t == gameloop) {
                    scene.getScene("dirtmap");
                    g2d.drawImage(img, 80, 80, this);
                }
            }

            private void drawScene(String string) {
                // g2d.setColor(Color.white);
                // g2d.fillRect(0, 0, getWidth(), getHeight());
                scene.getScene(string);
            }
        }

        package myPackage;

        import java.awt.Color;
        import java.awt.Component;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Toolkit;

        public class Scene {
            Graphics g2d;
            Component c;
            boolean loaded = false;

            public Scene(Graphics2D gr, Component co) {
                g2d = gr;
                c = co;
            }

            public void getScene(String mapName) {
                Toolkit tk = Toolkit.getDefaultToolkit();
                Image tile = tk.getImage(this.getClass().getResource("dirt.png"));
                // g2d.setColor(Color.red);
                for (int y = 0; y <= 18; y++) {
                    for (int x = 0; x <= 18; x += 1) {
                        g2d.drawImage(tile, x * 32, y * 32, c);
                    }
                }
                loaded = true;
            }
        }

    Read the article

  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side-scrolling game that has a number of timed puzzle elements. As a simple example: I have a number of moving platforms that transition in a pattern. Ideally, I'd like to ensure that as the player first approaches them, they are in an ideal state, whereby the player can witness the full transition, and more experienced players (i.e. speedrunners) can complete the puzzle immediately without having to wait for the current transition to complete. The issue, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to correctly calculate when the player is likely to stumble upon them. I've done a fair bit of Googling but haven't managed to turn up any decent resources on solving a problem like this. The obvious solution is to only begin updating the objects when the player (or more likely: the camera) first encounters them, but this becomes difficult in more complicated situations. It seems like potentially the easiest way of handling this is to have an invisible trigger volume that tells any puzzle elements located inside of it that the player has 'arrived' upon first colliding with the player, as sketched below. But this would mean I'd have to logically group puzzle elements, which could become fairly messy in a hurry. Take, for instance, a puzzle that appears to the right of the screen. It may take the player a number of seconds to reach it. It would look strange if the elements involved were to remain stationary, but by the time the player arrives, it's likely things will be 'out of sync'. Does anyone know of, or has anyone implemented, a decent solution to this problem?
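
    A sketch of the trigger-volume idea (hypothetical types; assumes XNA's Rectangle): each group idles until the camera first touches its trigger, then all members share one clock, so they start in their ideal phase no matter when the player arrives.

        class PuzzleGroup
        {
            public Rectangle Trigger;                 // hand-placed around the grouped elements
            public List<IPuzzleElement> Elements = new List<IPuzzleElement>();
            float localTime = -1f;                    // negative = not started yet

            public void Update(Rectangle cameraBounds, float dt)
            {
                if (localTime < 0f && cameraBounds.Intersects(Trigger))
                    localTime = 0f;                   // every member starts at phase zero
                if (localTime >= 0f)
                {
                    localTime += dt;
                    foreach (IPuzzleElement e in Elements)
                        e.Animate(localTime);
                }
            }
        }

        interface IPuzzleElement { void Animate(float t); }

    Inflating the trigger beyond the screen edge lets elements animate a little before they scroll into view, which hides the "frozen until seen" artefact described above.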

    Read the article

  • How to make a battle system in a mobile indie game more fun and engaging

    - by Matt Beckman
    I'm developing an indie game for mobile platforms, and part of the game involves a PvP battle system (where the target player is passive). My vision is simple: the active player can select a weapon/item, then attack/use, and display the calculated outcome. I have a concept for battle modifiers that affect stats to make it more interesting, but I'm not convinced the vision is complete. I've received some inspiration from the game engine that powers Modern War/Kingdom Age/Crime City, but I want more control to make it more fun. In those games, you don't have the option to select weapons or use items, and the "battling" screen is simply 3D eye candy. Since this will be an indie game, I won't be spending $$$ on a team of professional 3D artists/animators, so my edge needs to be different. How would you make a battle system like this more fun and engaging?

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders, and I'm facing some problems. Below you can see an example of my rendered scene: (screenshot omitted) The process of shadow mapping I'm following is: I render the scene to the framebuffer using a view matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader:

        #version 150 core
        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void){
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core
        struct Light{
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position){
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if(Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void){
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if(diffuse != 0 && shadowFactor > 0.5){
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing: as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(factor, units)), but it didn't change much. As you can see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front-face culling when I render to the framebuffer; that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices; should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial. Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison with and without shadow mapping of a close-up of the crate: (screenshots omitted) The self-shadowing is more visible.

    Read the article

  • Calculating adjacent quads on a quad sphere

    - by Caius Eugene
    I've been experimenting with generating a quad sphere that subdivides into a quadtree structure. Eventually I'm going to apply some simplex noise to the vertices of each face to create a terrain-like surface. To solve the issue of cracks, I want to apply a geomipmapping technique of triangle fanning on the edges of each quad, but in order to know the subdivision level of the adjacent quads I need to identify which quads are adjacent to each other. Does anyone know any approaches to computing and storing these adjacent quads for quick lookup? It's also important that I know which direction each neighbour is in, so I can easily adjust the correct edge.
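
    One workable approach is the classic quadtree neighbour-finding walk: climb toward the root until the node can step across, then mirror the path back down. A sketch for the north direction within a single face (hypothetical node type; face seams need a separate edge map):

        class QuadNode
        {
            public QuadNode Parent;
            public QuadNode[] Children; // null for leaves; order: 0=NW, 1=NE, 2=SW, 3=SE
            public int ChildIndex;      // this node's slot in Parent.Children
        }

        // Returns the neighbour of equal or greater size, so comparing depths
        // tells you which edge needs the triangle fan.
        static QuadNode NorthNeighbor(QuadNode node)
        {
            if (node.Parent == null) return null;                      // crossed the face boundary
            if (node.ChildIndex == 2) return node.Parent.Children[0];  // SW -> NW sibling
            if (node.ChildIndex == 3) return node.Parent.Children[1];  // SE -> NE sibling
            QuadNode n = NorthNeighbor(node.Parent);                   // NW/NE: ask the parent
            if (n == null || n.Children == null) return n;             // larger or missing neighbour
            // Mirror back down into the child touching our shared edge.
            return node.ChildIndex == 0 ? n.Children[2] : n.Children[3];
        }

    The other three directions are symmetric, and the lookup is O(tree depth) with no storage beyond the parent pointers.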

    Read the article

  • *DX11, HLSL* - Colour as 4 floats or one UINT

    - by Paul
    With the DX11 pipeline, would it be much quicker for the vertex buffer to pass one single UINT with one byte per channel to the input assembler, as opposed to four floats? Then the vertex shader would convert the four bytes to four floats, which I guess is the required colour format for the pipeline. In this instance, colour accuracy isn't an issue. The vertex buffer would need to be updated many times per frame, so using a single UINT and saving 12 bytes for every vertex could well be worth it: quicker uploads to VRAM and also less memory used. But the cost is the extra shader work for every vertex to convert each 8 bits of the input UINT into a float. Anyone have an idea if it might be worth doing? Or is it possible for the pipeline to be set to just internally use a four-byte colour format? The swap chain buffer has been initialised as DXGI_FORMAT_R8G8B8A8_UNORM, so ultimately that's how the colour will be written. Thanks!
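
    For reference, the input assembler can do the unpack for free: declare the vertex element with format DXGI_FORMAT_R8G8B8A8_UNORM and the shader receives a normalized float4, with no manual conversion code. The CPU side then only needs a packing helper; a sketch (hypothetical helper, R in the low byte to match that format):

        static uint PackColor(float r, float g, float b, float a)
        {
            // Clamp each channel to [0, 1], scale to a byte, and pack little-endian RGBA.
            uint ri = (uint)(Math.Min(Math.Max(r, 0f), 1f) * 255f + 0.5f);
            uint gi = (uint)(Math.Min(Math.Max(g, 0f), 1f) * 255f + 0.5f);
            uint bi = (uint)(Math.Min(Math.Max(b, 0f), 1f) * 255f + 0.5f);
            uint ai = (uint)(Math.Min(Math.Max(a, 0f), 1f) * 255f + 0.5f);
            return ri | (gi << 8) | (bi << 16) | (ai << 24);
        }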

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    Hi. I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite, I think, would be the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading code, and have one patch of noise generate each region. But this would result in just huge amounts of islands. At the other extreme, I don't think I can really generate one supermassive sheet of Perlin noise, and it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking, and you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D -TheCommieDuck
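
    The island problem disappears if the noise is a pure function of world coordinates: chunks sampled at their true world position line up seamlessly, so the world is as "infinite" as the coordinate range. A sketch with simple hash-based value noise standing in for Perlin (all parameters hypothetical):

        // Sketch: fractal value noise sampled per chunk at world coordinates.
        static float[,] GenerateChunk(int chunkX, int chunkY, int size, int seed)
        {
            var heights = new float[size, size];
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                {
                    float wx = chunkX * size + x, wy = chunkY * size + y;
                    float h = 0f, amp = 1f, freq = 1f / 256f;
                    for (int octave = 0; octave < 5; octave++) // fractal sum of octaves
                    {
                        h += amp * ValueNoise(wx * freq, wy * freq, seed);
                        amp *= 0.5f; freq *= 2f;
                    }
                    heights[x, y] = h;
                }
            return heights;
        }

        static float ValueNoise(float x, float y, int seed)
        {
            int x0 = (int)Math.Floor(x), y0 = (int)Math.Floor(y);
            float fx = x - x0, fy = y - y0;
            float Smooth(float a, float b, float t) => a + (b - a) * t * t * (3 - 2 * t);
            float Hash(int xi, int yi)
            {
                unchecked
                {
                    int h = xi * 374761393 + yi * 668265263 + seed * 144664589;
                    h = (h ^ (h >> 13)) * 1274126177;
                    return ((h ^ (h >> 16)) & 0x7fffffff) / (float)int.MaxValue;
                }
            }
            return Smooth(Smooth(Hash(x0, y0), Hash(x0 + 1, y0), fx),
                          Smooth(Hash(x0, y0 + 1), Hash(x0 + 1, y0 + 1), fx), fy);
        }

    Very low frequencies then act as a continent/ocean mask and mid frequencies as hills, so the output is neither all islands nor one big island.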

    Read the article

  • AI Game Programming: Bayesian networks, how to make them efficient?

    - by Mahbubur R Aaman
    We know that AI is one of the most important parts of game programming, and Bayesian networks are one of the core parts of AI in game programming. Bayesian networks are graphs that compactly represent the relationship between random variables for a given problem. These graphs aid in performing reasoning or decision making in the face of uncertainty. Here I am utilizing the Monte Carlo method and genetic algorithms, but this takes too much time and sometimes crashes due to memory. Is there any way to implement this efficiently?

    Read the article

  • Level Design vs. Modeler

    - by Ecurbed
    From what I understand, being a level designer and being a character/environment/object/etc. modeler are two different jobs, yet sometimes it feels like a modeler can also do the job of a level designer. I know this also depends on the scale of the game: for small games maybe they are one and the same, but for bigger games they become two different jobs. I understand that a background in some modeling couldn't hurt when it comes to level design, but the question I have is: do employers prefer level designers who can also model? That way they can kill two birds with one stone and have someone to both create the assets and design the level. What is your opinion of the training? Does level design involve skill sets that make it completely different from what a modeler does, or is it an easy transition for a modeler to become a level designer? Can you be a bad level designer but a good modeler, and vice versa?

    Read the article

  • What is the kd tree intersection logic?

    - by bobobobo
    I'm trying to figure out how to implement a kd-tree. On page 322 of Real-Time Collision Detection by Ericson (the text is included below in case Google Books preview doesn't let you see it when you click the link), the relevant section reads: "The basic idea behind intersecting a ray or directed line segment with a k-d tree is straightforward. The line is intersected against the node's splitting plane, and the t value of intersection is computed. If t is within the interval of the line, 0 <= t <= tmax, the line straddles the plane and both children of the tree are recursively descended. If not, only the side containing the segment origin is recursively visited." So here's what I have (scene diagram and logical-tree figure omitted; open the images in a new tab if you can't see the lettering). The orange ray is going through the 3D scene, and the x's represent intersections with a plane. From the LEFT, the ray hits: the front face of the scene's enclosing cube, the (1) splitting plane, the (2.2) splitting plane, and the right side of the scene's enclosing cube. But here's what would happen, naively following Ericson's basic description above: Test against splitting plane (1). The ray hits splitting plane (1), so the left and right children of splitting plane (1) are included in the next test. Test against splitting plane (2.1). The ray actually hits that plane (way off to the right), so both children are included in the next level of tests. (This is counter-intuitive: shouldn't only the bottom node be included in subsequent tests?) Can someone describe what happens when the orange ray goes through the scene correctly?
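
    The detail that resolves this: as the traversal descends, the segment's [tMin, tMax] interval is clipped to the current node's cell, so the crossing with plane (2.1) "way off to the right" falls outside the clipped interval and only the near child is visited. A minimal sketch (KdNode, Ray, and IntersectContents are hypothetical):

        class KdNode
        {
            public bool IsLeaf;
            public int Axis;           // 0 = x, 1 = y, 2 = z
            public float SplitValue;
            public KdNode Left, Right; // leaf contents omitted
        }

        class Ray { public float[] Origin = new float[3], Direction = new float[3]; }

        static bool Traverse(KdNode node, Ray ray, float tMin, float tMax)
        {
            if (node.IsLeaf) return IntersectContents(node, ray, tMin, tMax); // assumed helper

            float o = ray.Origin[node.Axis];
            float d = ray.Direction[node.Axis];
            KdNode near = o < node.SplitValue ? node.Left : node.Right;
            KdNode far = (near == node.Left) ? node.Right : node.Left;

            if (Math.Abs(d) < 1e-8f)                    // parallel: stays on the origin's side
                return Traverse(near, ray, tMin, tMax);

            float t = (node.SplitValue - o) / d;        // where the ray crosses the plane
            if (t >= tMax || t <= 0f)                   // crossing beyond the interval or behind
                return Traverse(near, ray, tMin, tMax); // the origin: near side only
            if (t <= tMin)                              // already crossed before tMin:
                return Traverse(far, ray, tMin, tMax);  // far side only
            return Traverse(near, ray, tMin, t)         // straddles: near first,
                || Traverse(far, ray, t, tMax);         // then far with the clipped range
        }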

    Read the article

  • Bodies do not stay stuck together by a joint on retina display

    - by Mike JM
    I'm experimenting with Box2D revolute joints. Everything's going pretty well except for one thing. For some reason, bodies joined together with revolute joints do not stay stuck together; they start drifting apart from each other from app start when I run it on a retina device or the simulator. On a non-retina device it works just fine, as expected. Here's a screenshot of the non-retina version: (screenshot omitted) And here's the behavior when I run the same app on a retina device/simulator: (screenshot omitted) I'm taking the content scale factor into account.

    Read the article

  • "Marching cubes" voxel terrain - triplanar texturing with depth?

    - by Dan the Man
    I am currently working on a voxel terrain that uses the marching cubes algorithm for polygonizing the scalar field of voxels, and a triplanar texturing shader for texturing. Say I have a grass texture set to the Y axis and a dirt texture for both the X and Z axes. Now, when my player digs downwards, the dug surface still appears as grass. How would I make it appear as dirt? I have been thinking about this for a while, and the only thing I can think of to achieve this effect would be to mark vertices that have been dug with a certain vertex color. When a vertex has that color, the shader would apply the dirt texture to it. Is there a better method?

    Read the article

  • How do I start game programming in Windows Phone XNA?

    - by Ankit Rathod
    Hello, I am very much interested in game programming in XNA. However, during my college days I did not take physics or maths. Does that mean I can't create games in XNA? I just know the basics of trigonometry. Can you all point me to a few links where I can learn XNA as well as the basic maths that is bound to be required in most games? Are all game programmers excellent in maths and physics? Thanks in advance :)

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model-loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code again and again, but I probably keep missing the obvious reason why this is happening. Here is an image of my game: (screenshot omitted) It's simply a 6-sided cube, but it's off, big time! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;
            glTranslatef(0, 0, 0);

            // rotate it around the y axis
            glRotatef(angle, 0.f, 0.f, 1.f);
            glColor4f(1, 1, 1, 1);

            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x - scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y, tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z, tmp);
            tmp = (1.f / tmp);
            glScalef(tmp / 5, tmp / 5, tmp / 5);

            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if (scene_list == 0)
            {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }
            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;

            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n)
            {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }

                for (t = 0; t < mesh->mNumFaces; ++t)
                {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;
                    switch (face->mNumIndices)
                    {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++)
                    {
                        int index = face->mIndices[i];
                        if (mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if (mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n)
            {
                recursive_render(sc, nd->mChildren[n]);
            }
            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article
