Search Results

Search found 12182 results on 488 pages for 'game boy'.

Page 245 of 488

  • Bodies do not stay stuck together by joint on retina display

    - by Mike JM
    I'm experimenting with Box2D revolute joints. Everything's going pretty well except for one thing. For some reason, bodies joined together with revolute joints do not stay stuck together; they start drifting apart from the moment the app starts when I run it on a retina device or simulator. On a non-retina device it works just fine, as expected. Here's the screenshot of the non-retina version, and here's the behavior when I run the same app on a retina device/simulator. I'm taking the content scale factor into account.


  • Why does Farseer 2.x store temporaries as members and not on the stack? (.NET)

    - by Andrew Russell
    UPDATE: This question refers to Farseer 2.x. The newer 3.x doesn't seem to do this. I'm using the Farseer Physics Engine quite extensively at the moment, and I've noticed that it seems to store a lot of temporary value types as members of the class, and not on the stack as one might expect. Here is an example from the Body class:

        private Vector2 _worldPositionTemp = Vector2.Zero;
        private Matrix _bodyMatrixTemp = Matrix.Identity;
        private Matrix _rotationMatrixTemp = Matrix.Identity;
        private Matrix _translationMatrixTemp = Matrix.Identity;

        public void GetBodyMatrix(out Matrix bodyMatrix)
        {
            Matrix.CreateTranslation(position.X, position.Y, 0, out _translationMatrixTemp);
            Matrix.CreateRotationZ(rotation, out _rotationMatrixTemp);
            Matrix.Multiply(ref _rotationMatrixTemp, ref _translationMatrixTemp, out bodyMatrix);
        }

        public Vector2 GetWorldPosition(Vector2 localPosition)
        {
            GetBodyMatrix(out _bodyMatrixTemp);
            Vector2.Transform(ref localPosition, ref _bodyMatrixTemp, out _worldPositionTemp);
            return _worldPositionTemp;
        }

    It looks like it's a by-hand performance optimisation, but I don't see how this could possibly help performance. (If anything, I think it would hurt by making objects much larger.)
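
    For comparison, here is a minimal sketch of the same method written with stack locals (it reuses the poster's position and rotation fields and is an illustration, not Farseer's actual code). Since Vector2 and Matrix are XNA value types, locals of these types live on the stack and create no garbage, so the member-field version saves no allocations while making every Body instance larger:

        public Vector2 GetWorldPosition(Vector2 localPosition)
        {
            // Locals of value types (Vector2, Matrix) are stack-allocated - no GC pressure.
            Matrix translationM, rotationM, bodyM;
            Matrix.CreateTranslation(position.X, position.Y, 0, out translationM);
            Matrix.CreateRotationZ(rotation, out rotationM);
            Matrix.Multiply(ref rotationM, ref translationM, out bodyM);

            Vector2 world;
            Vector2.Transform(ref localPosition, ref bodyM, out world);
            return world;
        }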


  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
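
    One workflow that comes up often (a sketch of an assumed pipeline, not an authoritative answer): let lighting artists pick a normal LDR colour with a standard picker plus a separate intensity multiplier, and combine the two in linear space before lighting. A small C# sketch of that conversion, where the 2.2 gamma approximation and the meaning of the multiplier are assumptions:

        // Hypothetical helper: an artist-picked 8-bit sRGB channel plus an intensity
        // multiplier becomes one channel of a linear-space HDR light colour.
        static float ToLinearHdr(byte channel, float intensity)
        {
            float srgb = channel / 255f;
            return (float)Math.Pow(srgb, 2.2) * intensity; // approximate sRGB -> linear, then scale
        }

    With a scheme like this, the "correct" multiplier becomes something the artist judges against a reference exposure, since only the product of colour and intensity reaches the renderer.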


  • Where should I place my reaction code in Per-Pixel Collision Detection?

    - by CJ Cohorst
    I have this collision detection code: public bool PerPixelCollision(Player player, Game1 dog) { Matrix atob = player.Transform * Matrix.Invert(dog.Transform); Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, atob); Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, atob); Vector2 iBPos = Vector2.Transform(Vector2.Zero, atob); for(int deltax = 0; deltax < player.playerTexture.Width; deltax++) { Vector2 bpos = iBPos; for (int deltay = 0; deltay < player.playerTexture.Height; deltay++) { int bx = (int)bpos.X; int by = (int)bpos.Y; if (bx >= 0 && bx < dog.dogTexture.Width && by >= 0 && by < dog.dogTexture.Height) { if (player.TextureData[deltax + deltay * player.playerTexture.Width].A > 150 && dog.TextureData[bx + by * dog.Texture.Width].A > 150) { return true; } } bpos += stepY; } iBPos += stepX; } return false; } What I want to know is where to put in the code where something happens. For example, I want to put in player.playerPosition.X -= 200 just as a test, but I don't know where to put it. I tried putting it under the return true and above it, but under it, it said unreachable code, and above it nothing happened. I also tried putting it by bpos += stepY; but that didn't work either. Where do I put the code?
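
    One common arrangement (a sketch that assumes player and dog fields on the Game class, as in the question): keep PerPixelCollision purely as a query and put the reaction in the caller, usually Update. Code placed after return true is unreachable, and code placed before it would run once per overlapping pixel rather than once per collision:

        protected override void Update(GameTime gameTime)
        {
            // ...input handling and movement happen first...

            // Query the collision once per frame and react here, in the caller.
            if (PerPixelCollision(player, dog))
            {
                player.playerPosition.X -= 200; // the test reaction from the question
            }

            base.Update(gameTime);
        }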


  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cubish voxels (like minecraft) But then I read about marching cubes and decided to convert to using those. I managed to create a working marching cubes class and cycle through the densities and everything in it seemed to be working so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk": public MarchingCube[, ,] getTerrainChunk(int size, float dMultiplyer, int stepsize) { MarchingCube[, ,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize]; for (int x = 0; x < size; x += stepsize) { for (int y = 0; y <size; y += stepsize) { for (int z = 0; z < size; z += stepsize) { float[] densities = {(float)terrain.GetValue(x, y, z)*dMultiplyer, (float)terrain.GetValue(x, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y, z)*dMultiplyer, (float)terrain.GetValue(x,y,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y,z+stepsize)*dMultiplyer }; Vector3[] corners = { new Vector3(x,y,z), new Vector3(x,y+stepsize,z),new Vector3(x+stepsize,y+stepsize,z),new Vector3(x+stepsize,y,z), new Vector3(x,y,z+stepsize), new Vector3(x,y+stepsize,z+stepsize), new Vector3(x+stepsize,y+stepsize,z+stepsize), new Vector3(x+stepsize,y,z+stepsize)}; if (x == 0 && y == 0 && z == 0) { temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device); } temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners); } } } (terrain is a Perlin Noise generated using libnoise) I'm sure there's probably an easy solution to this but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise or if I may be generating the noise wrong? Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong does anyone know the right way? the answers on google were somewhat ambiguous but I'm going to keep searching. Thanks in advance!
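
    One thing worth checking (a guess at the symptom, not a certain diagnosis): raw 3D Perlin noise is roughly zero-mean everywhere, so the isosurface crosses nearly every cell and you get disconnected triangles all over the chunk. Terrain generators usually bias the density by height so the field is solid below ground and empty above it. A hedged sketch of such a density function, with hypothetical names, and the sign may need flipping depending on which side your MarchingCube class treats as solid:

        // Hypothetical density function: bias the noise by height so the field is
        // positive (solid) below groundLevel and negative (air) above it.
        float Density(int x, int y, int z, float dMultiplyer, float groundLevel)
        {
            float noise = (float)terrain.GetValue(x, y, z) * dMultiplyer;
            return noise + (groundLevel - y);
        }

    Unrelated to the noise, note that the "if (x == 0 && y == 0 && z == 0)" branch builds a MarchingCube with the device and is then immediately overwritten by the assignment that follows it, which may not be intended.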


  • Parabolic throw with set height and range (libgdx)

    - by Tauboga
    Currently I'm working on a minigame for Android where you have a rotating ball in the center of the display which jumps, when touched, in the direction of its current angle. I'm simply using a gravity vector and a velocity vector in this way:

        positionBall = positionBall.add(velocity);
        velocity = velocity.add(gravity);

    and

        velocity.x = (float) Math.cos(angle) * 12; /* 12 to amplify the velocity */
        velocity.y = (float) Math.sin(angle) * 15; /* 15 to amplify the velocity */

    That works fine. Here comes the problem: I want to make the jump look the same on all possible resolutions. The velocity needs to be scaled so that when the ball is thrown straight upwards it just touches the upper display border, and when thrown directly left or right the range is exactly long enough to touch the left/right display border. Which formula(s) do I need to use, and how do I implement them correctly? Thanks in advance!
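
    A sketch of the math, assuming the ball launches from the centre of the screen and g is the fixed per-frame gravity added above (the exact meaning of "range" for a sideways throw is an assumption here):

        H  = screenHeight / 2                 // vertical distance to the top border
        vy = sqrt(2 * g * H)                  // from H = vy^2 / (2 * g)

        D  = screenHeight / 2                 // drop before a sideways throw leaves the screen
        t  = sqrt(2 * D / g)                  // frames until it has fallen that far
        vx = (screenWidth / 2) / t            // just reaches the side border in that time

    Replacing the hard-coded 12 and 15 with vx and vy derived from the actual screen size should make the jump cover the same fraction of the display on every resolution.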


  • "Marching cubes" voxel terrain - triplanar texturing with depth?

    - by Dan the Man
    I am currently working on a voxel terrain that uses the marching cubes algorithm to polygonize the scalar field of voxels, and a triplanar shader for texturing. Say I have a grass texture assigned to the Y axis and a dirt texture for both the X and Z axes. Now, when my player digs downwards, the exposed surface still appears as grass. How would I make it appear as dirt? I have been thinking about this for a while, and the only approach I can come up with is to mark vertices that have been dug with a certain vertex color; when a vertex carries that color, the shader would apply the dirt texture to it. Is there a better method?


  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor will work in the editor, but when I draw it the color that's supposed to become transparent still draws. The closest things to a solution that I've found are - 1) Change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet). 2) Get the pixels that are supposed to be transparent at some point then Set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it to the color key for transparency according to the code, just something is going wrong somewhere. At least that's what it looks like the code is doing. TileSetContent.cs if (imageNode.Attributes["trans"] != null) { string color = imageNode.Attributes["trans"].Value; string r = color.Substring(0, 2); string g = color.Substring(2, 2); string b = color.Substring(4, 2); this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16)); } ... TiledHelpers.cs // build the asset as an external reference OpaqueDataDictionary data = new OpaqueDataDictionary(); data.Add("GenerateMipMaps", false); data.Add("ResizetoPowerOfTwo", false); data.Add("TextureFormat", TextureProcessorOutputFormat.Color); data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue); data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta); tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>( new ExternalReference<Texture2DContent>(path), null, data, null, asset); ... I can share more code as well if it helps to understand my problem. Thank you.


  • iOS - pass UIImage to shader as texture

    - by martin pilch
    I am trying to pass UIImage to GLSL shader. The fragment shader is: varying highp vec2 textureCoordinate; uniform sampler2D inputImageTexture; uniform sampler2D inputImageTexture2; void main() { highp vec4 color = texture2D(inputImageTexture, textureCoordinate); highp vec4 color2 = texture2D(inputImageTexture2, textureCoordinate); gl_FragColor = color * color2; } What I want to do is send images from camera and do multiply blend with texture. When I just send data from camera, everything is fine. So problem should be with sending another texture to shader. I am doing it this way: - (void)setTexture:(UIImage*)image forUniform:(NSString*)uniform { CGSize sizeOfImage = [image size]; CGFloat scaleOfImage = [image scale]; CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * sizeOfImage.width, scaleOfImage * sizeOfImage.height); //create context GLubyte * spriteData = (GLubyte *)malloc(pixelSizeOfImage.width * pixelSizeOfImage.height * 4 * sizeof(GLubyte)); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, pixelSizeOfImage.width, pixelSizeOfImage.height, 8, pixelSizeOfImage.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast); //draw image into context CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), image.CGImage); //get uniform of texture GLuint uniformIndex = glGetUniformLocation(__programPointer, [uniform UTF8String]); //generate texture GLuint textureIndex; glGenTextures(1, &textureIndex); glBindTexture(GL_TEXTURE_2D, textureIndex); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); //create texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelSizeOfImage.width, pixelSizeOfImage.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, textureIndex); //"send" to shader glUniform1i(uniformIndex, 1); free(spriteData); CGContextRelease(spriteContext); } Uniform for texture is fine, glGetUniformLocation function do not returns -1. The texture is PNG file of resolution 2000x2000 pixels. PROBLEM: When the texture is passed to shader, I have got "black screen". Maybe problem are parameters of the CGContext or parameters of the function glTexImage2D Thank you


  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using VBO and orthographic projection.   My results:     What I expect: 3x3 rectangle in the top left corner   #include <stdio.h> #include <GL\glew.h> #include <GLFW\glfw3.h> #include "lodepng.h"   static const int FALSE = 0; static const int TRUE = 1;   static const char* VERT_SHADER =     "#version 330\n"       "layout(location=0) in vec4 VertexPosition; "     "layout(location=1) in vec2 UV;"     "uniform mat4 uProjectionMatrix;"     /*"out vec2 TexCoords;"*/       "void main(void) {"     "    gl_Position = uProjectionMatrix*VertexPosition;"     /*"    TexCoords = UV;"*/     "}";   static const char* FRAG_SHADER =     "#version 330\n"       /*"uniform sampler2D uDiffuseTexture;"     "uniform vec4 uColor;"     "in vec2 TexCoords;"*/     "out vec4 FragColor;"       "void main(void) {"    /* "    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"     "    if(texel.a <= 0) {"     "         discard;"     "    }"     "    FragColor = texel;"*/     "    FragColor = vec4(1.f);"     "}";   static int g_running; static GLFWwindow *gl_window; static float gl_projectionMatrix[16];   /*     Structures */ typedef struct _Vertex {     float x, y, z, w;     float u, v; } Vertex;   typedef struct _Position {     float x, y; } Position;   typedef struct _Bitmap {     unsigned char *pixels;     unsigned int width, height; } Bitmap;   typedef struct _Texture {     GLuint id;     unsigned int width, height; } Texture;   typedef struct _VertexBuffer {     GLuint bufferObj, vertexArray; } VertexBuffer;   typedef struct _ShaderProgram {     GLuint vertexShader, fragmentShader, program; } ShaderProgram;   /*   http://en.wikipedia.org/wiki/Orthographic_projection */ void createOrthoProjection(float *projection, float width, float height, float far, float near)  {       const float left = 0;     const float right = width;     const float top = 0;     const float bottom = height;          projection[0] = 2.f / (right - left);     projection[1] = 0.f;     projection[2] = 0.f;     projection[3] = -(right+left) / (right-left);     projection[4] = 0.f;     projection[5] = 2.f / (top - bottom);     projection[6] = 0.f;     projection[7] = -(top + bottom) / (top - bottom);     projection[8] = 0.f;     projection[9] = 0.f;     projection[10] = -2.f / (far-near);     projection[11] = (far+near)/(far-near);     projection[12] = 0.f;     projection[13] = 0.f;     projection[14] = 0.f;     projection[15] = 1.f; }   /*     Textures */ void loadBitmap(const char *filename, Bitmap *bitmap, int *success) {     int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);       if (error != 0) {         printf("Failed to load bitmap. 
");         printf(lodepng_error_text(error));         success = FALSE;         return;     } }   void destroyBitmap(Bitmap *bitmap) {     free(bitmap->pixels); }   void createTexture(Texture *texture, const Bitmap *bitmap) {     texture->id = 0;     glGenTextures(1, &texture->id);     glBindTexture(GL_TEXTURE_2D, texture);       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,              GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);       glBindTexture(GL_TEXTURE_2D, 0); }   void destroyTexture(Texture *texture) {     glDeleteTextures(1, &texture->id);     texture->id = 0; }   /*     Vertex Buffer */ void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices) {     glGenBuffers(1, &vertexBuffer->bufferObj);     glGenVertexArrays(1, &vertexBuffer->vertexArray);     glBindVertexArray(vertexBuffer->vertexArray);       glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);     glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);       const unsigned int uvOffset = sizeof(float) * 4;       glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);     glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);       glEnableVertexAttribArray(0);     glEnableVertexAttribArray(1);       glBindBuffer(GL_ARRAY_BUFFER, 0);     glBindVertexArray(0); }   void destroyVertexBuffer(VertexBuffer *vertexBuffer) {     glDeleteBuffers(1, &vertexBuffer->bufferObj);     glDeleteVertexArrays(1, &vertexBuffer->vertexArray); }   void bindVertexBuffer(VertexBuffer *vertexBuffer) {     glBindVertexArray(vertexBuffer->vertexArray);     glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj); }   void drawVertexBufferMode(GLenum mode) {     glDrawArrays(mode, 0, 6); }   void drawVertexBuffer() {     drawVertexBufferMode(GL_TRIANGLES); }   void unbindVertexBuffer() {     glBindVertexArray(0);     glBindBuffer(GL_ARRAY_BUFFER, 0); }   /*     Shaders */ void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc) {     GLenum err;     shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);     shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);       if (shaderProgram->vertexShader == 0) {         printf("Failed to create vertex shader.");         return;     }       if (shaderProgram->fragmentShader == 0) {         printf("Failed to create fragment shader.");         return;     }       glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);     glCompileShader(shaderProgram->vertexShader);     glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile vertex shader.");         return;     }       glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);     glCompileShader(shaderProgram->fragmentShader);     glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile fragment shader.");         return;     }       shaderProgram->program = glCreateProgram();     glAttachShader(shaderProgram->program, shaderProgram->vertexShader);     glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);     
glLinkProgram(shaderProgram->program);          glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to link shader.");         return;     } }   void destroyShader(ShaderProgram *shaderProgram) {     glDetachShader(shaderProgram->program, shaderProgram->vertexShader);     glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);       glDeleteShader(shaderProgram->vertexShader);     glDeleteShader(shaderProgram->fragmentShader);       glDeleteProgram(shaderProgram->program); }   GLuint getUniformLocation(const char *name, ShaderProgram *program) {     GLuint result = 0;     result = glGetUniformLocation(program->program, name);       return result; }   void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program) {     GLuint loc = getUniformLocation(name, program);       if (loc == -1) {         printf("Failed to get uniform location in setUniformMatrix.\n");         return;     }       glUniformMatrix4fv(loc, 1, GL_FALSE, matrix); }   /*     General functions */ static int isRunning() {     return g_running && !glfwWindowShouldClose(gl_window); }   static void initializeGLFW(GLFWwindow **window, int width, int height, int *success) {     if (!glfwInit()) {         printf("Failed it inialize GLFW.");         *success = FALSE;        return;     }          glfwWindowHint(GLFW_RESIZABLE, 0);     *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);          if (!*window) {         printf("Failed to create window.");         glfwTerminate();         *success = FALSE;         return;     }          glfwMakeContextCurrent(*window);       GLenum glewErr = glewInit();     if (glewErr != GLEW_OK) {         printf("Failed to initialize GLEW.");         printf(glewGetErrorString(glewErr));         *success = FALSE;         return;     }       glClearColor(0.f, 0.f, 0.f, 1.f);     glViewport(0, 0, width, height);     *success = TRUE; }   int main(int argc, char **argv) {          int err = FALSE;     initializeGLFW(&gl_window, 480, 320, &err);     glDisable(GL_DEPTH_TEST);     if (err == FALSE) {         return 1;     }          createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);          g_running = TRUE;          ShaderProgram shader;     compileShader(&shader, VERT_SHADER, FRAG_SHADER);     glUseProgram(shader.program);     setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);       Vertex rectangle[6];     VertexBuffer vbo;     rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left     rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right     rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left     rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right       createVertexBuffer(&vbo, &rectangle);            bindVertexBuffer(&vbo);          while (isRunning()) {         glClear(GL_COLOR_BUFFER_BIT);         glfwPollEvents();                    drawVertexBuffer();                    glfwSwapBuffers(gl_window);     }          unbindVertexBuffer(&vbo);       glUseProgram(0);     destroyShader(&shader);     destroyVertexBuffer(&vbo);     glfwTerminate();     return 0; }


  • Box2D tween: what am I missing?

    - by philipp
    I have a Box2D project and I want to tween an kinematic body from position A, to position B. The tween function, got it from this blog: function easeInOut(t , b, c, d ){ if ( ( t /= d / 2 ) < 1){ return c/2 * t * t * t * t + b; } return -c/2 * ( (t -= 2 ) * t * t * t - 2 ) + b; } where t is the current value, b the start, c the end and d the total amount of frames (in my case). I am using the method introduced by this lesson of todd's b2d tutorials to move the body by setting its linear Velocity so here is relevant update code of the sprite: if( moveData.current == moveData.total ){ this._body.SetLinearVelocity( new b2Vec2() ); return; } var t = easeNone( moveData.current, 0, 1, moveData.total ); var step = moveData.length / moveData.total * t; var dir = moveData.direction.Copy(); //this is the line that I think might be corrected dir.Multiply( t * moveData.length * fps /moveData.total ) ; var bodyPosition = this._body.GetWorldCenter(); var idealPosition = bodyPosition.Copy(); idealPosition.Add( dir ); idealPosition.Subtract( bodyPosition.Copy() ); moveData.current++; this._body.SetLinearVelocity( idealPosition ); moveData is an Object that holds the global values of the tween, namely: current frame (int), total frames (int), the length of the total distance to travel (float) the direction vector (targetposition - bodyposition) (b2Vec2) and the start of the tween (bodyposition) (b2Vec2) Goal is to tween the body based on a fixed amount of frames: in moveData.total frames. The value of t is always between 0 and 1 and the only thing that is not working correctly is the resulting distance the body travels. I need to calculate the multiplier for the direction vector. What am I missing to make it work?? Greetings philipp
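
    One way to frame it (a sketch of the idea rather than a drop-in fix): a velocity-driven tween asks where the eased position should be one frame from now and sets the velocity to cover exactly that gap in one timestep, instead of scaling the direction vector by t directly. Here start and target stand for the stored start position and the destination:

        tNext    = easeInOut(moveData.current + 1, 0, 1, moveData.total)
        desired  = start + (target - start) * tNext           // eased position one frame ahead
        velocity = (desired - body.GetWorldCenter()) * fps    // cover exactly that gap in one step

    With that scheme the body lands on target after moveData.total frames, because each step only has to close the remaining eased gap.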


  • Making my own 2D soft-body physics engine

    - by Griffin
    I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea of how the physics works and could be simulated: by giving points, and the connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually build this, such as the equations and required physics knowledge. It would also be great if anyone out there would share their own attempts or ideas. Finally, I was wondering whether it would be possible to use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based, or to use the source code of a 2D rigid-body physics engine such as Box2D as a starting point?
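
    As a concrete starting point (a minimal sketch, not a full engine, written here in C# for brevity even though the plan is C++): most 2D soft bodies boil down to a mass-spring system, where every connection applies Hooke's law plus damping to the two points it joins, and the points are then integrated like ordinary particles:

        using System.Numerics; // Vector2

        // Hypothetical point-mass / spring types for a mass-spring soft body.
        class PointMass { public Vector2 Position, Velocity, Force; public float Mass = 1f; }

        class Spring
        {
            public PointMass A, B;
            public float RestLength, Stiffness = 50f, Damping = 0.5f;

            // Hooke's law with damping along the spring axis.
            public void Apply()
            {
                Vector2 delta = B.Position - A.Position;
                float dist = delta.Length();
                if (dist < 1e-6f) return;
                Vector2 dir = delta / dist;
                float stretch = dist - RestLength;
                float relVel = Vector2.Dot(B.Velocity - A.Velocity, dir);
                Vector2 force = dir * (stretch * Stiffness + relVel * Damping);
                A.Force += force;   // pulls A toward B when the spring is stretched
                B.Force -= force;
            }
        }

    Shape retention then comes from extra "structural" springs (diagonals, or springs toward rest positions), and stickiness/friction are handled at the collision-response stage.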


  • How do I render my own DirectX content to a full-screen WPF application's DirectX surface?

    - by marc40000
    Basically, Danny Varod seems to know, as he posted it as an answer to this question: Display a Message Box over a Full Screen DirectX application. I think this might work in theory, but I have no idea how to actually do it. Since I'm not allowed to post a comment under his answer, nor am I allowed to ask on meta how to contact another user, I'm asking this as a normal question here: how do I render my own DirectX content to the DirectX surface of a full-screen WPF application? For starters, I have no idea how to get the DirectX surface from a WPF window. And if I had it, what do I have to take care of so that the WPF rendering doesn't interfere with my own rendering, or vice versa?


  • Create edges in Blender

    - by Mikey
    I've worked with 3DS Max at uni and am trying to learn Blender. My problem is that I know a lot of simple techniques from 3DS Max that I'm having trouble translating into Blender. So my question is: say I have a polygon in the middle of a mesh and I want to split it in two by simply adding an edge between two of its edges, which would leave a 5-gon on either side. It's a simple technique I use every now and then when I want to modify geometry, and it's called "Edge Connect" in 3DS Max. In Blender the only edge-connect method I can find creates edge loops, which is not helpful when aiming at low-poly iPhone games. Is there an equivalent in Blender?


  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
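
    For the axis-aligned case there is a closed-form answer with no per-pixel work (a sketch with hypothetical types; rotated rectangles and the exact circle overlap region take more work, but the same clamping idea gives the circle test):

        // Hypothetical axis-aligned helpers. The overlap of two rectangles is itself
        // a rectangle; a circle-vs-rectangle test clamps the centre to the rectangle.
        struct Rect { public float X, Y, W, H; }

        static bool Intersect(Rect a, Rect b, out Rect overlap)
        {
            float x1 = Math.Max(a.X, b.X);
            float y1 = Math.Max(a.Y, b.Y);
            float x2 = Math.Min(a.X + a.W, b.X + b.W);
            float y2 = Math.Min(a.Y + a.H, b.Y + b.H);
            overlap = new Rect { X = x1, Y = y1, W = x2 - x1, H = y2 - y1 };
            return overlap.W > 0 && overlap.H > 0;   // the special-property region, if any
        }

        static bool CircleTouchesRect(float cx, float cy, float r, Rect rect)
        {
            float px = Math.Max(rect.X, Math.Min(cx, rect.X + rect.W)); // closest point on rect
            float py = Math.Max(rect.Y, Math.Min(cy, rect.Y + rect.H));
            float dx = cx - px, dy = cy - py;
            return dx * dx + dy * dy <= r * r;
        }

    Because the rectangle-rectangle overlap is itself a rectangle, the "selected area" can be stored and updated each frame as the shapes move, with no raycasting involved.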


  • Is there any difference between storing textures and baked lighting for environment meshes?

    - by Ben Hymers
    I assume that when texturing environments, one or several textures will be used, and the UVs of the environment geometry will likely overlap on these textures, so that e.g. a tiling brick texture can be used by many parts of the environment, rather than UV unwrapping the entire thing, and having several areas of the texture be identical. If my assumption is wrong, please let me know! Now, when thinking about baking lighting, clearly this can't be done the same way - lighting in general will be unique to every face so the environment must be UV unwrapped without overlap, and lighting must be baked onto unique areas of one or several textures, to give each surface its own texture space to store its lighting. My questions are: Have I got this wrong? If so, how? Isn't baking lighting going to use a lot of texture space? Will the geometry need two UV sets, one used for the colour/normal texture and one for the lighting texture? Anything else you'd like to add? :)


  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point - OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn and see if it collides. If it does I return the colliding face's normal, save the old player position and do some trigonometry to return a new player position for my wall sliding. edit I'll define my meaning of wall sliding: If a player walks in a vertical slope and has a slight horizontal rotation to the left or the right and keeps walking forward in the wall the player should slide a little to the right/left while continually walking towards the wall till he left the wall. Thus, sliding along the wall. Everything works fine and with multiple objects as well but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation sliding along the walls make my player bounce like a mad man (especially noticable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane and an old and new player position. First I negate the normal vector and get my new velocity vector by substracting the inverted normal from my direction vector (which is the vector to slide along the wall) and I add this vector to my new Player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame): Vector direction; Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ()); direction = newPos - oldPos; // Direction vector // Check for collision with new position for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { // Get inverse normal (direction STRAIGHT INTO wall) Vector invNormal = normal.Negative(); Vector wallDir = direction - invNormal; // We know INTO wall, and DIRECTION to wall. Substract these and you got slide WALL direction newPos = oldPos + wallDir; direction = newPos - oldPos; } } Any help would be greatly appreciated! FIX I eventually got things up and running how they should thanks to Krazy, I'll post the updated code listing in case someone else comes upon this problem! for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { Vector invNormal = normal.Negative(); invNormal = invNormal * (direction * normal).Length(); // Change normal to direction's length and normal's axis Vector wallDir = direction - invNormal; newPos = oldPos + wallDir; direction = newPos - oldPos; } }


  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-question from Stack Overflow, I thought it would be more appropriate here. There is a lot of code I could be posting. To avoid overloading the page with code, I will post any part of the code if requested. I am working from the ParticleGS DirectX10 sample, to build a geometry shader based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem which was similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening. http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, when I realised that I needed to explicitely set the geometry shader to null for the other effects. I solved this problem, as you can see in the video, as the rest of the scene is drawing properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too, the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece code you will request.


  • Cannot remove cube in UDK

    - by user32228
    For some reason, I can't move or remove an 'invisible' cube which is on my map. I searched on Google to find a solution but somehow I still can't remove it. The cube looks like this: http://screencloud.net/v/uNyz In Brush Wireframe: http://screencloud.net/v/3C0c In Wireframe: screencloud.net/v/oGBj As you can see, I want to delete the brown cube. Selecting it and pressing the DEL button won't do anything. So, how do you delete the brown cube? EDIT: Seriously, I wrote this post a few minutes ago and I found the solution. However, I still don't know how to delete the brown cube.


  • How to code a 4x shader/filter which emulates arcade CRT display behavior?

    - by Arthur Wulf White
    I want to write a shader/filter, probably in Adobe Pixel Bender, that does the best job possible of emulating the feel of an old-school monochrome arcade CRT screen, much like this: http://filthypants.blogspot.com/2012/07/customizing-cgwgs-crt-pixel-shader.html. Here are some attributes I know the filter will have: it will take in a low-res image (160 x 120) and return a medium-res image (640 x 480); it will add scanlines; it will blur the color channels to create that color-bleeding effect; and it will distort the shape of the image from a perfect rectangle into a rounder shape. The question is: could you please suggest any other attributes that help in emulating an arcade CRT feel, along with links and resources on coding these effects? Thanks


  • Trade-offs of linking versus skinning geometry

    - by Jeff
    What are the trade-offs inherent in linking geometry to a node versus using skinned geometry? Specifically: what capabilities do you gain or lose with each method? What are the performance impacts of doing one over the other? In what specific situations would you want to do one over the other? In addition, do the answers to these questions tend to be engine-specific? If so, how much?


  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems, and the idea of an entity just being an ID strikes me as quite interesting. However, I don't know how this fully works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system; a collision system uses some BoundsComponent together with a spatial data structure to determine whether collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing I've read often is that the 'type' of an entity is inferred from what components it has. If my entity is just an ID, how can I know that my robot entity needs to be moved or rendered, and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
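
    A minimal sketch of one common arrangement (one of several, names hypothetical): component data lives in per-type stores keyed by entity ID, and a system is just code that queries the stores it cares about. Several systems can then read and write the same component without any of them owning it, and an entity's "type" is nothing more than which stores contain its ID:

        using System.Collections.Generic;

        // Hypothetical minimal layout: an entity is just an int; components live in stores.
        struct Bounds { public float X, Y, W, H; }
        struct VelocityC { public float Dx, Dy; }

        class World
        {
            int nextId;
            public Dictionary<int, Bounds> BoundsStore = new Dictionary<int, Bounds>();
            public Dictionary<int, VelocityC> VelocityStore = new Dictionary<int, VelocityC>();

            public int CreateEntity() { return nextId++; }
        }

        class MovementSystem
        {
            // Operates on every entity that has both components; no entity "type" needed.
            public void Update(World w, float dt)
            {
                foreach (int id in new List<int>(w.VelocityStore.Keys))
                {
                    if (!w.BoundsStore.TryGetValue(id, out Bounds b)) continue;
                    VelocityC v = w.VelocityStore[id];
                    b.X += v.Dx * dt;
                    b.Y += v.Dy * dt;
                    w.BoundsStore[id] = b;   // write back; other systems see the same store
                }
            }
        }

    Adding a new component only requires a new store, and a new system only if some new behaviour needs it; existing systems keep working because they only look up the stores they already know about.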


  • Rotating object along bezier curve: not rotating enough?

    - by Paul
    I tried to follow the instructions from the threads on the forum (Cocos2d rotating sprite while moving with CCBezierBy) with Unity, in order to rotate my object as it moves along a bezier curve. But it does not rotate enough, the angle is too low, it goes up to 6 instead of 90 for example, as you can see on this image (the y eulerAngle is at 6, I would expect it to be around 90 with this curve) : Would you know why it does this? And how to make the rotation toward the next point? Here is the code (in c# with Unity) : (I am comparing x and z to get the angle, and adding the angle to eulerAngles.y so that it rotates around the y axis) void Update () { if ( Input.GetKey("d") ) start = true; if ( start ){ myTime = Time.time; start = false; } float theTime = (Time.time - myTime) *0.5f; if ( theTime < 1 ) { car.position = Spline.Interp( myArray, theTime );//creates the bezier curve counterBezier += Time.deltaTime; //compare 2 positions after 0.1f if ( counterBezier > 0.1f ){ counterBezier = 0; cbDone = false; newpos = car.position; float angle = Mathf.Atan2(newpos.z - oldpos.z, newpos.x - oldpos.x); angle += car.eulerAngles.y; car.eulerAngles = new Vector3(0,angle,0); } else if ( counterBezier > 0 && !cbDone ){ oldpos = car.position; cbDone = true; } Thanks
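
    Two things stand out in that Update block (observations, not a guaranteed fix): Mathf.Atan2 returns radians while eulerAngles expects degrees, and the result is added to the current yaw every 0.1 s instead of replacing it, so the heading only creeps by a few degrees. A hedged sketch of the usual way to face the direction of travel, assuming the model faces +Z at zero yaw and the motion stays in the XZ plane:

        // Hypothetical replacement for the rotation part of the 0.1 s check.
        Vector3 dir = newpos - oldpos;
        if (dir.sqrMagnitude > 0.0001f)
        {
            float yaw = Mathf.Atan2(dir.x, dir.z) * Mathf.Rad2Deg; // degrees, not radians
            car.eulerAngles = new Vector3(0f, yaw, 0f);            // set, don't accumulate
        }

    Quaternion.LookRotation(dir) achieves the same thing without the trigonometry.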


  • How to raycast select a scaled OBB?

    - by user3254944
    I have the OBB picking code to select an OBB with code inspired from Real time Rendering 3 and opengl-tutorial.org. I can successfully select objects that have been moved or rotated. However, I cant correctly select an object that has been scaled. The bounding box scales right, but the I can only select the object in a thin strip on its center. How do I fix the checkForHits() function to allow it to read the scaling that I passed to it in the raycast matrix? void GLWidget::selectObjRaycast() { glm::vec2 mouse = (glm::vec2(mousePos.x(), mousePos.y()) / glm::vec2(this->width(), this->height())) * 2.0f - 1.0f; mouse.y *= -1; glm::mat4 toWorld = glm::inverse(ProjectionM * ViewM); glm::vec4 from = toWorld * glm::vec4(mouse, -1.0f, 1.0f); glm::vec4 to = toWorld * glm::vec4(mouse, 1.0f, 1.0f); from /= from.w; to /= to.w; fromAABB = glm::vec3(from); toAABB = glm::normalize(glm::vec3(to - from)); checkForHits(); } void GLWidget::checkForHits() { for (int i = 0; i < myWin.myEtc->allObj.size(); ++i) //check for hits on each obj's bb { bool miss = 0; float tMin = 0.0f; float tMax = 100000.0f; glm::vec3 bbPos(myWin.myEtc->allObj[i]->raycastM[3].x, myWin.myEtc->allObj[i]->raycastM[3].y, myWin.myEtc->allObj[i]->raycastM[3].z); glm::vec3 delta = bbPos - fromAABB; for (int j = 0; j < 3; ++j) { glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j].x, myWin.myEtc->allObj[i]->raycastM[j].y, myWin.myEtc->allObj[i]->raycastM[j].z); float e = glm::dot(axis, delta); float f = glm::dot(toAABB, axis); if (fabs(f) > 0.001f) { float t1 = (e + myWin.myEtc->allObj[i]->bbMin[j]) / f; float t2 = (e + myWin.myEtc->allObj[i]->bbMax[j]) / f; if (t1 > t2) { float w = t1; t1 = t2; t2 = w; } if (t2 < tMax) tMax = t2; if (t1 > tMin) tMin = t1; if (tMax < tMin) miss = 1; } else { if (-e + myWin.myEtc->allObj[i]->bbMin[j] > 0.0f || -e + myWin.myEtc->allObj[i]->bbMax[j] < 0.0f) miss = 1; } } if (miss == 0) { intersection_distance = tMin; myWin.myEtc->sel.push_back(myWin.myEtc->allObj[i]); myWin.myEtc->allObj[i]->highlight = myWin.myGLHelp->highlight; break; } } } void Object::render(glm::mat4 PV) { scaleM = glm::scale(glm::mat4(), s->val_3); r_quat = glm::quat(glm::radians(r->val_3)); rotationM = glm::toMat4(r_quat); translationM = glm::translate(glm::mat4(), t->val_3); transLocal1M = glm::translate(glm::mat4(), -rsPivot->val_3); transLocal2M = glm::translate(glm::mat4(), rsPivot->val_3); raycastM = translationM * transLocal2M * rotationM * scaleM * transLocal1M; // MVP = PV * translationM * transLocal2M * rotationM * scaleM * transLocal1M; }
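
    A sketch of where the scale is likely being lost (a derivation from the slab test above, so treat it as a hypothesis): each column of raycastM carries the scale s of its axis, but bbMin and bbMax are unscaled model-space extents, so the slab planes end up too close to the centre. Either normalise the axis and scale the extents, or keep the raw axis and scale the extents by s squared:

        s  = length(axis)                    // scale baked into that column of raycastM
        e  = dot(axis / s, delta)            // project onto the unit axis
        f  = dot(toAABB, axis / s)
        t1 = (e + bbMin[j] * s) / f          // slab extents scaled into world units
        t2 = (e + bbMax[j] * s) / f

        // or, keeping e and f from the unnormalised axis exactly as the current code does:
        t1 = (e + bbMin[j] * s * s) / f
        t2 = (e + bbMax[j] * s * s) / f

    Either form reduces to the current code when s = 1, which would explain why moved and rotated objects pick correctly while scaled ones only respond in a thin strip near the centre.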


  • What to do with input during movement?

    - by user1895420
    In a concept I'm working on, the player can move from one position in a grid to the next. Once movement starts it can't be changed, and it takes a predetermined amount of time to finish (about a quarter of a second). Even though the movement can't be altered, the player can still press keys (perhaps in anticipation of their next move). What do I do with this input? Possibilities I've thought of: ignore all input during movement; log all input and work through it one move at a time once movement finishes; or log just the first or last input and act on it as soon as possible. I'm not really sure which is the most appropriate or most natural. Hence my question: what do I do with player input during movement?
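
    A common middle ground between those options (a sketch with hypothetical IsMoving/StartMove helpers): buffer only the most recent key pressed during a move and consume it the instant the move finishes, which feels responsive without queuing up a long chain of stale moves:

        // Hypothetical input buffering: remember only the last direction pressed while
        // the player is mid-move, then act on it as soon as the move completes.
        enum Direction { None, Up, Down, Left, Right }

        Direction buffered = Direction.None;

        void HandleKey(Direction pressed)
        {
            if (IsMoving())
                buffered = pressed;          // overwrite: only the latest input survives
            else
                StartMove(pressed);
        }

        void OnMoveFinished()
        {
            if (buffered != Direction.None)
            {
                StartMove(buffered);         // play the anticipated move immediately
                buffered = Direction.None;
            }
        }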

