Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 549/962 | < Previous Page | 545 546 547 548 549 550 551 552 553 554 555 556  | Next Page >

  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using SimpleGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the create() method: Gdx.input.setInputProcessor(new SimpleDirectionGestureDetector(charController)); charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class, and it is responsible for moving the player character. However, the character doesn't change direction when I perform a fling action in any direction. I've checked, and the fling() method in the SimpleDirectionGestureDetector class doesn't get called, and I have no idea why, since everything seems fine. What am I doing wrong?
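
    As a sanity check, here is a minimal sketch that wires libgdx's stock GestureDetector directly (the GestureDetector/GestureAdapter API and the fling() signature are from libgdx itself; the listener body and log tags are illustrative). If fling() fires with this but not with the wiki class, the problem is inside SimpleDirectionGestureDetector rather than in the wiring:

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.input.GestureDetector;

    public class FlingCheck {
        public static void install() {
            Gdx.input.setInputProcessor(new GestureDetector(new GestureDetector.GestureAdapter() {
                @Override
                public boolean fling(float velocityX, float velocityY, int button) {
                    // Pick the dominant axis to turn the fling into a direction.
                    if (Math.abs(velocityX) > Math.abs(velocityY)) {
                        Gdx.app.log("fling", velocityX > 0 ? "right" : "left");
                    } else {
                        Gdx.app.log("fling", velocityY > 0 ? "down" : "up");
                    }
                    return true; // event handled
                }
            }));
        }
    }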

    Read the article

  • Bridge made out of blocks at an angle

    - by Pozzuh
    I'm having a bit of trouble with the math behind my project. I want the player to be able to select 2 points (vectors). With these 2 points a floor should be created. When the points are parallel to the x-axis it's easy: just calculate the number of blocks needed by a simple division, loop through that amount (in x and y) and keep increasing the coordinate by the size of the block. The trouble starts when the 2 vectors aren't parallel to an axis, for example at an angle of 45 degrees. How do I handle the math behind this? If I wasn't completely clear, I made this awesome drawing in Paint to demonstrate what I want to achieve. The 2 red dots would be the player-selected locations. (The blocks indeed aren't square.) http://i.imgur.com/pzhFMEs.png
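
    One way to handle that math, as a sketch (spawnBlock and all names here are illustrative, not from the post): treat the two selected points as a segment, normalize its direction, and step block centres along that direction instead of along the x-axis; atan2 of the direction gives the angle to rotate each block by:

    public class BridgeBuilder {
        // Assumes the two points are distinct (length > 0).
        public static void placeBlocks(float x1, float y1, float x2, float y2, float blockLength) {
            float dx = x2 - x1, dy = y2 - y1;
            float length = (float) Math.sqrt(dx * dx + dy * dy);
            float ux = dx / length, uy = dy / length;     // unit direction of the bridge
            float angle = (float) Math.atan2(dy, dx);     // rotation applied to every block
            int count = (int) Math.ceil(length / blockLength);
            for (int i = 0; i < count; i++) {
                // Step the block centre along the segment instead of along the x-axis.
                spawnBlock(x1 + ux * blockLength * i, y1 + uy * blockLength * i, angle);
            }
        }

        static void spawnBlock(float x, float y, float angle) { /* engine-specific */ }
    }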

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got it down to calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right, I get an angle around the Y axis... and moving my mouse up/down will get me an angle about X. I'm not sure where to go from here: what do I need to do with my axis so I can apply the angle to simulate camera rotation about its viewpoint? If I directly apply the axis/angle rotation to the camera/view transform, I get what you'd expect: the view rotates about the world axes that the mouse movement over the virtual sphere corresponds to. So if I move the mouse up/down, the view rotates about the world's X axis (what I get reminds me of a first-person view)... but this isn't what I want. I think the axis I get needs to be transformed so it passes through the camera viewpoint and is oriented correctly relative to the camera... but I don't know whether that's right or how to do it.
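
    For what it's worth, a sketch of the usual fix (the math is standard; the row-major layout and names are assumptions for illustration): the axis computed from the virtual sphere lives in view space, so bring it into world space with the inverse of the view rotation, which is just the transpose for a pure rotation, before applying the angle/axis rotation to the camera:

    public class Arcball {
        // view3x3 is the rotation part of the view matrix, stored row-major.
        public static float[] axisToWorld(float[] view3x3, float[] axisView) {
            float[] axisWorld = new float[3];
            for (int i = 0; i < 3; i++) {
                // world axis = R^T * view-space axis (R is orthonormal).
                axisWorld[i] = view3x3[0 * 3 + i] * axisView[0]
                             + view3x3[1 * 3 + i] * axisView[1]
                             + view3x3[2 * 3 + i] * axisView[2];
            }
            return axisWorld;
        }
    }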

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); } fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.

    Read the article

  • Generating tileable terrain using Perlin Noise [duplicate]

    - by terrorcell
    This question already has an answer here: "How do you generate tileable Perlin noise?" (9 answers). I'm having trouble figuring out the solution to this particular algorithm. I'm using the Perlin noise implementation from: https://code.google.com/p/mikeralib/source/browse/trunk/Mikera/src/main/java/mikera/math/PerlinNoise.java Here's what I have so far: for (Chunk chunk : chunks) { PerlinNoise noise = new PerlinNoise(); for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) { for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) { int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x); float val = 0; for (int i = 2; i <= 32; i *= i) { double n = noise.tileableNoise2(i * x / (float)CHUNK_SIZE_WIDTH, i * y / (float)CHUNK_SIZE_HEIGHT, CHUNK_SIZE_WIDTH, CHUNK_SIZE_HEIGHT); val += n / i; } // chunk tile at [index] gets set to the colour 'val' } } } Which produces something like this: Each chunk is made up of CHUNK_SIZE number of tiles, and each tile has a TILE_SIZE_WIDTH/HEIGHT. I think it has something to do with the innermost for loop and the x/y coordinates given to the noise function, but I could be wrong. Solved: PerlinNoise noise = new PerlinNoise(); for (Chunk chunk : chunks) { for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) { for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) { int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x); float val = 0; float xx = x * TILE_SIZE_WIDTH + chunk.x; float yy = y * TILE_SIZE_HEIGHT + chunk.h; int w = CHUNK_SIZE_WIDTH * TILE_SIZE_WIDTH; int h = CHUNK_SIZE_HEIGHT * TILE_SIZE_HEIGHT; for (int i = 2; i <= 32; i *= i) { double n = noise.tileableNoise2(i * xx / (float)w, i * yy / (float)h, w, h); val += n / i; } // chunk tile at [index] gets set to the colour 'val' } } }

    Read the article

  • How to pass one float as four unsigned chars to a shader with glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats for position and four unsigned bytes for color. I want to store all of them in one array, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests boiled down to one point: GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 }; glEnableVertexAttribArray(0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices); // VER1 - draws red triangle // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, // 0xff }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors); // VER2 - draws greenish triangle (not "pure" green) // float f = 255 << 24 | 255; //Hex:0xff0000ff // float colors2[] = { f, f, f }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors2); // VER3 - draws red triangle int i = 255 << 24 | 255; //Hex:0xff0000ff int colors3[] = { i, i, i }; glEnableVertexAttribArray(1); glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3); glDrawArrays(GL_TRIANGLES, 0, 3); The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by inspecting the variables in the debugger. They are equal for versions 2 and 3, so what causes the difference?
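
    A likely explanation, shown with a Java analogue (the same conversion rule applies to C++'s int-to-float initialization): float f = 255 << 24 | 255; converts the integer's value to float, with rounding, rather than reinterpreting its bits, so the four bytes the GPU reads from colors2 are no longer ff 00 00 ff. Only a bit-level reinterpretation (a memcpy or union in C++, Float.intBitsToFloat below) preserves them:

    public class PackDemo {
        public static void main(String[] args) {
            int packed = 255 << 24 | 255;   // 0xff0000ff, which is -16777217 as a signed int
            float f = packed;               // VALUE conversion: rounds to -16777216.0f
            System.out.printf("int bits:      %08x%n", packed);                        // ff0000ff
            System.out.printf("float f bits:  %08x%n", Float.floatToRawIntBits(f));    // not ff0000ff
            // Reinterpreting the bits instead preserves them exactly:
            float g = Float.intBitsToFloat(packed);
            System.out.printf("bit-cast bits: %08x%n", Float.floatToRawIntBits(g));    // ff0000ff
        }
    }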

    Read the article

  • Random position between ranges

    - by blakey87
    Does anyone have a good algorithm for generating a random y position for spawning a block, one that takes into account a minimum and maximum height and allows the player to jump onto the block? Blocks will continually be spawned, so the player must always be able to jump onto the next block, bearing in mind that the minimum position would be the ground and the maximum would be the player's jump height, limited by the ceiling.
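
    A simple approach, sketched below (all names are illustrative, and y is assumed to grow upward): offset the next block from the previous one by a random amount between a tolerable drop and a full jump, then clamp the result between the ground and the ceiling so the block always stays reachable:

    import java.util.Random;

    public class BlockSpawner {
        private final Random rng = new Random();

        // groundY is the minimum y, ceilingY the maximum; jumpHeight is how far
        // above the previous block the player can still land, maxDrop how far below.
        public float nextBlockY(float previousY, float jumpHeight, float maxDrop,
                                float groundY, float ceilingY) {
            // Anywhere from a drop of maxDrop to a full jump above the last block.
            float offset = -maxDrop + rng.nextFloat() * (maxDrop + jumpHeight);
            float y = previousY + offset;
            // Clamping only lowers the block, so it stays reachable.
            return Math.max(groundY, Math.min(ceilingY, y));
        }
    }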

    Read the article

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth: highp float depth = (2.0 * CameraPlanes.x) / (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x)); // Reconstruct the world position from the linear depth highp vec3 world = mix( nearWorldPos, farWorldPos, depth ); CameraPlanes.x is the near plane; CameraPlanes.y is the far plane. Assuming that my frustum positions are correct, and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D Update: Screenshot of the world position: http://imgur.com/sSlHd So you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I do the following: write this into the depth attachment in the previous pass: gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y; and then read the depth texture like so: depth = texture( depthTexture, textureCoord ).x However, this kills the hardware z-buffer optimizations.

    Read the article

  • lwjgl custom icon

    - by melchor629
    I have a little problem with the icon in lwjgl: it doesn't work. I've googled about it, but I haven't found anything that works for me yet. This is my code for now: PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png")); ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight()); imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA); imageData.flip(); System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0 ? "No se ha creado el icono" : "Se ha creado el icono"); The PNG file is 128x128 px with transparency. PNGDecoder is from the matthiasmann utilities (de.matthiasmann.twl.utils). I'm using Mac OS X 10.8.4 with lwjgl 2.9.0. Thanks :)

    Read the article

  • Bubble shooter search algorithm

    - by Fofole
    So I have an NxM matrix. At a given position (e.g. [2][5]) I have a value which represents a color. If there is nothing at that point, the value is -1. What I need to do, after I add a new point, is check all its neighbours with the same color value, and if there are more than 2, set them all to -1. If what I said doesn't make sense, what I'm trying to write is an algorithm to destroy all the same-color bubbles on my screen, where the bubbles are stored in a matrix in which -1 means no bubble and {0,1,2,...} means there is a bubble of a specific color. This is what I tried, which failed: public class Testing { static private int[][] gameMatrix= {{3, 3, 4, 1, 1, 2, 2, 2, 0, 0}, {1, 4, 1, 4, 2, 2, 1, 3, 0, 0}, {2, 2, 4, 4, 3, 1, 2, 4, 0, 0}, {0, 1, 2, 3, 4, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }; static int Rows=6; static int Cols=10; static int count; static boolean[][] visited=new boolean[15][15]; static int NOCOLOR = -1; static int color = 1; public static void dfs(int r, int c, int color, boolean set) { for(int dr = -1; dr <= 1; dr++) for(int dc = -1; dc <= 1; dc++) if(!(dr == 0 && dc == 0) && ok(r+dr, c+dc)) { int nr = r+dr; int nc = c+dc; // if it is the same color and we haven't visited this location before if(gameMatrix[nr][nc] == color && !visited[nr][nc]) { visited[nr][nc] = true; count++; dfs(nr, nc, color, set); if(set) { gameMatrix[nr][nc] = NOCOLOR; } } } } static boolean ok(int r, int c) { return r >= 0 && r < Rows && c >= 0 && c < Cols; } static void showMatrix(){ for(int i = 0; i < gameMatrix.length; i++) { System.out.print("["); for(int j = 0; j < gameMatrix[0].length; j++) { System.out.print(" " + gameMatrix[i][j]); } System.out.println(" ]"); } System.out.println(); } static void putValue(int value,int row,int col){ gameMatrix[row][col]=value; } public static void main(String[] args){ System.out.println("Initial Matrix:"); putValue(1, 4, 1); putValue(1, 5, 1); showMatrix(); for(int n = 0; n < 15; n++) for(int m = 0; m < 15; m++) visited[n][m] = false; //reset count count = 0; //dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, false); // get the contiguous count dfs(5,1,color,false); //if there are more than 2 set the color to NOCOLOR for(int n = 0; n < 15; n++) for(int m = 0; m < 15; m++) visited[n][m] = false; if(count > 2) { //dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, true); dfs(5,1,color,true); } System.out.println("Matrix after dfs:"); showMatrix(); } }

    Read the article

  • HLSL Shader not working right?

    - by dvds414
    Okay, so I have this shader for ambient occlusion. It loads into the world correctly, but it just shows all the models as white, and I do not know why. I am just running the shader while the model is rendering; is that correct, or do I need to make a render target or something? If so, how? I'm using C++. Here is my shader. float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }

    Read the article

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I’m learning 2D programming, so excuse my approximations, and please, don’t hesitate to correct me. I am just trying to fire a bullet from a player. I’m using HTML canvas (top left origin). Here is a representation of my problem: The black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red and dotted line). The blue cross represents the point from where I really want to fire the bullet (and the blue and dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.drawImage(this.image, Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2), this.dimensions.x, this.dimensions.y); ctx.restore(); This is how I instantiate a new bullet: var bulletPosition = playerPosition.clone(); // Copy of the player position var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized new Bullet(bulletPosition, bulletDirection); This is how I move the bullet (this is the bullet object): var speed = 5; this.position.add(Vector2D.multiply(this.direction, speed)); And this is how I draw the bullet (this is the bullet object): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.fillRect(0, 0, 3, 3); ctx.restore(); How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can’t see how to use it.
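
    A sketch of the usual approach (all names here are illustrative): keep the muzzle offset, the blue cross, in the player's local space, rotate it by the player's facing angle, and add it to the player's position to get the spawn point. The rotation math, written in Java for brevity:

    public class Muzzle {
        // px, py: player position; angle: player facing angle (radians);
        // offX, offY: muzzle offset in the player's local space.
        public static float[] spawnPoint(float px, float py, float angle,
                                         float offX, float offY) {
            float cos = (float) Math.cos(angle), sin = (float) Math.sin(angle);
            // Standard 2D rotation of the local offset into world space.
            float wx = px + offX * cos - offY * sin;
            float wy = py + offX * sin + offY * cos;
            return new float[] { wx, wy };
        }
    }

    With that, bulletPosition becomes the rotated spawn point and bulletDirection = normalize(targetPosition - spawnPoint), so the bullet travels along the blue dotted line instead of a line through the player's centre.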

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map in Android. For each tile I load the corresponding bitmap (just once) and then create the corresponding tiles. I have designed a class to do this: public class VertexQuad { private float[] mCoordArr; private float[] mColArr; private float[] mTexCoordArr; private int mTextureName; private static short mCounter = 0; private short mIndex; As you can see, each tile has its x,y location, a color array, texture coordinates and a texture name. Now, I want to render all my created tiles. To reduce the OpenGL API calls (I read somewhere that state changes are costly, and therefore I want to keep them to a minimum), I first want to hand ALL the coordinate arrays, color arrays and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts all tiles with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.glDrawElements() with the corresponding index buffer, to draw all tiles with the associated texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle costs a lot of time. I just ran a test where I wanted to put 400 coordinates into a FloatBuffer, which took me about 200 ms. My questions now are: Is there a better way of handling the coordinate and color structures? How is this correctly done? This is obviously not the optimal way ;) thanks in advance, regards Markus
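
    On the 200 ms point, a sketch of the usual fix (names and sizes are illustrative): allocate one direct, native-order FloatBuffer up front and refill it each frame with a single bulk put(), instead of allocating new buffers every rendering cycle:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public class ReusableVertexBuffer {
        private final FloatBuffer buffer;

        public ReusableVertexBuffer(int maxFloats) {
            // Direct + native order is required for OpenGL and allocated only once.
            buffer = ByteBuffer.allocateDirect(maxFloats * 4)   // 4 bytes per float
                               .order(ByteOrder.nativeOrder())
                               .asFloatBuffer();
        }

        public FloatBuffer fill(float[] data) {
            buffer.clear();
            buffer.put(data);   // one bulk copy, far cheaper than per-element puts
            buffer.flip();
            return buffer;      // hand this to glVertexPointer / glVertexAttribPointer
        }
    }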

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command: glVertexAttribPointer(index, size, type, normalize, stride, pointer); i.e., when would normalize == GL_TRUE? In what situations, and why would you choose to let the GPU do the calculations instead of preprocessing the data? All examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
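
    The canonical case is integer attributes that should arrive in the shader as floats in [0,1] (or [-1,1] for signed types): vertex colors stored as 4 unsigned bytes, or normals stored as signed bytes/shorts, at a quarter of the bandwidth of four floats. A hedged sketch using the Android GLES20 binding (colorLocation and colorBuffer are assumed to be set up elsewhere):

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;

    public class ColorAttrib {
        public static void bind(int colorLocation, ByteBuffer colorBuffer) {
            GLES20.glEnableVertexAttribArray(colorLocation);
            // 4 unsigned bytes per vertex; normalized == true maps 0..255
            // to 0.0..1.0 on the GPU, so the shader still sees a vec4 of floats.
            GLES20.glVertexAttribPointer(colorLocation, 4, GLES20.GL_UNSIGNED_BYTE,
                                         true, 4, colorBuffer);
        }
    }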

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory. (24bpp/32bpp, compressed/raw, etc.) I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing memory and requiring smaller contiguous memory segments. Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
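
    For the high-performance case, a minimal sketch of the first method (one contiguous packed 32-bpp buffer with plain index arithmetic and no row padding; names are illustrative, shown in Java rather than C++ for brevity):

    public class Bitmap32 {
        public final int width, height;
        public final int[] pixels;   // one int per pixel, 0xAARRGGBB, no padding

        public Bitmap32(int width, int height) {
            this.width = width;
            this.height = height;
            this.pixels = new int[width * height];   // one contiguous allocation
        }

        // Row-major addressing: the whole image is a single flat array.
        public int get(int x, int y)            { return pixels[y * width + x]; }
        public void set(int x, int y, int argb) { pixels[y * width + x] = argb; }
    }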

    Read the article

  • Numbers not adding up? (What am I not understanding here?) [closed]

    - by Milo
    I have the following output: Short version: the last numbers on the S= lines increase by H and SHOULD theoretically be linearly decreasing, e.g. -285, -290, -295... but the fourth one jumps to -252. Yet every other number changes linearly. Why is that, and how could I fix it? To explain the numbers: they come from the slider's value-changed handler. I have a slider whose value is used to generate the float on the next line. Everything should be growing linearly here. This value is used to determine the size of a flow layout, and it is also used in conjunction with a scrollbar. But basically I have a background for the flow layout, and that number is the start location for rendering it. The numbers should change linearly to create a smooth transition, but when that one jumps, it looks weird on screen, and I don't understand why the numbers jump every X slider value changes. Mathematically, what could be causing this? Here is the code for rendering the background and the function that is called when the value changes: void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect ) { float scale = 0.35f; int w = m_bgSprite->getWidth() * getTableScale() * scale; int h = m_bgSprite->getHeight() * getTableScale() * scale; int numX = ceil(absRect.getWidth() / (float)w) + 2; int numY = ceil(absRect.getHeight() / (float)h) + 2; int startY = childRect.getY(); int numAttempts = 0; while(startY + h < absRect.getY() && numAttempts < 1000) { startY += h; if(moo) { std::cout << startY << ","; } numAttempts++; } g->holdDrawing(); for(int i = 0; i < numX; ++i) { for(int j = 0; j < numY; ++j) { g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(), absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0); } } g->unholdDrawing(); g->setClippingRect(cx,cy,cw,ch); } void LobbyTableManager::setTableScale( float scale ) { scale += 0.3f; scale *= 2.0f; float scrollRel = m_vScroll->getRelativeValue(); setScale(scale); rescaleTables(); resizeFlow(); updateScrollBars(); float newVal = scrollRel * m_vScroll->getMaxValue(); m_vScroll->setValue(newVal); } void LobbyTableManager::valueChanged( agui::VScrollBar* source,int val ) { m_flow->setLocation(0,-val); } Any insight on why, mathematically, the anomaly might happen every Nth time would be helpful. I just don't understand why, if every number changes linearly, it jumps from -295 to -252! Thanks

    Read the article

  • Coordinate and positioning problem on iOS with cocos2d-x

    - by Vexille
    I'm using cocos2d-x alongside Marmalade and running some tests and tutorials before starting an actual project with them. So far things are working reasonably well on the Windows simulator, Android and even on BlackBerry's PlayBook, but on iOS devices (iPhone and iPad) the positioning seems to be off. To make things clearer, I put together a scene that just draws an image in the middle of the screen. It worked as expected on everything else, but this is the result I got on an iPhone: To get the coordinates for the center of the screen I'm using the VisibleRect class from the TestCpp sample. It just uses sharedOpenGLView to get the visible size and visible origin, and calculates the center from that. CCSprite* test = CCSprite::create("Ball.png", CCRectMake(0, 0, 80, 80) ); test->setPosition( ccp(VisibleRect::center().x, VisibleRect::center().y) ); this->addChild(test); Also, I have a noBorder policy set in AppDelegate: CCEGLView::sharedOpenGLView()->setDesignResolutionSize(designSize.width, designSize.height, kResolutionNoBorder); One funny thing is that I tried to deploy the TestCpp sample project to some iOS devices and it worked reasonably well on the iPhone, but on the iPad the application was only drawn on a small portion of the screen - just like what happened on the iPhone when I tried using the ShowAll policy.

    Read the article

  • Comparison between a value with static type Array and a possibly unrelated type Class

    - by Kaoru
    I got this error: Comparison between a value with static type Array and a possibly unrelated type Class. Before, everything was in one class (all of the functions); after I moved everything into many classes (so the functions are no longer in one class), this error appeared. How do I solve it? I am using AS3 and the as3isolib library. Here is the code after I modified the function: if (Constant.dude.y < Constant._numY) { if (Constant.dude.sprites != marioBackClass) { Constant.dude.sprites = [marioBackClass]; Constant.dudeDir = "Up"; } } Here is the code before I split the function into many classes: if (dude.y < _numY) { if (dude.sprites.toString() != marioBackClass.toString()) { dude.sprites = [marioBackClass]; dudeDir = "Up"; } }

    Read the article

  • HUD layer not being added on my scene

    - by Shailesh_ios
    I have a CCScene which already holds my gameLayer, and I am trying to add a HUD layer on top of it. But the HUD layer is not getting added to my scene. I can tell because I have set up a CCLabel on the HUD layer, and when I run my project I cannot see that label. Here's what I am doing, in my gameLayer: +(id) scene { CCScene *scene = [CCScene node]; GameScreen *layer = [GameScreen node]; [scene addChild: layer]; HUDclass * otherLayer = [HUDclass node]; [scene addChild:otherLayer]; layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of gameLayer return scene; } And then in my HUD layer I have just added a CCLabelTTF in its init method like this: -(id)init { if ((self = [super init])) { CCLabelTTF * label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15]; label.position = ccp(240,160); [self addChild:label]; } return self; } But when I run my project I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the output x, y, z coordinates fluctuate between -0 and 1 but not more than that, so I assume I will need to multiply them back by the screen width and height. However, it still doesn't seem to animate according to the original x, y, z points. Are there any transformations I might be missing? while ((line = r.ReadLine()) != null) { string[] temp = line.Split(','); int x = (int)(float.Parse(temp[0]) * maxWidth); int y = (int)(float.Parse(temp[1]) * maxHeight); }

    Read the article

  • AABB - AABB Collision, which face do I hit?

    - by PeeS
    To allow my objects to slide when they collide, I need to: know which face of the AABB they collide with; calculate the normal to that face; and return the normal and calculate the impulse to apply to the player's velocity. Question: How can I calculate which face of the AABB I collided with, knowing that I have two AABBs colliding? One is the player and the other is a world object. Here's what that looks like (problem collision circled in white): Thank you for your help.
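
    A sketch of the standard minimum-overlap test (2D for brevity, add a z term for 3D; field names are illustrative): compute the overlap along each axis; the axis with the smallest overlap is the face that was hit, and the sign of the centre difference gives the outward normal:

    public class AabbContact {
        // Each box is given by its centre (cx, cy) and half-extents (hx, hy).
        // Returns the outward face normal on box A, or null if not colliding.
        public static float[] faceNormal(float acx, float acy, float ahx, float ahy,
                                         float bcx, float bcy, float bhx, float bhy) {
            float dx = bcx - acx;
            float dy = bcy - acy;
            float overlapX = (ahx + bhx) - Math.abs(dx);
            float overlapY = (ahy + bhy) - Math.abs(dy);
            if (overlapX <= 0 || overlapY <= 0) return null;   // no collision
            if (overlapX < overlapY) {
                return new float[] { dx < 0 ? -1 : 1, 0 };     // hit a left/right face
            } else {
                return new float[] { 0, dy < 0 ? -1 : 1 };     // hit a top/bottom face
            }
        }
    }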

    Read the article

  • Heightmap and Textures

    - by Robert
    I'm trying to find the "best way" to apply a texture to a heightmap with OpenGL 3.x. It's really hard to find something on Google because the tutorials are old and they all use different methods; I'm really lost and I don't know what to use at all. Here is my code that generates the heightmap (it's basic): float[] vertexes = null; float[] textureCoords = null; for(int x = 0; x < this.m_size.width; x++) { for(int y = 0; y < this.m_size.height; y++) { vertexes ~= [x, 1.0f, y]; textureCoords ~= [cast(float)x / 50, cast(float)y / 50]; } } As you can see, I don't know how to apply the texture at all (I was using / 50 for my tests). Result of that code: I would like to have something very basic like this (you can find more pics in his blog): Edit: my texture size is 1024x1024.
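
    For reference, a sketch of the two common conventions (written in Java for brevity; names are illustrative, and the / 50 in the code above is the tiling variant with a hard-coded period): either stretch one copy of the texture across the whole grid, or tile it every N cells with GL_REPEAT wrapping:

    public class HeightmapUv {
        // One copy of the texture stretched over the whole terrain:
        static float[] stretched(int x, int y, int width, int height) {
            return new float[] { x / (float) (width - 1), y / (float) (height - 1) };
        }

        // Texture repeated every 'period' grid cells (requires GL_REPEAT wrapping):
        static float[] tiled(int x, int y, float period) {
            return new float[] { x / period, y / period };
        }
    }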

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code: struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.light_pos = mul(float4(input.position, 1.0f), world); output.light_pos = mul(output.light_pos, light_view); output.light_pos = mul(output.light_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident because the same two vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time. Misc info: The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)

    Read the article

  • Best practices in managing character states

    - by TheBroodian
    While developing a character, I feel like I'm digging myself into a deeper hole every time I add more functionality: I create more bugs, and my code seems to be tripping over itself all over the place. What are the best practices for managing the states of a character that has a large selection of abilities and actions to perform, without those abilities interrupting each other and creating a mess overall?
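
    One commonly recommended practice, sketched below (states and interruption rules are illustrative): route every state change through a single finite-state-machine gate, so "what can interrupt what" lives in one place instead of in scattered boolean flags:

    public class Character {
        enum State { IDLE, RUNNING, JUMPING, ATTACKING, STUNNED }

        private State state = State.IDLE;

        // Every transition goes through one gate; the interruption rules
        // are centralized here rather than spread across ability code.
        boolean canEnter(State next) {
            switch (state) {
                case STUNNED:   return false;                  // nothing interrupts a stun
                case ATTACKING: return next == State.STUNNED;  // only damage cancels an attack
                default:        return true;
            }
        }

        void request(State next) {
            if (canEnter(next)) {
                state = next;
            }
        }

        void update(float dt) {
            switch (state) { /* per-state logic lives here, not in shared flags */ }
        }
    }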

    Read the article

  • Different bounding volumes for culling and collision detection

    - by Serthy
    Should an object in a 3D engine use different bounding volumes for collision detection (broad phase) and culling? Basically, a class renderBounds and a class physBounds versus a single class boundingVolume? Each of these classes could then either contain the same type of volume (AABBs, kDOPs, spheres, etc.) or a special, well-fitting one for the particular object. (Note: this is without considering the use of an external physics engine.)

    Read the article
