Search Results

Search found 26043 results on 1042 pages for 'development trunk'.


  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix to implement a parent-child system for models. Matrix bonePos = link.Bone.Transform * World; Matrix m = Matrix.CreateTranslation(link.Offset) * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(link.gameObj.Rotation.Y), MathHelper.ToRadians(link.gameObj.Rotation.X), MathHelper.ToRadians(link.gameObj.Rotation.Z)) //need rotation vector from bone matrix here (now it's global model rotation vector) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(Rotation.Y), MathHelper.ToRadians(Rotation.X), MathHelper.ToRadians(Rotation.Z)) * Matrix.CreateTranslation(bonePos.Translation); link.gameObj.World = m; where link is a struct with the child model's settings, like position, rotation, etc., and link.Bone is the parent bone.

    Read the article
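
    A minimal, math-only sketch of one way to recover yaw/pitch/roll from a pure rotation matrix (written here in C++ for illustration; the function name, the m[row][col] layout, the column-vector convention and the assumed composition order R = Ry(yaw) * Rx(pitch) * Rz(roll) are all assumptions, and an XNA row-vector matrix may need to be transposed and have its scale removed first):

      #include <cmath>

      struct Vec3 { float x, y, z; };

      // Assumes m is a pure rotation (no scale), indexed m[row][col], column vectors,
      // composed as R = Ry(yaw) * Rx(pitch) * Rz(roll).
      Vec3 extractYawPitchRoll(const float m[3][3])
      {
          Vec3 angles;
          angles.x = std::asin(-m[1][2]);                  // pitch
          if (std::cos(angles.x) > 1e-4f) {
              angles.y = std::atan2(m[0][2], m[2][2]);     // yaw
              angles.z = std::atan2(m[1][0], m[1][1]);     // roll
          } else {                                         // pitch near +/-90 deg: yaw and roll couple
              angles.y = std::atan2(-m[2][0], m[0][0]);
              angles.z = 0.0f;
          }
          return angles;
      }

    In XNA itself, Matrix.Decompose is another route: it splits a world matrix into scale, a rotation quaternion and translation, and the quaternion can be fed straight into the child transforms without ever converting back to angles.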

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or having some of them set to invisible by my simulation for other reasons, means that some of them will be invisible at any given time. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.

    Read the article
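
    A common answer is yes: keep a dynamic (write-discard) instance buffer and refill it each frame with only the visible matrices; a few thousand 64-byte matrices per frame is a tiny upload compared to typical per-frame traffic. A CPU-side sketch of the compaction step (the InstanceData type and the visibility flags are placeholders, and no graphics API calls are shown):

      #include <cstddef>
      #include <vector>

      // Placeholder per-instance payload: a 4x4 world matrix.
      struct InstanceData { float world[16]; };

      // Copy only the visible instances into a contiguous staging vector; the result
      // is what gets uploaded into the dynamic instance buffer for this frame.
      std::vector<InstanceData> buildVisibleInstances(const std::vector<InstanceData>& all,
                                                      const std::vector<bool>& visible)
      {
          std::vector<InstanceData> out;
          out.reserve(all.size());
          for (std::size_t i = 0; i < all.size(); ++i)
              if (visible[i])
                  out.push_back(all[i]);
          return out;   // upload out.data(), out.size() * sizeof(InstanceData)
      }

    The draw call then uses out.size() as the instance count, so culled or hidden bones simply never reach the pipeline.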

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article
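
    Two things worth ruling out with a forward-compatible 3.3 core context (hedged guesses, not a confirmed diagnosis of the code above): GLEW's glewInit can leave a harmless GL_INVALID_ENUM pending on core profiles, and a shader that fails to compile or link under the core profile draws nothing without any other symptom. A small check that could be dropped in right after LoadShaders:

      #include <cstdio>
      #include <GL/glew.h>

      // Returns true if the program linked; prints the info log otherwise.
      static bool checkProgramLinked(GLuint programID)
      {
          // Clear any error glewInit() may have left behind on a core profile.
          while (glGetError() != GL_NO_ERROR) {}

          GLint linked = GL_FALSE;
          glGetProgramiv(programID, GL_LINK_STATUS, &linked);
          if (linked != GL_TRUE) {
              char log[1024];
              glGetProgramInfoLog(programID, sizeof(log), NULL, log);
              std::fprintf(stderr, "Program link failed: %s\n", log);
              return false;
          }
          return true;
      }

    Shaders written against the old pipeline also typically need to be ported to #version 330 core with explicit in/out variables before a core context will accept them.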

  • Procedural terrains in 3D: what has been done? Are there common algorithms and/or theories about it?

    - by jokoon
    Besides programming, modeling an environment takes a great deal of time. I don't know how much work goes into, for example, a WoW dungeon level, or other beautiful city-like, futuristic, jungle or fantasy environments, but this kind of work is done from scratch by artists. What techniques are involved in the Torchlight level randomizer, and do other titles share similarities with it? Is there a family name for such techniques?

    Read the article

  • Problem animating in Unity/Orthello 2D. Can't move gameObject

    - by Nelson Gregório
    I have a enemy npc that moves left and right in a corridor. It's animated with 2 sprites using Orthello 2D Framework. If I untick the animation's play on start and looping, the npc moves correctly. If I turn it on, the npc tries to move but is pulled back to his starting position again and again because of the animation loop. If I turn looping off during runtime, the npc moves correctly again. What did I do wrong? Here's the npc code if needed. using UnityEngine; using System.Collections; public class Enemies : MonoBehaviour { private Vector2 movement; public float moveSpeed = 200; public bool started = true; public bool blockedRight = false; public bool blockedLeft = false; public GameObject BorderL; public GameObject BorderR; void Update () { if (gameObject.transform.position.x < BorderL.transform.position.x) { started = false; blockedRight = false; blockedLeft = true; } if (gameObject.transform.position.x > BorderR.transform.position.x) { started = false; blockedLeft = false; blockedRight = true; } if(started) { movement = new Vector2(1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } if(!blockedRight && !started && blockedLeft) { movement = new Vector2(1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } if(!blockedLeft && !started && blockedRight) { movement = new Vector2(-1, 0f); movement *= Time.deltaTime*moveSpeed; gameObject.transform.Translate(movement.x,movement.y, 0f); } } }

    Read the article

  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on different internet resources. var startTime:Number = getTimer(); game.Update(deltaTime); deltaTime = Number(getTimer() - startTime) * 0.001; My issue with this is that it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer and later for time-based physics, so I would like it to work as expected. I must be missing something silly.

    Read the article
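
    The snippet above measures only how long game.Update itself takes (hence the one or two milliseconds), not the time since the previous frame, which is what a frame delta should be. The usual pattern is to keep the timestamp of the previous frame, sketched here in C++ with std::chrono (the Game type and the loop are placeholders); the same shape works in AS3 by storing the last getTimer() value and differencing it at the top of each frame:

      #include <chrono>

      struct Game { void Update(double dtSeconds) { /* ... */ } };   // placeholder

      int main()
      {
          using clock = std::chrono::steady_clock;
          Game game;
          auto previous = clock::now();

          for (int frame = 0; frame < 1000; ++frame) {          // stand-in for the real main loop
              auto now = clock::now();
              double deltaTime = std::chrono::duration<double>(now - previous).count();
              previous = now;
              game.Update(deltaTime);   // elapsed time since the last frame, in seconds
          }
      }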

  • Moving a 2D camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The following picture highlights the three main components: the circle (the main character), the green line (terrain) and the black background. The terrain is simply made from an array of points (approx. 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y plane to keep the focus on the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!

    Read the article
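
    One simple option is to ease the camera's y toward the character's y every frame instead of sampling the terrain: the camera still follows vertical movement, but high-frequency bumps get filtered out. A sketch (the names and the stiffness parameter are invented for illustration):

      #include <algorithm>

      // Move cameraY a fraction of the way toward the character each frame.
      // Higher stiffness = snappier camera; the clamp keeps a long frame from overshooting.
      float smoothCameraY(float cameraY, float characterY, float stiffness, float dt)
      {
          float t = std::min(1.0f, stiffness * dt);
          return cameraY + (characterY - cameraY) * t;
      }

    The terrain's vertical offset is then derived from the smoothed cameraY in the same way movex is derived from the circle's x.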

  • Platform game collisions with Block

    - by Sri Harsha Chilakapati
    I am trying to create a platform game and I'm getting the collision detection with the blocks wrong. Here's my code: // Variables GTimer jump = new GTimer(1000); boolean onground = true; // The update method public void update(long elapsedTime){ MapView.follow(this); // Add the gravity if (!onground && !jump.active){ setVelocityY(4); } // Jumping if (isPressed(VK_SPACE) && onground){ jump.start(); setVelocityY(-4); onground = false; } if (jump.action(elapsedTime)){ // jump expired jump.stop(); } // Horizontal movement setVelocityX(0); if (isPressed(VK_LEFT)){ setVelocityX(-4); } if (isPressed(VK_RIGHT)){ setVelocityX(4); } } // The collision method public void collision(GObject other){ if (other instanceof Block){ // Determine the horizontal distance between centers float h_dist = Math.abs((other.getX() + other.getWidth()/2) - (getX() + getWidth()/2)); // Now the vertical distance float v_dist = Math.abs((other.getY() + other.getHeight()/2) - (getY() + getHeight()/2)); // If h_dist > v_dist horizontal collision else vertical collision if (h_dist > v_dist){ // Are we moving right? if (getX()<other.getX()){ setX(other.getX()-getWidth()); } // Are we moving left? else if (getX()>other.getX()){ setX(other.getX()+other.getWidth()); } } else { // Are we moving up? if (jump.active){ jump.stop(); } // We are moving down else { setY(other.getY()-getHeight()); setVelocityY(0); onground = true; } } } } The problem is that the object jumps well but does not fall when it moves off the platform. Here's an image describing the problem. I know I'm not checking underneath the object, but I don't know how. The map is a list of objects; do I have to iterate over all the objects? Thanks

    Read the article
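
    A likely culprit is that onground is only ever set back to false when jumping, so walking off a ledge leaves it true and gravity never kicks in. One common fix is to recompute the grounded state every frame by probing one pixel below the player against the blocks (and yes, iterating the block list is normal; a spatial grid helps once there are many). A sketch with placeholder types, assuming y grows downward:

      #include <vector>

      struct AABB { float x, y, w, h; };   // placeholder axis-aligned box

      static bool overlaps(const AABB& a, const AABB& b)
      {
          return a.x < b.x + b.w && b.x < a.x + a.w &&
                 a.y < b.y + b.h && b.y < a.y + a.h;
      }

      // Probe a copy of the player's box nudged 1 pixel downward; if it touches any
      // block the player is supported, otherwise gravity should apply this frame.
      bool isOnGround(const AABB& player, const std::vector<AABB>& blocks)
      {
          AABB probe = player;
          probe.y += 1.0f;
          for (const AABB& b : blocks)
              if (overlaps(probe, b))
                  return true;
          return false;
      }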

  • Realtime rendering using a ray tracing engine

    - by Keyhan Asghari
    I want to render an object that has a mesh with one million hexagonal elements (100 * 100 * 100). Lights, shadows and textures are not important, and each element has a solid color. Finally, the only actions I need are rotating the object, zooming and panning. I am wondering which ray tracing engine is better for my conditions, or do I have to take another approach? Any help will be appreciated.

    Read the article

  • How to make the Pokémon White 3D effect?

    - by Pipo
    I just wondered how to create a 3D effect similar to Pokémon White/Black. It seems not to be polygon based, but created just with sprites. When the perspective changes, the sprites stay sharp and don't get blurred. How can I achieve this? Source: https://www.youtube.com/watch?v=fZEPUPYOnRc&feature=youtube_gdata_player Edit: Wow, two downvotes because I used a video instead of screenshots? Don't get me wrong, I thank you, because you want to help me, but the 3D effect can be better understood in motion. Anyway, here is a screenshot: http://wearearcade.com/wp-content/uploads/2011/03/pokemon-black-white-starter-town.jpg So, if this is a hardware limitation, how can I achieve this on different hardware, e.g. an HTML5 game? Thank you.

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.

    Read the article
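
    For reference, the projection itself is a one-liner in the fixed-function pipeline the question mentions; the main follow-on effects to watch are triangle winding (a y-flip mirrors it, so either disable culling for 2D or declare clockwise as front-facing) and texture V coordinates, which can be flipped per-quad in the tile's source rectangle rather than by flipping the image files. A sketch:

      #include <GL/gl.h>

      // Top-left origin, y-down orthographic projection (SDL-style coordinates).
      void setupScreenSpaceProjection(int screenWidth, int screenHeight)
      {
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          // left, right, bottom, top: passing height as 'bottom' and 0 as 'top' flips y.
          glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0);

          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();

          // The y-flip reverses winding; either do this or glDisable(GL_CULL_FACE).
          glFrontFace(GL_CW);
      }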

  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object child of a Group, but this object has a draw method that calls opengl to draw in the screen. Its class its this public class OpenGLSquare extends Actor { private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10(); private static Matrix4 matrix = null; private static Vector2 temp = new Vector2(); public static void setMatrix4(Matrix4 mat) { matrix = mat; } @Override public void draw(SpriteBatch batch, float arg1) { // TODO Auto-generated method stub renderer.begin(matrix, GL10.GL_TRIANGLES); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.end(); } } In my screen class I have this, i call it in the constructor MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab); OpenGLSquare square = new OpenGLSquare(); square.setX0(100); square.setY0(200); square.setX1(400); square.setY1(280); square.color.set(Color.BLUE); square.setSize(); //spriteLab.addActorAt(0, clock); spriteLab.addActor(square); stage.addActor(spriteLab); And the render in the screen I have @Override public void render(float arg0) { this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT |GL10.GL_DEPTH_BUFFER_BIT); stage.draw(); stage.act(Gdx.graphics.getDeltaTime()); } The problem its that when i use opengl with parent, it resets all the other chldren to position 0,0 and the opengl renderer paints the square in the exact position of the screen and not relative to the parent. I tried using batch.enableBlending() and batch.disableBlending() that fixes the position problem of the other children, but not the relative position of the opengl drawing and it also puts alpha to the glDrawing. What am i doing wrong?:/

    Read the article

  • Techniques for separating game model from presentation

    - by liortal
    I am creating a simple 2D game using XNA. The elements that make up the game world are what I refer to as the "model". For instance, in a board game, I would have a GameBoard class that stores information about the board. This information could be things such as: Location Size Details about cells on the board (occupied/not occupied) etc. This object should either know how to draw itself, or describe how to draw itself to some other entity (renderer) in order to be displayed. I believe that since the board only contains the data+logic for things regarding it or cells on it, it should not provide the logic of how to draw things (separation of concerns). How can I achieve a good partitioning and easily allow some other entity to draw it properly? My motivations for doing so are: Allow multiple "implementations" of presentation for a single game entity Easier porting to other environments where the presentation code is not available (for example, porting my code to Unity or other game technology that does not rely on XNA).

    Read the article
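
    One widely used shape for this is to keep GameBoard as pure data plus rules and hand it to a renderer interface, so the XNA drawing code becomes just one implementation among several. A sketch with invented names (written in C++ here; the same structure maps directly to C#):

      #include <vector>

      // Game-model side: data and rules only, no drawing.
      struct GameBoard {
          int width = 0, height = 0;
          std::vector<int> cells;                        // e.g. 0 = empty, 1 = occupied
          int cellAt(int x, int y) const { return cells[y * width + x]; }
      };

      // Presentation side: one interface, many implementations (XNA today, Unity later,
      // a text dump for tests). The board never sees any of them.
      struct IBoardRenderer {
          virtual ~IBoardRenderer() = default;
          virtual void draw(const GameBoard& board) = 0;
      };

      struct ConsoleBoardRenderer : IBoardRenderer {
          void draw(const GameBoard& board) override
          {
              // ...walk board.cellAt(x, y) and print; a real renderer would emit sprites.
          }
      };

    The game loop owns an IBoardRenderer and calls draw(board); porting to another engine then means writing a new implementation of the interface, not touching the board.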

  • Blending transition in cocos2d

    - by fiddler
    In my cocos2d-iphone game, I have 2 backgrounds (CCNodes), each containing a quite complex hierarchy of sprites. I would like to make a smooth transition between them: initially, only the first background is visible; at the end, only the second one is visible. Is there a good way to set the opacity of a full hierarchy of sprites? I tried to recursively set the opacity of all the contained sprites. It kinda works, except that: I guess it's not very efficient, and I would like the opacity of overlapping sprites to be 'merged' (as if the background was one single big sprite).

    Read the article

  • Depth buffer values reset on shader change?

    - by bobobobo
    I have 2 different shaders, and when I change the shader (glUseProgram), it seems that the depth information is lost, because everything drawn with the 2nd shader appears completely on top of anything drawn by the first shader. If I switch the order of shader use/drawing, it's the same: the last drawn object always appears on top of the first drawn object if there is a shader change between the two objects, even if the last drawn object is further away.

    Read the article
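
    glUseProgram on its own neither clears nor bypasses the depth buffer, so (as a hedged guess) the usual suspects are depth state being changed between the two passes, depth writes being off while the first batch renders, or the second program transforming vertices with a different matrix so its depths really are closer. A small state dump that can be called before each draw to compare:

      #include <cstdio>
      #include <GL/glew.h>

      // glUseProgram does not touch the depth buffer, so if one pass always wins it is
      // usually the depth test/write state or the vertex transform that differs.
      void dumpDepthState(const char* label)
      {
          GLboolean testEnabled = glIsEnabled(GL_DEPTH_TEST);
          GLboolean writeEnabled = GL_FALSE;
          GLint func = 0;
          glGetBooleanv(GL_DEPTH_WRITEMASK, &writeEnabled);
          glGetIntegerv(GL_DEPTH_FUNC, &func);
          std::printf("%s: depth test %d, depth write %d, depth func 0x%x\n",
                      label, testEnabled, writeEnabled, (unsigned)func);
      }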

  • Image 1 becomes image 2 with sliding effect from left to right?

    - by Paul
    I would like to show a second image appearing while a "door" is closing on my character. I've got my character in the middle of the screen and a door coming from the left. When the door passes my character, I would like to have this second image appearing little by little. So far, I've gotten by with fadingOut the character and then fadingIn my second image of the character at the same position when the door is completely closed, but I would like to have both of them at the same time. (the effect that image 1 becomes image 2 when the door is sliding from left to right). Would you know how to do this with Cocos2d? Here are the images : at first, the character is blue, and the door is coming from the left : Then, behind the black door, the character becomes red, but only behind this door, so it stays blue when the door is not on him, and will become completely red when the door passes the character : EDIT : with this code, the black door hides the red and blue rectangles : (And if i add each of my layers at a different depth, and only use GL_LESS, same thing) blue.position = ccp( size.width*0.5 , size.height/2 ); red.position = ccp( size.width*0.46 , size.height/2 ); black.position = ccp( size.width*0.1 , size.height/2 ); glEnable(GL_DEPTH_TEST); [batch addChild:red z:0]; [batch addChild:black z:2]; glDepthFunc(GL_GREATER); [batch addChild:blue z:1]; glDepthFunc(GL_LESS); id action1 = [CCMoveTo actionWithDuration:3 position:ccp(size.width,size.height/2)]; [black runAction: [CCSequence actions:action1, nil]];

    Read the article

  • Avoid double compression of resources

    - by user1095108
    I am using .pngs for my textures and a virtual file system in a .zip file for my game project. This means my textures are compressed and decompressed twice. What are the solutions to this double compression problem? One solution I've heard about is to use .tgas for textures, but it seems like ages since I heard that. Another solution is to implement decompression on the GPU and, since that is fast, not worry about the overhead.

    Read the article

  • Calculating up-vector to avoid gimbal lock using euler angles

    - by jessejuicer
    I wish to orbit a camera around a sphere, but the problem is that when the camera rotates to the north pole (pointing down) or the south pole (pointing up) of the sphere, the camera doesn't handle itself very well. It spins rapidly until arriving 180 degrees in the opposite direction. I believe this is known as gimbal lock. I understand you can avoid this problem using quaternions. But I also read in another forum that it's possible to avoid this easily using Euler angles as well, which I would prefer to do. It was said that all you need to do is "calculate a proper up-vector every frame, and that avoids the problem entirely." Well, I tried aligning the up-vector with the vertical axis of the camera whenever the camera changed orientation, but this didn't seem to work. Meaning that the up-vector followed exactly the orientation of the camera's y-axis (or its up vector), instead of using a constant up-vector aligned to the up-vector of the world (0, 1, 0). How exactly do I go about calculating a proper up-vector as my camera orientation changes, to avoid the gimbal lock problem mentioned above?

    Read the article
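
    One way to read "calculate a proper up-vector every frame" is: rebuild the camera basis from the view direction and the previous frame's up-vector, instead of always crossing against the fixed world up (which is exactly what degenerates at the poles). Since the orientation only changes a little each frame, the reference up is never parallel to the view direction and the 180-degree flip goes away. A sketch with minimal vector helpers (right-handed convention assumed):

      #include <cmath>

      struct Vec3 { float x, y, z; };

      static Vec3 cross(const Vec3& a, const Vec3& b)
      {
          return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
      }

      static Vec3 normalize(const Vec3& v)
      {
          float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
          return { v.x / len, v.y / len, v.z / len };
      }

      // Rebuild an orthonormal camera basis each frame from the view direction and a
      // reference up (last frame's up, falling back to world up on the first frame).
      void cameraBasis(const Vec3& forward, const Vec3& referenceUp, Vec3& right, Vec3& up)
      {
          right = normalize(cross(forward, referenceUp));
          up    = normalize(cross(right, forward));
      }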

  • With Slick, how to change the resolution during gameplay?

    - by TheLima
    I am developing a tile-based strategy game using Java and the Slick API. So far so good, but I've come to a standstill on my options menu. I have plans for the user to be able to change the resolution during gameplay (it is pretty common, after all). I can already change to fullscreen and back to windowed, this was pretty simple... //"fullScreenOption" is a checkbox-like button. if (fullScreenOption.isMouseOver(mouseX, mouseY)) { if (input.isMouseButtonDown(Input.MOUSE_LEFT_BUTTON)) { fullScreenOption.state = !fullScreenOption.state; container.setFullscreen(fullScreenOption.state); } } But the container class (Implemented by Slick, not me), contrary to my previous beliefs, does not seem to have any resolution-change functions! And that's pretty much the situation...I know it's possible, but i don't know how to do it, nor what is the class responsible! The AppGameContainer class, used on the very start of the game's initialization, is the only place with any functions for changing the display-mode that I've found so far, but it's only used at the very start, and i haven't found a way to travel back to it from my options menu. //This is my implementation of it... public static void main(String[] args) throws SlickException { AppGameContainer app = new AppGameContainer(new Main()); // app.setTargetFrameRate(60); app.setVSync(true); app.setDisplayMode(800, 600, false); app.start(); } I can define it as a static global on the Main, but it's probably a (very) bad way to do it...

    Read the article

  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX, preferably 9, but I can also do 11. I've looked at stuff like DXUT, but it's way too much for me: I only need some UI controls which I would rather not write (and debug) myself, and their need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libs that are designed to be integrated into an existing DirectX-based system, rather than forming the basis of a system. Any recommendations?

    Read the article

  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to make a normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1 in this image is a plane with two triangles and each of it is different illuminated (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors same for each vertice: normal vector = 0, 1, 0 (red lines on image) tangent vector = 0, 0,-1 (green lines on image) bitangent vector = -1, 0, 0 (blue lines on image) here I have one question: The two identical vertices does need to have the same tangent and bitangent? I have tried to make other values to the tangents but the effect was still similar. Here are my shaders Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Ouput data out vec3 color; // Values that stay constant for the whole mesh. 
uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value by EyeDirection_tangentspace vector and then I got other strange effect but I can not link the image (not eunogh reputation) Is it possible that with this shaders is something wrong, or maybe in other place in my code e.g with my matrices? SOLVED Solved... 3 days needed for changing one letter from this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(12*sizeof(float)) // array buffer offset ); to this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(11*sizeof(float)) // array buffer offset ); see difference? :)

    Read the article

  • What does a Game Designer do? What skills do they need?

    - by xenoterracide
    I know someone who is thinking about getting into game design, and I wondered: what does the job of game designer entail? What tools do you have to learn to use? What unique skills do you need? What exactly would you do from day to day? I may be wording this a bit wrong because I'm not sure if the college program is 'become a game designer' or 'learn game design', but I think the same questions apply either way.

    Read the article

  • How to use OpenGL functions from multiple threads?

    - by Robert
    I'm writing a small game using OpenGL. I'm implementing basic networking in this game and I'm facing a problem. I have a thread in my client socket class that checks for available data; when there is data I raise an event like this: immutable int len = this.m_socket.receive(data); if(len > 0) { this.m_onDataEvent(data); } Then in my game class, I have a function that handles and parses data like this: switch(msgId) { case ProtocolID.CharacterData: // Load terrain with opengl, character model.... I'm not able to call OpenGL functions because my OpenGL context is created from a different thread. But I really don't know how I can solve this problem; I tried Google but it's really hard to find a solution. I'm using the D programming language if it helps.

    Read the article
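
    The usual pattern is to never touch OpenGL from the socket thread at all: the network thread only parses bytes into messages and pushes them onto a thread-safe queue, and the thread that owns the GL context drains the queue once per frame and does the actual loading. A sketch in C++ (the same ingredients exist in D, e.g. std.concurrency); the NetMessage type is a placeholder:

      #include <mutex>
      #include <queue>
      #include <utility>
      #include <vector>

      // Parsed message handed across threads; the network thread never touches OpenGL.
      struct NetMessage { int msgId; std::vector<unsigned char> payload; };

      class MessageQueue {
      public:
          void push(NetMessage msg)                 // called from the socket thread
          {
              std::lock_guard<std::mutex> lock(mutex_);
              queue_.push(std::move(msg));
          }

          bool tryPop(NetMessage& out)              // called from the render thread
          {
              std::lock_guard<std::mutex> lock(mutex_);
              if (queue_.empty()) return false;
              out = std::move(queue_.front());
              queue_.pop();
              return true;
          }

      private:
          std::mutex mutex_;
          std::queue<NetMessage> queue_;
      };

      // In the render loop (the thread that owns the GL context):
      //   NetMessage msg;
      //   while (messages.tryPop(msg)) { /* case CharacterData: load terrain, models... */ }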

  • XNA model drawing problem

    - by user1990950
    When using this code: public static void DrawModel(Model model, Vector3 position, Vector3 offset, float xRotation, float yRotation, float zRotation, float allrot, float xScale, float yScale, float zScale) { position.Y *= -1; offset.Y *= -1; Matrix worldMatrix = ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) * Matrix.CreateRotationX(MathHelper.ToRadians(xRotation))) * Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))) * (Matrix.CreateTranslation(offset) * Matrix.CreateRotationY(MathHelper.ToRadians(allrot))) * Matrix.CreateScale(xScale, yScale, zScale); worldMatrix *= Matrix.CreateTranslation(position) * theCamera.GetTransformation() * Matrix.CreateTranslation(new Vector3(-(graphics.GraphicsDevice.Viewport.Width / 2), graphics.GraphicsDevice.Viewport.Height / 2, 0)); foreach (ModelMesh mesh in model.Meshes) { for (int i = 0; i < mesh.Effects.Count; i++) { ((BasicEffect)mesh.Effects[i]).EnableDefaultLighting(); ((BasicEffect)mesh.Effects[i]).World = worldMatrix; ((BasicEffect)mesh.Effects[i]).View = viewMatrix; ((BasicEffect)mesh.Effects[i]).Projection = projectionMatrix; } mesh.Draw(); } } The model rotates and then scales. It should scale and then rotate, but whenever I try to change it, it won't work.

    Read the article
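
    A small order-of-operations demo, independent of XNA: scaling and rotating do not commute, and in a row-vector API like XNA the leftmost factor is the one applied first, so a "scale, then rotate, then translate" world matrix is built with the scale matrix as the leftmost factor rather than multiplied on at the end as in the code above. The numbers below show the difference on a single point:

      #include <cmath>
      #include <cstdio>

      struct Vec2 { float x, y; };

      static Vec2 scale(Vec2 v, float sx, float sy) { return { v.x * sx, v.y * sy }; }
      static Vec2 rotate(Vec2 v, float radians)
      {
          float c = std::cos(radians), s = std::sin(radians);
          return { v.x * c - v.y * s, v.x * s + v.y * c };
      }

      int main()
      {
          const float quarterTurn = 3.14159265f / 2.0f;
          Vec2 p = { 1.0f, 0.0f };
          Vec2 scaleThenRotate = rotate(scale(p, 2.0f, 1.0f), quarterTurn);  // approx (0, 2)
          Vec2 rotateThenScale = scale(rotate(p, quarterTurn), 2.0f, 1.0f);  // approx (0, 1)
          std::printf("scale-then-rotate (%g, %g), rotate-then-scale (%g, %g)\n",
                      scaleThenRotate.x, scaleThenRotate.y,
                      rotateThenScale.x, rotateThenScale.y);
          return 0;
      }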

  • andEngine dynamic sprites

    - by Blucreation
    I've just started with andEngine this past week, and I only started learning Java/Android 3 weeks ago. I can use a for loop to add multiple sprites to the screen, but when I try to check collisions on them it only does it to one and not the rest. I want to be able to add a specific number of sprites made from the same texture to the scene, add collision detection to them and also make them slide across the screen (I'm making a game where you avoid the obstacles). My simple code: private void createobstacle(float pX, float pY) { obstacle = new AnimatedSprite(pX, pY, this.mObjTextureRegion.deepCopy(), getVertexBufferObjectManager()); obstacle.setScale(MathUtils.random(0.5f, 3f)); scene.attachChild(obstacle); } private void createobstacle(int num) { for(int i=0; i<=num; i++ ) { final float xPos = MathUtils.random(30.0f, (CAMERA_WIDTH - 30.0f)); final float yPos = MathUtils.random(30.0f, (CAMERA_HEIGHT - 30.0f)); createobstacle(xPos, yPos); } } I've read about arrays but I cannot find any tutorials about anything I'm stuck with. Any help would be great!

    Read the article
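
    The single obstacle field above is overwritten on every call, so only the last sprite created is still reachable when collisions are checked, which would explain why just one reacts. The usual fix is to keep every spawned sprite in a list (an ArrayList in Java) and loop over it each update. A language-neutral sketch in C++ with a placeholder Obstacle record:

      #include <vector>

      struct Obstacle { float x, y, width, height, speedX; };   // placeholder record

      std::vector<Obstacle> obstacles;                // every spawned obstacle lives here

      void spawnObstacles(int count)
      {
          for (int i = 0; i < count; ++i)
              obstacles.push_back(Obstacle{});        // fill in random position/scale here
      }

      // Each frame: move and collision-test every obstacle, not just the last one made.
      void updateObstacles(float dt, const Obstacle& player)
      {
          for (Obstacle& o : obstacles) {
              o.x += o.speedX * dt;
              bool hit = o.x < player.x + player.width  && player.x < o.x + o.width &&
                         o.y < player.y + player.height && player.y < o.y + o.height;
              if (hit) { /* react to this particular obstacle */ }
          }
      }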
