Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 534 of 1169

  • OpenGL: glGetError() returns invalid enum after call to glewInit()

    - by malymato
    I use GLEW and freeglut. For some reason, after a call to glewInit(), glGetError() returns error code 1280. Reinstalling the drivers didn't help. I tried disabling glewExperimental, but it had no effect. The code worked before, and I am not aware of any changes I could have made. Here's my code: int main(int argc, char* argv[]) { GLenum GlewInitResult, res; InitWindow(argc, argv); res = glGetError(); // res = 0 glewExperimental = GL_TRUE; GlewInitResult = glewInit(); res = glGetError(); // res = 1280 glutMainLoop(); exit(EXIT_SUCCESS); } void InitWindow(int argc, char* argv[]) { glutInit(&argc, argv); glutInitContextVersion(4, 0); glutInitContextFlags(GLUT_FORWARD_COMPATIBLE); glutInitContextProfile(GLUT_CORE_PROFILE); glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS); glutInitWindowPosition(0, 0); glutInitWindowSize(CurrentWidth, CurrentHeight); glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA); WindowHandle = glutCreateWindow(WINDOW_TITLE); GLenum errorCheckValue = glGetError(); if (WindowHandle < 1) { fprintf(stderr, "ERROR: Could not create new rendering window.\n"); exit(EXIT_FAILURE); } glutReshapeFunc(ResizeFunction); glutDisplayFunc(RenderFunction); glutIdleFunc(IdleFunction); glutTimerFunc(0, TimerFunction, 0); glutCloseFunc(Cleanup); glutKeyboardFunc(KeyboardFunction); } Could someone tell me what I am doing wrong? Thanks.
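
    A common explanation for this exact symptom (not necessarily the poster's case): with a 3.x/4.x core profile, glewInit() internally calls glGetString(GL_EXTENSIONS), which core contexts no longer accept, so GLEW itself leaves a GL_INVALID_ENUM (1280) behind even though initialization succeeded. A minimal sketch of the usual workaround is to check glewInit()'s return value and then flush the spurious error:

        // Hedged sketch, assuming the benign-GLEW-error explanation above.
        GLenum glewStatus = glewInit();
        if (glewStatus != GLEW_OK) {
            fprintf(stderr, "ERROR: %s\n", glewGetErrorString(glewStatus));
            exit(EXIT_FAILURE);
        }
        // glewInit() can leave a spurious GL_INVALID_ENUM in a core profile; clear it
        // so later error checks start from a clean state.
        while (glGetError() != GL_NO_ERROR) { }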

    Read the article

  • Pong Collision Help in C# w/ XNA

    - by Ramses Brown
    Edit: My goal is to have it function like this: Ball hits 1st Quarter = rebounds higher (aka Y++) Ball hits 2nd Quarter = rebounds higher (using random value) Ball hits 3rd Quarter = rebounds lower (using random value) Ball hits 4th Quarter = rebounds lower (aka Y--) I'm currently using Rectangle Collision for my collision detection, and it's worked. Now I wish to expand it. Instead of it simply detecting whether or not the paddle/ball intersect, I want to make it so that it can determine what section of the paddle gets hit. I wanted it in 4 parts, with each having a different reaction to impact. My first thought is to base it on the Ball's Y position compared to the Paddle's Y position. But since I want it in 4 parts, I don't know how to do that. So it would essentially be if (ball.Y > Paddle.Y) { PaddleSection1 == true; } except modified so that instead of being top half/bottom half, it's 1st Quarter, etc.
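
    A minimal sketch of one way to do the quartering (names such as ballRect, paddleRect, ballVelocity, bounceBoost and random are assumptions, not the poster's code): normalize the ball's centre Y against the paddle's height and multiply by four.

        // Which quarter of the paddle did the ball hit? 0 = top quarter, 3 = bottom quarter.
        float relativeY = (ballRect.Center.Y - paddleRect.Top) / (float)paddleRect.Height;
        relativeY = MathHelper.Clamp(relativeY, 0f, 1f);
        int quarter = Math.Min((int)(relativeY * 4), 3);

        switch (quarter)
        {
            case 0: ballVelocity.Y -= bounceBoost; break;                               // 1st quarter: rebound higher
            case 1: ballVelocity.Y -= (float)random.NextDouble() * bounceBoost; break;  // 2nd quarter: higher, random
            case 2: ballVelocity.Y += (float)random.NextDouble() * bounceBoost; break;  // 3rd quarter: lower, random
            case 3: ballVelocity.Y += bounceBoost; break;                               // 4th quarter: rebound lower
        }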

    Read the article

  • 3D Camera Problem

    - by Chris
    I allow the user to look around the scene by holding down the left mouse button and moving the mouse. The problem that I have is this: I can be facing one direction, I move the mouse up and the view tilts up, I move it down and the view tilts down. If I spin around 180 degrees, my left and right still work fine, but when I move the mouse up the view tilts down, and when I move the mouse down the view tilts up. This is the code I am using; can anyone see what the problem with the logic is? var viewDir = g_math.subVector(target, g_eye); var rotatedViewDir = []; rotatedViewDir[0] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[0]) - (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[2]); rotatedViewDir[1] = viewDir[1]; rotatedViewDir[2] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[2]) + (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[0]); viewDir = rotatedViewDir; rotatedViewDir[0] = viewDir[0]; rotatedViewDir[1] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]) - (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]); rotatedViewDir[2] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]) + (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]); g_lookingDir = rotatedViewDir; var newtarget = g_math.addVector(rotatedViewDir, g_eye);
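
    The symptom (pitch inverting after a 180-degree turn) is what you get when the up/down rotation is applied about a fixed world axis instead of the camera's current right axis. A sketch of the usual fix, assuming accumulated g_yaw/g_pitch angles (new names, not in the original code): rebuild the view direction from the angles each frame rather than rotating the previous direction.

        // Accumulate angles, then rebuild the direction -- pitch can never flip this way.
        g_yaw   += g_mouseXDelta * g_rotationDelta;
        g_pitch += g_mouseYDelta * g_rotationDelta;

        // Clamp pitch so the camera cannot tip over the poles.
        var limit = Math.PI / 2 - 0.01;
        g_pitch = Math.max(-limit, Math.min(limit, g_pitch));

        // Spherical coordinates; swap components if the engine uses a different up axis.
        var viewDir = [
            Math.cos(g_pitch) * Math.sin(g_yaw),
            Math.sin(g_pitch),
            Math.cos(g_pitch) * Math.cos(g_yaw)
        ];
        var newtarget = g_math.addVector(viewDir, g_eye);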

    Read the article

  • Open Source Errors on Apple Crunch

    - by BluFire
    I've been looking around and I finally got the full source code called Apple-Crunch from Google Code. But when I put it into my project, the source code produced many errors in the class files, such as "cannot be resolved into a type", "the constructor is undefined" and "the method method() is undefined for the type Sprite" in the .java class files. I downloaded the source directly from the command line and noticed errors popping up in my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up under existing projects), I decided to copy and overwrite the folders into the project. The errors were still there, so I looked at the class files and noticed that the classes with errors extended 'RokonActivity'. I then added the rokon library to the libs folder in hopes of fixing the errors. Sadly it didn't work, and now I don't know what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional, so why are there errors?

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D Lighting works, saw this post and was tempted to try implementing this in HLSL. I planned to paint each of the layers with shaders, and then, combine them just drawing one on top of another, or just pass the 3 textures to the shader and getting a better way to combine them. Working almost as planned, but I got a little question in the matter. I'm drawing each layer this way: GraphicsDevice.SetRenderTarget(lighting); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, lightingShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); GraphicsDevice.SetRenderTarget(darkMask); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, darkMaskShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); Where lightingShader and darkMaskShader are shaders that, with parameters (view and proj matrices, light pos, color and range, etc) generate a texture meant to be that layer. It works fine, but I'm not sure if drawing a transparent quad on top of a transparent render target is the best way of doing it. Because I actually just need the position and params. Concluding: Can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • How to move objects smoothly, like swimming around

    - by philipp
    I have a Box2D project that creates a view where the user looks down from the sky onto water. Or perhaps onto a bathtub filled with water, or something like that. The object which holds the fluid does not actually matter; what matters is the movement of the bodies, because they should move like drops of grease on a soup, or wood on water. I can even imagine that the fluid is mercury-like: extremely heavy and "lazy". How can I manipulate the bodies (every frame, or from time to time) to make them move like this? I started by randomly manipulating their linear velocity, but it turned out that this is not very smooth and looks quite harsh. Is it a better idea to check their velocity and apply impulses? Is there any example? Greetings philipp
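
    As a sketch of one approach (Box2D 2.3-style C++ API assumed; the strength and damping numbers are placeholders to tune): give each body linear damping for the heavy, "lazy" feel, and push it with a small force whose direction changes slowly over time instead of re-randomising the velocity every frame.

        #include <Box2D/Box2D.h>
        #include <cmath>

        // Called once per body per frame; 'phase' is a per-body random constant so the
        // drops do not all drift in sync. Perlin noise would work even better than sin/cos.
        void UpdateDrift(b2Body* body, float time, float phase)
        {
            body->SetLinearDamping(2.0f);   // heavy fluid resists motion
            float strength = 0.5f;          // tune against the body's mass
            b2Vec2 force(strength * std::cos(0.3f * time + phase),
                         strength * std::sin(0.3f * time + phase));
            body->ApplyForceToCenter(force, true);
        }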

    Read the article

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is with the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way that I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image like in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and bring it back to my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added - this comes back and I have a stencil of the shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem that I am having is that I basically need to go through all positions in the texture 3 times, and use the new texture every time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?
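
    The round trips can be avoided by keeping every step on the GPU: render each pass into a RenderTarget2D and feed that target straight into the next pass as a texture. A sketch in XNA 4.0 style (distortEffect, shadowEffect, undistortEffect and the render targets are assumed names, not the poster's code):

        GraphicsDevice.SetRenderTarget(distortTarget);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, distortEffect);
        spriteBatch.Draw(occluderTexture, fullscreen, Color.White);
        spriteBatch.End();

        GraphicsDevice.SetRenderTarget(shadowTarget);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, shadowEffect);
        spriteBatch.Draw(distortTarget, fullscreen, Color.White);   // a RenderTarget2D is also a Texture2D
        spriteBatch.End();

        GraphicsDevice.SetRenderTarget(null);                       // back to the backbuffer
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, undistortEffect);
        spriteBatch.Draw(shadowTarget, fullscreen, Color.White);
        spriteBatch.End();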

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer) - the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] - too old, [Real-Time Rendering, Akenine-Moller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient multifragment effects on graphics processing units, Louis Frederic Bavoil] and some internet discussions. But there is not too much information there and I want to read more. It should cover communication and data transfers between the application, driver, memory and shader units. Also vertices and attributes, and the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Meaning of offset in pygame Mask.overlap methods

    - by Alan
    I have a situation in which two rectangles collide, and I have to detect how much they overlap so I can redraw the objects in a way that they are only touching each other's edges. It's a situation in which a moving ball should hit a completely unmovable wall and instantly stop moving. Since the ball sometimes moves multiple pixels per screen refresh, it is possible that it enters the wall with more than half its surface by the time the collision is detected, in which case I want to shift its position back to the point where it only touches the edge of the wall. Here is the conceptual image of it: I decided to implement this with masks, and thought that I could supply the masks of both objects (wall and ball) and get the surface (as a square) of their intersection. However, there is also the offset parameter, which I don't understand. Here are the docs for the method: Mask.overlap Returns the point of intersection if the masks overlap with the given offset - or None if it does not overlap. Mask.overlap(othermask, offset) -> x,y The overlap tests uses the following offsets (which may be negative): +----+----------.. |A | yoffset | +-+----------.. +--|B |xoffset | | : :
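
    The offset is simply where the other mask's top-left corner sits relative to this mask's top-left corner, in pixels. A short sketch (ball and wall are assumed to be sprites that already carry .rect and .mask attributes):

        # Offset of the wall's mask relative to the ball's mask.
        offset = (wall.rect.left - ball.rect.left,
                  wall.rect.top - ball.rect.top)

        hit_point = ball.mask.overlap(wall.mask, offset)       # None if they do not overlap
        if hit_point is not None:
            # overlap_area / overlap_mask can then tell you how deep the ball went,
            # which is what you need to push it back to the wall's edge.
            overlap_pixels = ball.mask.overlap_area(wall.mask, offset)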

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }
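
    When a light appears glued to the camera, the usual cause is that not every lighting input ends up in the same space. One thing worth checking in the snippet above: if Math's TransformNormal() returns its result rather than modifying eyeSpaceDirection in place, the direction silently stays in world space while everything else is in view space. For a rigid view matrix the two transforms reduce to the following (GLM used here purely as an illustration, since the original uses a custom Math:: library):

        // Position: full view matrix. Direction: rotation part of the view matrix only.
        glm::vec3 eyeSpacePosition  = glm::vec3(view * glm::vec4(lightPosition, 1.0f));
        glm::vec3 worldDirection    = glm::normalize(lightLookAt - lightPosition);
        glm::vec3 eyeSpaceDirection = glm::normalize(glm::mat3(view) * worldDirection);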

    Read the article

  • Why, on iOS, is glRenderbufferStorage appearing to fail?

    - by dugla
    On an iOS device (iPad) I decided to change the storage for my renderbuffer from the CAEAGLLayer that backs the view to explicit storage via glRenderbufferStorage. Sadly, the following code fails to result in a valid FBO. Can someone please tell me what I missed?: glGenFramebuffers(1, &m_framebuffer); glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer); glGenRenderbuffers(1, &m_colorbuffer); glBindRenderbuffer(GL_RENDERBUFFER, m_colorbuffer); GLsizei width = (GLsizei)layer.bounds.size.width; GLsizei height = (GLsizei)layer.bounds.size.height; glRenderbufferStorage(m_colorbuffer, GL_RGBA8_OES, width, height); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_colorbuffer); Note: The layer size is valid and correct. This is solid production working rendering code. The only change I am making is the line glRenderbufferStorage(...) previously I did: [m_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer] Thanks, Doug
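
    One thing that stands out in the snippet: the first parameter of glRenderbufferStorage is the target enum, not the renderbuffer name -- the renderbuffer is already bound at that point. A minimal corrected call, plus a completeness check (everything else left as in the question):

        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_colorbuffer);

        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            NSLog(@"Framebuffer incomplete: 0x%x", status);
        }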

    Read the article

  • Constructive criticism on my linear sampling Gaussian blur

    - by Aequitas
    I've been attempting to implement a gaussian blur utilising linear sampling, I've come across a few articles presented on the web and a question posed here which dealt with the topic. I've now attempted to implement my own Gaussian function and pixel shader drawing reference from these articles. This is how I'm currently calculating my weights and offsets: int support = int(sigma * 3.0) weights.push_back(exp(-(0*0)/(2*sigma*sigma))/(sqrt(2*pi)*sigma)); total += weights.back(); offsets.push_back(0); for (int i = 1; i <= support; i++) { float w1 = exp(-(i*i)/(2*sigma*sigma))/(sqrt(2*pi)*sigma); float w2 = exp(-((i+1)*(i+1))/(2*sigma*sigma))/(sqrt(2*pi)*sigma); weights.push_back(w1 + w2); total += 2.0f * weights[i]; offsets.push_back(w1 / weights[i]); } for (int i = 0; i < support; i++) { weights[i] /= total; } Here is an example of my vertical pixel shader: vec3 acc = texture2D(tex_object, v_tex_coord.st).rgb*weights[0]; vec2 pixel_size = vec2(1.0 / tex_size.x, 1.0 / tex_size.y); for (int i = 1; i < NUM_SAMPLES; i++) { acc += texture2D(tex_object, (v_tex_coord.st+(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; acc += texture2D(tex_object, (v_tex_coord.st-(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; } gl_FragColor = vec4(acc, 1.0); Am I taking the correct route with this? Any criticism or potential tips to improving my method would be much appreciated.
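
    For comparison, the standard linear-sampling reduction pairs texels i and i+1 into one bilinear fetch with weight w1 + w2 and offset (i*w1 + (i+1)*w2) / (w1 + w2) -- a texel distance, not a 0..1 fraction -- and each discrete texel should only appear in one pair. A sketch (gauss(x, sigma) stands in for the 1D Gaussian already used above):

        for (int i = 1; i <= support; i += 2) {               // step by 2: pairs (1,2), (3,4), ...
            float w1 = gauss(i, sigma);
            float w2 = gauss(i + 1, sigma);
            float w  = w1 + w2;
            weights.push_back(w);
            offsets.push_back((i * w1 + (i + 1) * w2) / w);   // weighted texel offset
            total += 2.0f * w;
        }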

    Read the article

  • Rotation based on x coordinate and x velocity?

    - by Lewis
    -(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration { float deceleration = 0.3f, sensitivity = 8.0f, maxVelocity = 150; // adjust velocity based on current accelerometer acceleration playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity; // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values) playerVelocity.x = fmaxf(fminf(playerVelocity.x, maxVelocity), -maxVelocity); } Hi, I want to rotate my sprite based on the velocity and accelerometer input. My sprite can move along the X axis like so: <--------- sprite ----------- But it always faces forwards, if it is moving left I want it to point slightly to the left, the degree of how far it is pointing to be judged from the velocity. This should also work for the right. I tried using atan but as the y velocity and position is always the same the function returns 0, which doesn't rotate it at all. Any ideas? Regards, Lewis.
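
    One simple option, sketched below with assumed names (playerSprite, maxTiltAngle): skip atan entirely and map the clamped x velocity straight to a lean angle, since the velocity already encodes both the direction and how hard the player is leaning.

        // cocos2d rotation is in degrees; negate the expression if the sprite leans the wrong way.
        float maxTiltAngle = 30.0f;   // lean at full speed -- an assumption, tune to taste
        playerSprite.rotation = (playerVelocity.x / maxVelocity) * maxTiltAngle;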

    Read the article

  • How to limit click'n'drag movement to an area?

    - by Vexille
    I apologize for the somewhat generic title. I really don't have much of a clue about how to accomplish what I'm trying to do, which makes it harder even to research a possible solution. I'm trying to implement a path marker of sorts (maybe there's a more suitable name for it, but this is the best I could come up with). In front of the player there will be a path marker, which will determine how the player will move once he finishes planning his turn. The player may click and drag the marker to the position they choose, but the marker can only be moved within a defined working area (the gray bit). So I'm now stuck with two problems: First of all, how exactly should I define that workable area? I can imagine maybe two vectors that have the player as a starting point to form the workable angle, and maybe those two arcs could come from circles that have their center where the player is, but I definitely don't know how to put this all together. And secondly, after I've defined the area where the marker can be placed, how can I enforce that the marker stays within that area? For example, if the player clicks and drags the marker around, it may move freely within the working area, but must not leave the boundaries of the area. So for example, if the player starts dragging the marker upwards, it will move upwards until it hits the end of the working area (first diagram below), but if after that the player starts dragging sideways, the marker must follow the drag while still within the area (second diagram below). I hope this wasn't all too confusing. Thanks, guys.
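
    One way to express both problems at once (a sketch with XNA-style math; minAngle, maxAngle, minRadius, maxRadius and the position names are assumptions): describe the working area in polar coordinates around the player -- an angular wedge between two radii -- and clamp the dragged point into that wedge every frame.

        Vector2 toMarker = dragPosition - playerPosition;

        float angle  = (float)Math.Atan2(toMarker.Y, toMarker.X);
        float radius = toMarker.Length();

        // Assumes the wedge does not straddle the -PI/+PI seam.
        angle  = MathHelper.Clamp(angle,  minAngle,  maxAngle);    // between the two bounding vectors
        radius = MathHelper.Clamp(radius, minRadius, maxRadius);   // between the inner and outer arcs

        markerPosition = playerPosition +
                         new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle)) * radius;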

    Read the article

  • rotating a sprite and shooting bullets from the end of a cannon

    - by Alberto
    Hi all i have a problem in my Andengine code, I need , when I touch the screen, shoot a bullet from the cannon (in the same direction of the cannon) The cannon rotates perfectly but when I touch the screen the bullet is not created at the end of the turret This is my code: private void shootProjectile(final float pX, final float pY){ int offX = (int) (pX-canon.getSceneCenterCoordinates()[0]); int offY = (int) (pY-canon.getSceneCenterCoordinates()[1]); if (offX <= 0) return ; if(offY>=0) return; double X=canon.getX()+canon.getWidth()*0,5; double Y=canon.getY()+canon.getHeight()*0,5 ; final Sprite projectile; projectile = new Sprite( (float) X, (float) Y, mProjectileTextureRegion,this.getVertexBufferObjectManager() ); mMainScene.attachChild(projectile); int realX = (int) (mCamera.getWidth()+ projectile.getWidth()/2.0f); float ratio = (float) offY / (float) offX; int realY = (int) ((realX*ratio) + projectile.getY()); int offRealX = (int) (realX- projectile.getX()); int offRealY = (int) (realY- projectile.getY()); float length = (float) Math.sqrt((offRealX*offRealX)+(offRealY*offRealY)); float velocity = (float) 480.0f/1.0f; float realMoveDuration = length/velocity; MoveModifier modifier = new MoveModifier(realMoveDuration,projectile.getX(), realX, projectile.getY(), realY); projectile.registerEntityModifier(modifier); } @Override public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) { if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_MOVE){ double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0]; double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1]; double Radius = Math.atan2(dy,dx); double Angle = Radius * 180 / Math.PI; canon.setRotation((float)Angle); return true; } else if (pSceneTouchEvent.getAction() == TouchEvent.ACTION_DOWN){ final float touchX = pSceneTouchEvent.getX(); final float touchY = pSceneTouchEvent.getY(); double dx = pSceneTouchEvent.getX() - canon.getSceneCenterCoordinates()[0]; double dy = pSceneTouchEvent.getY() - canon.getSceneCenterCoordinates()[1]; double Radius = Math.atan2(dy,dx); double Angle = Radius * 180 / Math.PI; canon.setRotation((float)Angle); shootProjectile(touchX, touchY); } return false; } Anyone know how to calculate the coordinates (X,Y) of the end of the barrel to draw the bullet?
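
    The muzzle position can be computed from the cannon's centre, its current rotation and the distance from the pivot to the muzzle. A sketch reusing the question's objects (barrelLength is an assumed constant for that distance):

        final float angleRad = (float) Math.toRadians(canon.getRotation());
        final float centerX = canon.getSceneCenterCoordinates()[0];
        final float centerY = canon.getSceneCenterCoordinates()[1];

        // One barrel length along the cannon's current facing.
        final float muzzleX = centerX + (float) Math.cos(angleRad) * barrelLength;
        final float muzzleY = centerY + (float) Math.sin(angleRad) * barrelLength;

        // Spawn the projectile centred on the muzzle.
        final Sprite projectile = new Sprite(
                muzzleX - mProjectileTextureRegion.getWidth() * 0.5f,
                muzzleY - mProjectileTextureRegion.getHeight() * 0.5f,
                mProjectileTextureRegion, this.getVertexBufferObjectManager());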

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems: 1. When I import a single animation frame from my animation file to Blender, all bones look fine. But when I import multiple (all of the frames), just the first one looks right, seems like newer frames are affected by older ones, so you get slightly off positions/rotations. This is true when both assigning PoseBone.matrix and PoseBone.matrix_basis. bone_index = 0 # for each frame: for frame_index in range(frame_count): # for each pose bone: add a key for bone_name in bone_names: # "bone_names" - a list of bone names I got earlier pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index] # "animation_matrices" - a nested list of matrices generated from reading a file # create the 'keys' for the Action from the poses pose.bones[bone_name].keyframe_insert('location', frame = frame_index+1) pose.bones[bone_name].keyframe_insert('rotation_euler', frame = frame_index+1) pose.bones[bone_name].keyframe_insert('scale', frame = frame_index+1) bone_index += 1 bone_index = 0 Again, it seems like previous frames are affecting latter ones, because if I import a single frame from the middle of the animation, it looks fine. 2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly whether the bones had a hierarchy or none at all. In Blender 2.6, there's PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to parent bone, matrix isn't, the API says it's in object space. So it should have worked, but doesn't. So I guess we need to calculate a local space matrix from our armature-space animation matrices from the files. So I tried multiplying it ( PoseBone.matrix ) with PoseBone.parent.matrix.inverted() in both possible orders with no luck, still weird deformations.

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants
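
    One concrete mismatch visible in the posted code (it may not be the only problem): the index array is declared as GLubyte, but glDrawElements is told the indices are GL_UNSIGNED_SHORT and the count is computed with sizeof(GLushort), so the draw call reads two bytes per index and walks past the data. Keeping the element type and count consistent would look like:

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject);
        int bufferSize = 0;
        glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize);
        glDrawElements(GL_TRIANGLES,
                       bufferSize / sizeof(GLubyte),   // 36 indices for the cube
                       GL_UNSIGNED_BYTE,               // matches the GLubyte array
                       0);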

    Read the article

  • How do engines avoid "Phase Lock" (multiple objects in same location) in a Physics Engine?

    - by C0M37
    Let me explain "phase lock" first: two objects of non-zero mass occupy the same space but have zero energy (no velocity). Do they bump forever with zero-velocity resolution vectors, or do they just stay locked together until an outside force interacts? In my home-brewed engine, I realized that if I loaded a character into a tree and moved them, they would signal a collision and hop back to their original spot. I suppose I could fix this by implementing impulses in the event of a collision instead of just jumping back to the last spot I was in (my implementation kind of sucks). But while I make my engine more robust, I'm just curious how most other physics engines handle this case. Do objects that start in the same spot with no movement speed just shoot out from each other in a random direction? Or do they sit there until something happens? Which option is generally the best approach?
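
    As a sketch of what most solvers do (generic C++, not taken from any particular engine; Vec2, invMass, contactNormal and penetrationDepth stand in for whatever the home-brewed engine already has): separation does not rely on velocity at all. Each step, overlapping bodies get a small positional correction along the contact normal, proportional to the penetration depth, so two bodies spawned inside each other drift apart over a few frames even with zero velocity.

        #include <algorithm>

        const float percent = 0.2f;   // fraction of the penetration corrected per step
        const float slop    = 0.01f;  // penetration tolerated before correcting (for stability)

        float invMassSum = a.invMass + b.invMass;
        if (invMassSum > 0.0f) {
            Vec2 correction = contactNormal *
                              (std::max(penetrationDepth - slop, 0.0f) / invMassSum * percent);
            a.position -= correction * a.invMass;   // static bodies (invMass == 0) never move
            b.position += correction * b.invMass;
        }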

    Read the article

  • New way of integrating Openfeint in Cocos2d-x 0.12.0

    - by Ef Es
    I am trying to implement OpenFeint for Android in my cocos2d-x project. My approach so far has been creating a button that calls a static java method in class Bridge using jnihelper functions (jnihelper only accepts statics). Bridge has one singleton attribute of type OFAndroid, that is the class dynamically calling the Openfeint Api methods, and every method in the bridge just forwards it to the OFAndroid object. What I am trying to do now is to initialize the openfeint libraries in the main java class that is the one calling the static C++ libraries. My problem right now is that the initializing function void com.openfeint.api.OpenFeint.initialize(Context ctx, OpenFeintSettings settings, OpenFeintDelegate delegate) is not accepting the context parameter that I am giving him, which is a "this" reference to the main class. Main class extends from Cocos2dxActivity but I don't have any other that extends from Application. Any suggestions on fixing it or how to improve the architecture? EDIT: I am trying a new solution. Make the bridge class into an Application child, is called from Main object, initializes OpenFeint when created and it can call the OpenFeint functions instead of needing an additional class. The problem is I still get the error. 03-30 14:39:22.661: E/AndroidRuntime(9029): Caused by: java.lang.NullPointerException 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.content.ContextWrapper.getPackageManager(ContextWrapper.java:85) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.validateManifest(OpenFeintInternal.java:885) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initializeWithoutLoggingIn(OpenFeintInternal.java:829) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.internal.OpenFeintInternal.initialize(OpenFeintInternal.java:852) 03-30 14:39:22.661: E/AndroidRuntime(9029): at com.openfeint.api.OpenFeint.initialize(OpenFeint.java:47) 03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.NuroFeint.onCreate(NuroFeint.java:23) 03-30 14:39:22.661: E/AndroidRuntime(9029): at nurogames.fastfish.FastFish.onCreate(FastFish.java:47) 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1069) 03-30 14:39:22.661: E/AndroidRuntime(9029): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2751)

    Read the article

  • problem adding bumpmap to textured gluSphere in JOGL

    - by ChocoMan
    I currently have one texture on a gluSphere that represents the Earth being displayed perfectly, but having trouble figuring out how to implement a bumpmap as well. The bumpmap resides in "res/planet/earth/earthbump1k.jpg".Here is the code I have for the regular texture: gl.glTranslatef(xPath, 0, yPath + zPos); gl.glColor3f(1.0f, 1.0f, 1.0f); // base color for earth earthGluSphere = glu.gluNewQuadric(); colorTexture.enable(); // enable texture colorTexture.bind(); // bind texture // draw sphere... glu.gluDeleteQuadric(earthGluSphere); colorTexture.disable(); // texturing public void loadPlanetTexture(GL2 gl) { InputStream colorMap = null; try { colorMap = new FileInputStream("res/planet/earth/earthmap1k.jpg"); TextureData data = TextureIO.newTextureData(colorMap, false, null); colorTexture = TextureIO.newTexture(data); colorTexture.getImageTexCoords(); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorMap.close(); } catch(IOException e) { e.printStackTrace(); System.exit(1); } // Set material properties gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T); } How would I add the bumpmap as well to the same gluSphere?

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
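
    A common fix for visible seams, sketched below as an assumption rather than a known answer to this exact case: after filling the lightmap, dilate (bleed) each written texel's colour into the empty gutter texels around it, repeating roughly as many times as the gutter width passed to D3DXUVAtlasCreate(), so bilinear filtering at chart borders blends chart colour with chart colour instead of with the black background. This assumes unwritten texels can be recognised, e.g. by clearing the surface to alpha 0 before BuildLightmap() writes alpha 255.

        #include <cstring>
        #include <vector>

        void DilateLightmap(BYTE* texels, int width, int height, int pitch)
        {
            std::vector<BYTE> src(texels, texels + pitch * height);
            for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                BYTE* dst = &texels[y * pitch + x * 4];
                if (dst[3] != 0) continue;                          // already lit, leave it
                for (int dy = -1; dy <= 1 && dst[3] == 0; ++dy)
                for (int dx = -1; dx <= 1 && dst[3] == 0; ++dx)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    const BYTE* n = &src[ny * pitch + nx * 4];
                    if (n[3] != 0) std::memcpy(dst, n, 4);          // copy the neighbour's colour
                }
            }
        }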

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects only occupy an insignificant amount of memory. How much is that, actually? I don't know if the stuff I looked up in Mesa is actually what I was looking for, but it suggests 16 KB [Edit: false, I confused struct names; less than 1 KB immediately, plus some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set each scene object's custom variables just before its draw call?

    Read the article

  • importing BaseGameUtils library

    - by David
    Hey :) I am trying to add the BaseGameUtils library to my workspace. I am using this guide: https://developers.google.com/games/services/android/init and I downloaded the BaseGameUtils sample from here: https://developers.google.com/games/services/downloads/ But when I try to import it using Eclipse, it gives me the wrong things, like Main and MainActivity, and not the real BaseGameUtils. What is wrong here?

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: Do triangle strips and Tangent Space Normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent space matrices in a vertex program and transform the light direction vector(s) to tangent space and then pass them on to a fragment program. However, if one was using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive tangent space matrices to interpolate between in fragment programs. Is there any reasonable way to not have duplicate vertices in your buffer and still use tangent space normal mapping? Which one do you think is better: Having normal and tangent encoded in the assets and just optimize the geometry handling to alleviate the cost of duplicate vertices or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?

    Read the article

  • Checkers AI in Visual Basic not working [on hold]

    - by Eugene Galkine
    I am trying to make checkers in Visual Basic with AI. I am using the minimax algorithm (or at least what I understand of it) and it works, except the AI plays as if it is trying to lose, and when I tried to switch around the min and the max the results were IDENTICAL. I am frustrated and have been trying to fix it for over a week now; I would really appreciate it if someone could help me out here. I have 3 years of programming experience (in Java; only about a month of VB experience) and I am always able to solve my errors on my own, so I don't know why I can't get this to work. The program is not at all optimized at this point and is over 1.2K lines long, so here is the entire VB project instead: https://www.dropbox.com/sh/evii0jendn93ir2/9fntwH2dNW I would really appreciate any help I could get.
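
    One classic reason why swapping min and max changes nothing: the evaluation function scores the board the same way regardless of whose turn it is, so the negation (or the min/max alternation) has nothing to act on. A side-relative negamax sketch in VB.NET for comparison (Board, Evaluate, LegalMoves, Apply and IsGameOver are hypothetical names):

        ' The score is always from the point of view of the player to move (+1 or -1),
        ' and is negated as it comes back up the tree.
        Function Negamax(board As Board, depth As Integer, player As Integer) As Integer
            If depth = 0 OrElse board.IsGameOver() Then
                Return player * Evaluate(board)
            End If
            Dim best As Integer = -1000000      ' effectively -infinity
            For Each m In board.LegalMoves(player)
                best = Math.Max(best, -Negamax(board.Apply(m), depth - 1, -player))
            Next
            Return best
        End Function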

    Read the article
