Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Need help with dragging a sprite with animation (Cocos2d)

    - by Zishan
    I want to play an animation when someone drags a sprite from its default position to a selected position. If he drags halfway to the selected position, the animation should play halfway through. For example: I have 15 frames of an animation and a projectile arm. The projectile arm can be rotated a maximum of 30°; if someone rotates the arm 2°, the animation sprite should show the 2nd frame, if 12° then the 6th frame... and so on. Also, when he releases the arm, the arm should return to its default position, and the animation frames should also play back in reverse to the default first frame. I am new to cocos2d. I know how to make an animation and how to drag a sprite, but I have no idea how to do this. Can anyone please give me an idea or a tutorial on how to do this? It would be very helpful for me. Thank you in advance.
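
    A minimal sketch of the angle-to-frame mapping described above, assuming a 30° range spread over 15 preloaded frames; frameForAngle and its parameters are illustrative names, not from the question (C++, as in cocos2d-x):

        #include <algorithm>
        #include <cmath>

        // Map the arm's current rotation onto an animation frame index.
        // With 15 frames over a 30-degree range, each frame covers 2 degrees.
        int frameForAngle(float armAngleDegrees, float maxAngleDegrees, int frameCount)
        {
            float t = armAngleDegrees / maxAngleDegrees;         // 0..1 progress of the drag
            int frame = (int)std::lround(t * (frameCount - 1));  // scale onto the frame range
            return std::clamp(frame, 0, frameCount - 1);         // guard against over-rotation
        }

    Calling this from the touch-move handler (and again, with decreasing angles, while animating the arm back on release) selects which frame to display.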

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light with falloff and attenuation. It seems to be working OK, except I have a bit of a space problem: whenever I move the camera, the light moves to maintain the same relative position rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces, as if a flashlight were attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: my camera rotating around a point inside a box, with the light fixed in the centre-up position and its "look at" point fixed in front of it. As the camera rotates around the origin (always looking at the centre), the lighting follows it around, even though the box itself is not rotating(!).

    To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

        // Compute eye-space light position.
        Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
        MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

        // Compute eye-space light direction vector.
        Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
        MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
        MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here is the vertex shader:

        ///////////////////////////////////////////////////
        // Vertex Shader
        ///////////////////////////////////////////////////

        #version 420

        // Camera.
        layout (std140) uniform Camera
        {
            mat4 Camera_View;
            mat4 Camera_ViewInverseTranspose;
            mat4 Camera_Projection;
        };

        // Matrices per model.
        layout (std140) uniform Model
        {
            mat4 Model_World;
            mat4 Model_WorldView;
            mat4 Model_WorldViewInverseTranspose;
            mat4 Model_WorldViewProjection;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Streams (per vertex).
        layout(location = 0) in vec3 attrib_Position;
        layout(location = 1) in vec3 attrib_Normal;
        layout(location = 2) in vec3 attrib_Tangent;
        layout(location = 3) in vec3 attrib_BiNormal;
        layout(location = 4) in vec2 attrib_Texture;

        // Output streams (per vertex).
        out vec3 attrib_Fragment_Normal;
        out vec4 attrib_Fragment_Position;
        out vec2 attrib_Fragment_Texture;
        out vec3 attrib_Fragment_Light;
        out vec3 attrib_Fragment_Eye;

        void main()
        {
            // Transform normal into eye space.
            attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

            // Transform vertex into eye space (world * view * vertex = eye).
            vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

            // Compute vector from the eye-space vertex to the light (the light is in eye space already).
            attrib_Fragment_Light = Light_Position - position.xyz;

            // Compute vector from the vertex to the eye (which is now at the origin).
            attrib_Fragment_Eye = -position.xyz;

            // Output texture coord.
            attrib_Fragment_Texture = attrib_Texture;

            // Compute vertex position by applying the camera projection.
            gl_Position = Camera_Projection * position;
        }

    And the pixel shader:

        ///////////////////////////////////////////////////
        // Pixel Shader
        ///////////////////////////////////////////////////

        #version 420

        // Samplers.
        uniform sampler2D Map_Diffuse;

        // Material.
        layout (std140) uniform Material
        {
            vec4  Material_Ambient_Colour;
            vec4  Material_Diffuse_Colour;
            vec4  Material_Specular_Colour;
            vec4  Material_Emissive_Colour;
            float Material_Shininess;
            float Material_Strength;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Input streams (per vertex).
        in vec3 attrib_Fragment_Normal;
        in vec3 attrib_Fragment_Position;
        in vec2 attrib_Fragment_Texture;
        in vec3 attrib_Fragment_Light;
        in vec3 attrib_Fragment_Eye;

        // Result.
        out vec4 Out_Colour;

        void main(void)
        {
            // Compute N dot L.
            vec3 N = normalize(attrib_Fragment_Normal);
            vec3 L = normalize(attrib_Fragment_Light);
            vec3 E = normalize(attrib_Fragment_Eye);
            vec3 H = normalize(L + E);
            float NdotL = clamp(dot(L, N), 0.0, 1.0);
            float NdotH = clamp(dot(N, H), 0.0, 1.0);

            // Compute ambient term.
            vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

            // Diffuse.
            vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

            // Specular.
            float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
            vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

            // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
            float d = length(-attrib_Fragment_Light);
            float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

            // Adjust attenuation based on the light cone.
            float LdotS = dot(-L, Light_Direction);
            float CosI  = Light_Cone_Min - Light_Cone_Max;
            attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

            // Final colour.
            Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
        }
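
    For reference, a minimal glm-based sketch (not the asker's code) of the usual way to move a world-space light into view space: the position is transformed as a point (w = 1) and the direction as a vector (w = 0), both with the view matrix of the frame currently being rendered; using a stale or wrong-space matrix for either one makes the light appear glued to the camera:

        #include <glm/glm.hpp>

        // Point: translation applies, so w = 1.
        glm::vec3 lightPositionToView(const glm::mat4& view, const glm::vec3& lightPosWorld)
        {
            return glm::vec3(view * glm::vec4(lightPosWorld, 1.0f));
        }

        // Direction: translation must not apply, so w = 0.
        glm::vec3 lightDirectionToView(const glm::mat4& view,
                                       const glm::vec3& lightPosWorld,
                                       const glm::vec3& lightLookAtWorld)
        {
            glm::vec3 dirWorld = glm::normalize(lightLookAtWorld - lightPosWorld);
            return glm::normalize(glm::vec3(view * glm::vec4(dirWorld, 0.0f)));
        }

    Both values must be recomputed and re-uploaded every time the camera moves, before the draw call.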

  • Bullet Physics: Transform body after adding

    - by Mathias Hölzl
    I would like to transform a rigid body after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag, I am able to transform it, but it's static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag, the transform doesn't get applied. So how do I transform non-static objects in Bullet? Demo code:

        btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1, SCALING*1, SCALING*1));

        // Create dynamic objects.
        btTransform startTransform;
        startTransform.setIdentity();

        btScalar mass(1.f);

        // A rigid body is dynamic if and only if mass is non-zero, otherwise static.
        bool isDynamic = (mass != 0.f);

        btVector3 localInertia(0, 0, 0);
        if (isDynamic)
            colShape->calculateLocalInertia(mass, localInertia);

        btDefaultMotionState* myMotionState = new btDefaultMotionState();
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, colShape, localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);
        body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        body->setActivationState(DISABLE_DEACTIVATION);
        m_dynamicsWorld->addRigidBody(body);

        startTransform.setOrigin(SCALING * btVector3(btScalar(0), btScalar(20), btScalar(0)));
        body->getMotionState()->setWorldTransform(startTransform);
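
    A common approach for moving a dynamic (non-kinematic) body after insertion, sketched below under the assumption of the standard Bullet API: for dynamic bodies the motion state is only read when the body is added, so the transform has to be written to the body itself, and the body must be woken so gravity and collisions apply again:

        // Sketch: teleport an already-added dynamic body (no CF_KINEMATIC_OBJECT flag).
        btTransform t;
        t.setIdentity();
        t.setOrigin(btVector3(0, 20, 0));

        body->setWorldTransform(t);                    // move the body itself
        body->getMotionState()->setWorldTransform(t);  // keep the motion state in sync
        body->setLinearVelocity(btVector3(0, 0, 0));   // optional: drop any old momentum
        body->setAngularVelocity(btVector3(0, 0, 0));
        body->activate(true);                          // wake it so the solver picks it up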

  • Aw, Snap! in Google Chrome [on hold]

    - by D. S. Schneider
    Just wondering if anyone else is experiencing the "Aw, Snap!" bug in Google Chrome. I'm developing a brand-new engine which occasionally triggers this bug, and as far as I know there's nothing one can do to find out what actually triggered the issue. I've also tested my engine with Firefox, which runs just fine. Anyway, I just wanted to know if someone else is facing this while developing games for Google Chrome and has a clue about what can be done to avoid it. I'm using plain JavaScript and the audio and canvas elements from HTML5. Thanks!

  • Is it possible to construct a cube with fewer than 24 vertices?

    - by Telanor
    I have a cube-based world like Minecraft, and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me, for 2 reasons: the normals wouldn't come out right, and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.
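
    A sketch of why the usual count is 24: each vertex carries exactly one normal and one texture index, and both are constant per face, so each of the 8 corner positions has to be duplicated once per adjacent face (the struct and field names here are illustrative):

        struct CubeVertex
        {
            float position[3];  // only 8 unique corner positions exist...
            float normal[3];    // ...but the normal differs per face,
            float texLayer;     // as does the texture-array layer,
        };                      // so corners cannot be shared across faces.

        // 6 faces x 4 corners = 24 vertices.
        // An index buffer still helps: 6 faces x 2 triangles x 3 = 36 indices into those 24.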

  • TileMapRenderer in libGDX not drawing anything

    - by Benjamin Dengler
    So I followed the tutorial on the libGDX wiki for drawing Tiled maps, but it doesn't seem to render anything. Here's how I set up my OrthographicCamera and load the map:

        camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        map = TiledLoader.createMap(Gdx.files.internal("maps/test.tmx"));
        atlas = new TileAtlas(map, Gdx.files.internal("maps"));
        tileMapRenderer = new TileMapRenderer(map, atlas, 8, 8);

    And here is my render function:

        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        camera.update();
        tileMapRenderer.render(camera);

    I also packed the tile map using the TiledMapPacker. I'm completely stumped... am I missing anything obvious here?

    EDIT: While debugging I noticed that the TileAtlas seems to be empty, which I guess shouldn't be the case, but I have no idea why it's empty.

  • Component-wise GLSL vector branching

    - by Gustavo Maciel
    I'm aware that it is usually a BAD idea to operate on a GLSL vec's components separately. For example:

        // Use intrinsic functions, they do the calculation on 4 components at a time.
        float dot = v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;    // NEVER
        float dot = dot(v1, v2);                          // YES

        // Multiplying one by one is not good either, since the ALU can do the 4 components at a time.
        vec3 mul = vec3(v1.x*v2.x, v1.y*v2.y, v1.z*v2.z); // NEVER
        vec3 mul = v1 * v2;                               // YES

    I've been struggling to work out: are there equivalent operations for branching? For example:

        vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity)
        {
            bvec4 less = lessThan(v1, vec4(0.5));
            vec4 blend;

            for (int i = 0; i < 4; ++i)
            {
                if (less[i])
                    blend[i] = 2.0 * v1[i]*v2[i];
                else
                    blend[i] = 1.0 - 2.0 * (1.0 - v1[i])*(1.0 - v2[i]);
            }

            return v1 + (blend - v1) * opacity;
        }

    This is an Overlay operator that works component-wise. I'm not sure this is the best way to do it, since I'm afraid the for and the if can become a bottleneck later. Tl;dr: can I branch component-wise? If yes, how can I optimize that Overlay function with it?
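
    One branchless equivalent: evaluate both branches for all four components, then select per component with mix() and a bvec4, avoiding the loop and the if entirely. Sketched here in C++ with glm, whose mix/lessThan mirror the GLSL built-ins one-to-one (including the mix overload that selects from a boolean vector), so the GLSL version reads the same:

        #include <glm/glm.hpp>
        using glm::vec4;

        vec4 overlay(vec4 v1, vec4 v2, vec4 opacity)
        {
            // Compute both branch results for all components at once...
            vec4 low  = 2.0f * v1 * v2;
            vec4 high = 1.0f - 2.0f * (1.0f - v1) * (1.0f - v2);

            // ...then pick per component: mix(x, y, b) yields y where b is true.
            vec4 blend = glm::mix(high, low, glm::lessThan(v1, vec4(0.5f)));
            return v1 + (blend - v1) * opacity;
        }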

  • Generating geometry when using VBO

    - by onedayitwillmake
    Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays. I am interested in switching to a shader-based model; in the examples I usually see, the VBO is generated at start and then that's basically it. What is the best route to go about creating geometry in real time using a shader/VBO approach?
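
    One common pattern for this (a sketch; maxVertices, Vertex, and vertices are illustrative names): allocate the VBO once at the trail's maximum size with a streaming usage hint, then each frame orphan the old storage and re-upload the live vertex range, so the driver never stalls waiting for the GPU to finish with the previous frame's data:

        // One-time setup: reserve space for the longest possible trail.
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(Vertex), nullptr, GL_STREAM_DRAW);

        // Per frame: orphan, upload the current contents of the std::vector, draw.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(Vertex), nullptr, GL_STREAM_DRAW); // orphan
        glBufferSubData(GL_ARRAY_BUFFER, 0, vertices.size() * sizeof(Vertex), vertices.data());
        glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)vertices.size());

    glMapBufferRange with GL_MAP_INVALIDATE_BUFFER_BIT is an equivalent alternative to the orphan-plus-glBufferSubData pair.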

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and a fragment shader. The shaders work fine as-is, but in trying to add texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0% and 4%. When adding texturing (specifically textures AND colour, noted by the comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code in the shader and the glBindTexture() calls in the rendering code. Here are my shaders and the relevant rendering code.

    Vertex shader:

        #version 150

        uniform mat4 mvMatrix;
        uniform mat4 mvpMatrix;
        uniform mat3 normalMatrix;
        uniform vec4 lightPosition;
        uniform float diffuseValue;

        layout(location = 0) in vec3 vertex;
        layout(location = 1) in vec3 color;
        layout(location = 2) in vec3 normal;
        layout(location = 3) in vec2 texCoord;

        smooth out VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertOut;

        void main(void)
        {
            gl_Position = mvpMatrix * vec4(vertex, 1.0);
            VertOut.normal = normalize(normalMatrix * normal);
            VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position));
            VertOut.color = color;
            VertOut.diffuseValue = diffuseValue;
            VertOut.texCoord = texCoord;
        }

    Fragment shader:

        #version 150

        smooth in VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertIn;

        uniform sampler2D tex;

        layout(location = 0) out vec3 colorOut;

        void main(void)
        {
            float diffuseComp = max(dot(normalize(VertIn.normal), normalize(VertIn.toLight)), 0.0);
            vec4 color = texture2D(tex, VertIn.texCoord);
            colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue);

            // FOLLOWING LINE CAUSES PERFORMANCE ISSUES
            colorOut *= VertIn.color;
        }

    Relevant rendering code:

        // 3 textures have been successfully pre-loaded and can be used;
        // texture[0] is a 1x1 white texture to effectively turn off texturing.
        glUseProgram(program);

        // Draw squares
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_QUADS, 0, 6*4);

        // Draw triangles
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 3*4);

        // Draw reference planes
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_LINES, 0, 4*81*2);

        // Draw terrain
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 501*501*6);

        // Release
        glBindTexture(GL_TEXTURE_2D, 0);
        glUseProgram(0);

    Any help is greatly appreciated!

  • Physics timestep questions

    - by SSL
    I've got a projectile working perfectly using the code below:

        // Initialised in the loading screen; 60 is the FPS.
        // projectilePosition and projectileVelocity are Vector3 types.
        gravity = new Vector3(0, -(float)9.81 / 60, 0);

        // Called every frame:
        projectilePosition += projectileVelocity;

    This seems to work fine, but I've noticed in various projectile examples that the elapsed time per update is taken into account. What's the difference between the two, and how can I convert the above to take the elapsed time into account? (I'm using XNA; do I use ElapsedTime.TotalSeconds or TotalMilliseconds?)

    Edit: Forgot to add my attempt at using the elapsed time, which seemed to break the physics:

        projectileVelocity.Y += -(float)((9.81 * gameTime.ElapsedGameTime.TotalSeconds * gameTime.ElapsedGameTime.TotalSeconds) * 0.5f);

    Thanks for the help.
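
    For reference, a sketch of the frame-rate-independent version (semi-implicit Euler), written here in C++ with glm for the vector type; in XNA the same shape applies with Vector3 and dt = (float)gameTime.ElapsedGameTime.TotalSeconds, since the constants are in seconds. Gravity stays in units per second squared instead of being pre-divided by 60; velocity integrates acceleration, and position integrates velocity:

        #include <glm/glm.hpp>

        glm::vec3 projectilePosition(0.0f);
        glm::vec3 projectileVelocity(0.0f);
        const glm::vec3 gravity(0.0f, -9.81f, 0.0f);  // units per second^2, not per frame

        // Call once per update with the elapsed time in seconds.
        void update(float dt)
        {
            projectileVelocity += gravity * dt;             // integrate acceleration first
            projectilePosition += projectileVelocity * dt;  // then integrate velocity
        }

    At a fixed 60 FPS (dt = 1/60) this reduces to the same per-frame step as above, but it stays correct when the frame rate varies.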

  • Who owns the intellectual property to Fragile Allegiance?

    - by analytik
    Fragile Allegiance was developed by Gremlin Interactive, which was later acquired by Infogrames (Atari), though I couldn't find any details of the acquisition. The only interesting thing I have found online is that the owner of the registered trademark Fragile Allegiance is Interplay, who published Fragile Allegiance. However, the only copyright note I've found was in one installation .ini file, claiming it for Gremlin. What are the common business practices when it comes to old, unused IPs? What do publishers/developers actually need to legally claim an intellectual property? Does anyone have any experience with contacting big publishers with copyright/IP inquiries? Related legal question.

  • 2D object-aligned bounding-box intersection test

    - by AshleysBrain
    Hi all, I have two object-aligned bounding boxes (i.e. not axis-aligned; they rotate with the object). I'd like to know if the two boxes overlap. (Edit: note, I'm using an axis-aligned bounding-box test to quickly discard distant objects, so it doesn't matter if the quad routine is a little slower.) My boxes are stored as four x,y points. I've searched around for answers, but I can't make sense of the variable names and algorithms in the examples well enough to apply them to my particular case. Can someone show me how this would be done, in a clear and simple way? Thanks. (The particular language isn't important; C-style pseudo code is OK.)
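
    A C-style sketch of the usual answer for convex quads, the separating axis theorem (this is my illustration, not code from the question; corners are assumed to be stored in order around the quad): for each of the two unique edge directions of each box, project all corners of both boxes onto the perpendicular axis; the boxes overlap exactly when no axis separates the projected intervals:

        typedef struct { float x, y; } Vec2;

        static float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        /* Project the 4 corners of a box onto an axis, recording the min/max interval. */
        static void Project(const Vec2 box[4], Vec2 axis, float* lo, float* hi)
        {
            *lo = *hi = Dot(box[0], axis);
            for (int i = 1; i < 4; ++i) {
                float p = Dot(box[i], axis);
                if (p < *lo) *lo = p;
                if (p > *hi) *hi = p;
            }
        }

        int ObbIntersect(const Vec2 a[4], const Vec2 b[4])
        {
            const Vec2* boxes[2] = { a, b };
            for (int bi = 0; bi < 2; ++bi) {
                for (int e = 0; e < 2; ++e) {  /* 2 unique edge directions per rectangle */
                    Vec2 edge = { boxes[bi][e + 1].x - boxes[bi][e].x,
                                  boxes[bi][e + 1].y - boxes[bi][e].y };
                    Vec2 axis = { -edge.y, edge.x };  /* perpendicular candidate axis */
                    float aLo, aHi, bLo, bHi;
                    Project(a, axis, &aLo, &aHi);
                    Project(b, axis, &bLo, &bHi);
                    if (aHi < bLo || bHi < aLo)
                        return 0;  /* separating axis found: no overlap */
                }
            }
            return 1;  /* no separating axis among the 4 candidates: overlap */
        }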

  • Draw multiple objects with textures

    - by Simplex
    I want to draw cubes using textures.

        void OperateWithMainMatrix(ESContext* esContext, GLfloat offsetX, GLfloat offsetY, GLfloat offsetZ)
        {
            UserData *userData = (UserData*) esContext->userData;
            ESMatrix modelview;
            ESMatrix perspective;

            // Manipulation with matrix
            ...

            // cubeFaces holds the cube's vertex coordinates
            glVertexAttribPointer(userData->positionLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces);
            // normals (used in the fragment shader for texturing)
            glVertexAttribPointer(userData->normalLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces);
            glEnableVertexAttribArray(userData->positionLoc);
            glEnableVertexAttribArray(userData->normalLoc);

            // Load the MVP matrix
            glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE, (GLfloat*)&userData->mvpMatrix.m[0][0]);

            // Bind base map
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_CUBE_MAP, userData->baseMapTexId);

            // Set the base map sampler to texture unit 0
            glUniform1i(userData->baseMapLoc, 0);

            // Draw the cube
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

    (The coordinate transformation happens in OperateWithMainMatrix().) Then the Draw() function is called:

        void Draw(ESContext *esContext)
        {
            UserData *userData = esContext->userData;

            // Set the viewport
            glViewport(0, 0, esContext->width, esContext->height);

            // Clear the color buffer
            glClear(GL_COLOR_BUFFER_BIT);

            // Use the program object
            glUseProgram(userData->programObject);

            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);

            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    This works fine, but if I try to draw multiple cubes (the next code, for example):

        void Draw(ESContext *esContext)
        {
            ...
            // Use the program object
            glUseProgram(userData->programObject);

            OperateWithMainMatrix(esContext,  2.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext,  1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext,  0.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -2.0f, 0.0f, 0.0f);

            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    the side faces overlap the front faces: the side face of the right cube overlaps the front face of the center cube. How can I remove this effect and display multiple cubes without it?
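
    The symptom (near faces being overdrawn by far ones) is the classic sign of rendering without a depth test; note that the Draw function above clears only the color buffer. A sketch of the standard GL ES 2.0 fix, assuming the context was created with a depth buffer (with the esUtil framework that is the ES_WINDOW_DEPTH flag to esCreateWindow; in raw EGL, a nonzero EGL_DEPTH_SIZE in the config attributes):

        // Once at initialisation:
        glEnable(GL_DEPTH_TEST);  // reject fragments that lie behind already-drawn ones
        glDepthFunc(GL_LESS);     // the default comparison, shown for clarity

        // Every frame, clear depth together with color:
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);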

  • Pathfinding in multi goal, multi agent environment

    - by Rohan Agrawal
    I have an environment in which I have multiple agents (a), multiple goals (g) and obstacles (o):

        . . . a o . . .
        . . . . o . g .
        . a . . . . . .
        . . . . o . . .
        . o o o o . g .
        . o . . . . . .
        . o . . . . o .
        . . . o o o o a

    What would be an appropriate algorithm for pathfinding in this environment? The only thing I can think of right now is to run a separate version of A* for each goal, but I don't think that's very efficient.
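
    One alternative to per-goal A* (a sketch of a standard technique, not from the question): flood the grid once from all goals simultaneously with a multi-source BFS, producing a distance field; each agent then just steps to the neighbouring cell with the smallest distance. One pass serves every agent, however many there are:

        #include <queue>
        #include <vector>

        // grid: 0 = free, 1 = obstacle, row-major w x h. Returns -1 for unreachable cells.
        std::vector<int> goalDistanceField(const std::vector<int>& grid, int w, int h,
                                           const std::vector<int>& goalCells)
        {
            std::vector<int> dist(grid.size(), -1);
            std::queue<int> frontier;
            for (int g : goalCells) {  // seed every goal at distance 0
                dist[g] = 0;
                frontier.push(g);
            }
            const int dx[4] = { 1, -1, 0, 0 };
            const int dy[4] = { 0, 0, 1, -1 };
            while (!frontier.empty()) {
                int c = frontier.front();
                frontier.pop();
                int cx = c % w, cy = c / w;
                for (int i = 0; i < 4; ++i) {
                    int nx = cx + dx[i], ny = cy + dy[i];
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    int n = ny * w + nx;
                    if (grid[n] == 1 || dist[n] != -1) continue;
                    dist[n] = dist[c] + 1;  // first visit in BFS is the shortest path
                    frontier.push(n);
                }
            }
            return dist;  // agents walk "downhill" on this field to their nearest goal
        }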

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects occupy only an insignificant amount of memory. How much is this actually? I don't know if the stuff I looked up in Mesa is actually what I was looking for, but it suggested 16 KB [Edit: false; I was confused by the struct names. It's less than 1 KB immediately, with some more behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object in the scene? Or to share a single program object and set the scene objects' custom variables just before their draw calls?

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    Broadphase: loops over all pairs of objects that could interact; false positives are allowed if they speed up the loop.
    Narrowphase: determines whether they collide, and sometimes how; no false positives.
    Resolution: resolves the collision.

    The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy:

    Hitbox intersection: an a-posteriori algorithm with the lowest complexity, but not very accurate.
    Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, but not accurate with respect to time, and higher complexity.
    Separating axis theorem: used more often; accurate for triangles, but a-posteriori; since it can't find the contact edge, taking the last frame into account makes it more stable.
    Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity.
    Spline interpolation: a-priori, even more accurate than linear rays, with even more complexity.

    There are probably many more that I've forgotten. The question is: when is it better to use SAT, when rays, and when splines, and is there anything better?

  • How to move Objects smoothly like swimming around

    - by philipp
    I have a Box2D project that is about to get a view where the user looks from the sky down onto water, or perhaps onto a bathtub filled with water or something like that. The object that holds the fluid doesn't actually matter; what matters is the movement of the bodies. They should move like drops of grease on a soup, or wood on water; I can even imagine the fluid being mercurial, extremely heavy and "lazy". How can I manipulate the bodies (every frame, or from time to time) to make them move like this? I started by randomly manipulating their linear velocity, but it turned out that this is not very smooth and looks quite harsh. Is it a better idea to check their velocity and apply impulses? Is there any example? Greetings philipp
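
    A sketch of the impulse idea from the question, assuming a Box2D 2.3-style API and made-up constants: give each body heavy linear damping so it moves as if in a thick fluid, then apply small random impulses occasionally (not every frame), letting the damping ease the body back to rest between nudges:

        #include <Box2D/Box2D.h>
        #include <cstdlib>

        static float randRange(float lo, float hi)
        {
            return lo + (hi - lo) * (float)std::rand() / (float)RAND_MAX;
        }

        // Once per floating body: damping makes the motion lazy and syrupy.
        void setupFloater(b2Body* body)
        {
            body->SetLinearDamping(2.0f);
            body->SetAngularDamping(1.0f);
        }

        // Every second or two per body: a gentle, mass-scaled random nudge.
        void nudge(b2Body* body)
        {
            b2Vec2 impulse(randRange(-0.05f, 0.05f) * body->GetMass(),
                           randRange(-0.05f, 0.05f) * body->GetMass());
            body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);
        }

    Tuning the damping against the impulse strength controls how "heavy" the fluid feels.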

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the ground. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into grids and render them as is done in 2D platform games. I just want to know if this is a good idea and what may go wrong if I don't choose BSP-tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.

  • Why does glGetString return a NULL string

    - by snape
    I am trying my hand at the GLFW library. I have written a basic program to get the OpenGL renderer and vendor strings. Here is the code:

        #include <GL/glew.h>
        #include <GL/glfw.h>
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        using namespace std;

        void shutDown(int returnCode)
        {
            printf("There was an error in running the code with error %d\n", returnCode);
            GLenum res = glGetError();
            const GLubyte *errString = gluErrorString(res);
            printf("Error is %s\n", errString);
            glfwTerminate();
            exit(returnCode);
        }

        int main()
        {
            // start GL context and O/S window using the GLFW helper library
            if (glfwInit() != GL_TRUE)
                shutDown(1);

            if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE)
                shutDown(2);

            // start GLEW extension handler
            glewInit();

            // get version info
            const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
            const GLubyte* version = glGetString(GL_VERSION);   // version as a string
            printf("Renderer: %s\n", renderer);
            printf("OpenGL version supported %s\n", version);

            // close GL context and any other GLFW resources
            glfwTerminate();
            return 0;
        }

    I googled this error and found out that the OpenGL context has to be initialized before calling glGetString(). Although I have initialized the OpenGL context using glfwInit(), the function still returns a NULL string. Any ideas?

    Edit: I have updated the code with error-checking mechanisms. On running, this code outputs the following:

        There was an error in running the code with error 2
        Error is no error

  • Unable to create xcode project again from unity

    - by Engr Anum
    I am using Unity 4.5.3. I already created/built an Xcode project from Unity. However, there were some strange linking errors I was unable to solve, so I decided to delete my Xcode project and rebuild it from Unity. Unfortunately, whenever I try to build the project, just an empty project folder is created; there is nothing inside it. I don't know why this is happening. Please tell me how I can create the Xcode project again. Maybe I am missing something small. Thanks.

  • Mesh with quads to triangle mesh

    - by scape
    I want to use Blender for making models, yet I realize some of the polygons are not triangles but quads or more (example: a cylinder's top and bottom). I could export the mesh as a basic mesh file, import it into an OpenGL application, and work out rendering the quads as tris, but anything with more than 4 vertex indices is beyond me. Is it typical to convert the mesh to a triangle-based mesh inside Blender before exporting it? I actually tried this through the quads_convert_to_tris method in a Blender Python script, and the top of the cylinder does not look symmetrical. What is typically done to render a loaded mesh as tris?

  • Can't get LWJGL lighting to work

    - by Zarkonnen
    I'm trying to enable lighting in LWJGL according to the method described by NeHe and this post. However, no matter what I try, all faces of my shapes always receive the same amount of light, or, in the case of a spinning shape, the amount of lighting seems to oscillate: every face is lit by the same amount, and that amount changes as the pyramid rotates. Concrete example (apologies for the length): note how all panels are always the same brightness, but the brightness varies with the pyramid's rotation. This is using LWJGL 2.8.3 on Mac OS X.

        package com;

        import com.zarkonnen.lwjgltest.Main;
        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;
        import org.lwjgl.opengl.GL11;
        import org.newdawn.slick.opengl.Texture;
        import org.newdawn.slick.opengl.TextureLoader;
        import org.lwjgl.util.glu.*;
        import org.lwjgl.input.Keyboard;
        import java.nio.FloatBuffer;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        /**
         * @author penguin
         */
        public class main {

            public static void main(String[] args) {
                try {
                    Display.setDisplayMode(new DisplayMode(800, 600));
                    Display.setTitle("3D Pyramid");
                    Display.create();
                } catch (Exception e) {
                }

                initGL();

                float rtri = 0.0f;
                Texture texture = null;
                try {
                    texture = TextureLoader.getTexture("png", Main.class.getResourceAsStream("tex.png"));
                } catch (Exception ex) {
                    ex.printStackTrace();
                }

                while (!Display.isCloseRequested()) {
                    // Draw a Triangle :D
                    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
                    GL11.glLoadIdentity();
                    GL11.glTranslatef(0.0f, 0.0f, -10.0f);
                    GL11.glRotatef(rtri, 0.0f, 1.0f, 0.0f);
                    texture.bind();

                    GL11.glBegin(GL11.GL_TRIANGLES);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(0.0f, 1.0f);   GL11.glVertex3f(0.0f, 1.0f, 0.0f);
                    GL11.glTexCoord2f(-1.0f, -1.0f); GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glTexCoord2f(1.0f, -1.0f);  GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glEnd();

                    GL11.glBegin(GL11.GL_QUADS);
                    GL11.glVertex3f(1.0f, -1.0f, 1.0f);
                    GL11.glVertex3f(1.0f, -1.0f, -1.0f);
                    GL11.glVertex3f(-1.0f, -1.0f, -1.0f);
                    GL11.glVertex3f(-1.0f, -1.0f, 1.0f);
                    GL11.glEnd();

                    Display.update();
                    rtri += 0.05f;

                    // Exit key = ESC
                    boolean exitPressed = Keyboard.isKeyDown(Keyboard.KEY_ESCAPE);
                    if (exitPressed) {
                        System.out.println("Escape was pressed!");
                        Display.destroy();
                    }
                }
                Display.destroy();
            }

            private static void initGL() {
                GL11.glEnable(GL11.GL_LIGHTING);
                GL11.glMatrixMode(GL11.GL_PROJECTION);
                GL11.glLoadIdentity();
                GLU.gluPerspective(45.0f, ((float) 800) / ((float) 600), 0.1f, 100.0f);
                GL11.glMatrixMode(GL11.GL_MODELVIEW);
                GL11.glLoadIdentity();
                GL11.glEnable(GL11.GL_TEXTURE_2D);
                GL11.glShadeModel(GL11.GL_SMOOTH);
                GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                GL11.glClearDepth(1.0f);
                GL11.glEnable(GL11.GL_DEPTH_TEST);
                GL11.glDepthFunc(GL11.GL_LEQUAL);
                GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);

                float lightAmbient[] = {0.5f, 0.5f, 0.5f, 1.0f};   // Ambient light values
                float lightDiffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};   // Diffuse light values
                float lightPosition[] = {0.0f, 0.0f, 2.0f, 1.0f};  // Light position

                ByteBuffer temp = ByteBuffer.allocateDirect(16);
                temp.order(ByteOrder.nativeOrder());
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip());   // Set up the ambient light
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip());   // Set up the diffuse light
                GL11.glLight(GL11.GL_LIGHT1, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition).flip()); // Position the light
                GL11.glEnable(GL11.GL_LIGHT1);  // Enable light one
            }
        }
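
    One thing worth noting when reading the draw loop above: fixed-function lighting shades with the current normal, and no glNormal3f call appears before any face, so every face is lit using the same default normal, which matches the described symptom. This is an observation about the code, not a confirmed fix; in C-style GL (LWJGL's GL11 methods mirror these calls one-to-one), supplying a per-face normal looks like:

        // Front face of the pyramid: set the face normal before its vertices.
        glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.4472f, 0.8944f);  // unit normal of this face, computed from its 3 vertices
        glTexCoord2f(0.0f, 1.0f);   glVertex3f( 0.0f,  1.0f, 0.0f);
        glTexCoord2f(-1.0f, -1.0f); glVertex3f(-1.0f, -1.0f, 1.0f);
        glTexCoord2f(1.0f, -1.0f);  glVertex3f( 1.0f, -1.0f, 1.0f);
        glEnd();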

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects, which I use for storing the vertex positions and any transforms. My question is: where should shaders be stored in this arrangement? I guess they need to be in a central location, because multiple objects can use the same shader. But each object also needs access to its shader, because it needs to set attributes on it. Does anyone have any advice?
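
    A common arrangement (a sketch with illustrative names, not a prescription): the renderer owns a shader cache keyed by name, and each SceneObject holds a non-owning pointer into it, so programs are shared while every object still sets its own uniforms just before its draw call:

        #include <memory>
        #include <string>
        #include <unordered_map>

        struct Shader
        {
            unsigned int program = 0;  // the GL program handle
            void setUniformMat4(const char* name, const float* m)
            {
                (void)name; (void)m;   // glUniformMatrix4fv(...) in a real version
            }
        };

        class ShaderCache
        {
        public:
            Shader* get(const std::string& name)
            {
                auto it = shaders.find(name);
                if (it != shaders.end())
                    return it->second.get();              // already loaded: share it
                auto shader = std::make_unique<Shader>(); // compile and link here
                Shader* raw = shader.get();
                shaders.emplace(name, std::move(shader));
                return raw;
            }
        private:
            std::unordered_map<std::string, std::unique_ptr<Shader>> shaders;
        };

        // Each SceneObject stores:  Shader* shader = cache.get("phong");
        // and at draw time calls:   shader->setUniformMat4("model", transformPtr);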

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them, either by drawing one on top of another or by passing the 3 textures to a shader and finding a better way to combine them. It works almost as planned, but I have a question about one detail. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp,
                          DepthStencilState.None, RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, colour, range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and parameters. Concluding: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one were using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit on border edges and would thus require more than one normal to derive the tangent-space matrices to interpolate between in fragment programs. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having the normal and tangent encoded in the assets and just optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?
