Search Results

Search found 13494 results on 540 pages for 'board game'.


  • For normal mapping, why can we not simply add the tangent normal to the surface normal?

    - by sebf
    I am looking at implementing bump mapping (which in all implementations I have seen is really normal mapping), and so far all I have read says that to do this, we create a matrix to convert from world space to tangent space, in order to transform the light and eye direction vectors into tangent space, so that the vectors from the normal map may be used directly in place of those passed through from the vertex shader. What I do not understand, though, is why we cannot just use the normalised sum of the sampled normal vector and the surface normal (assuming we already transform and pass through the surface normal for the existing lighting functions). Take the diagram below; the normal is simply the deviation from the 'reference normal' for any given coordinate system, correct? And transforming the surface normal of a mapped surface from world space to tangent space makes it equivalent to the tangent-space 'reference normal', no? If so, why do we transform all lighting vectors into tangent space, instead of simply transforming the sampled tangent-space normal once in the pixel shader?
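
    For intuition, here is a minimal sketch of the transform involved, written on the CPU with glm purely for illustration (the same math applies in a shader). The point is that the sampled normal's X and Y are expressed relative to the surface's tangent directions, so it has to be rotated by the TBN basis; adding it to the surface normal and normalising only happens to agree in special cases.

        #include <glm/glm.hpp>

        // Rotate a tangent-space normal-map sample into world space.
        // N, T, B are the interpolated surface normal, tangent and bitangent
        // (unit length, world space); 'sample' is the RGB value read from the
        // normal map, still in the [0,1] range it was stored in.
        glm::vec3 tangentToWorld(glm::vec3 sample, glm::vec3 T, glm::vec3 B, glm::vec3 N)
        {
            glm::vec3 n = glm::normalize(sample * 2.0f - 1.0f);   // decode to [-1,1]
            // The TBN matrix is a rotation: its columns are the world-space
            // directions that tangent-space X, Y and Z map onto.
            glm::mat3 TBN(T, B, N);
            return glm::normalize(TBN * n);
            // Note this rotates the sampled vector rather than summing it with N:
            // normalize(N + n) treats n's X and Y as if they were world-axis
            // offsets, which they are not.
        }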

    Read the article

  • The best tile-based level design [on hold]

    - by ReallyGoodPie
    My current method for tile-based levels is to put everything in an array like the following: grass = g sky = s house = h ... """ ["SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS"], ["SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS"], ["HHHHHHHHHHHHHHSSSSSSSSSSSSSSSS"], ["GGGGGGGGGGGGGGGGGGGGGGGGGGGGGG"], """ I would then run a for loop to pass these on to a sprite class, blitting the images to the screen. This is what I generally do in pygame when I am creating levels for tile-based RPGs. However, as I've gone on, I have added a lot more sprites and images, and it is seriously becoming more and more confusing to work with, and a lot of mistakes are being made. What are the best alternatives or other methods for doing this?
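
    One common alternative, sketched below in C++ with illustrative names (the idea carries straight over to pygame): keep the level as plain text and drive everything from a single lookup table from character to tile description, so adding or changing a tile type touches only the table rather than the drawing loop.

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct Tile { std::string image; bool solid; };

        int main() {
            // One table maps each level character to its tile description.
            std::map<char, Tile> tileset = {
                {'S', {"sky.png",   false}},
                {'H', {"house.png", true}},
                {'G', {"grass.png", true}},
            };
            std::vector<std::string> level = {
                "SSSSSSSSSS",
                "HHHHSSSSSS",
                "GGGGGGGGGG",
            };
            const int tileSize = 32;
            for (std::size_t y = 0; y < level.size(); ++y)
                for (std::size_t x = 0; x < level[y].size(); ++x) {
                    const Tile& t = tileset.at(level[y][x]);
                    // A real game would blit t.image here instead of printing.
                    std::cout << t.image << " at (" << x * tileSize << "," << y * tileSize << ")\n";
                }
        }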

    Read the article

  • Platform for DS/Gameboy Dev - Managed Memory, Tools, and Unit Testing

    - by ashes999
    I'm interested in dabbling in Nintendo DS, 3DS, or GBA development. I would like to know what my (legal) options for development tools and IDEs are. In particular, I would not consider moving in this direction unless I can find: A programming language that has managed memory (garbage collection) A unit testing tool akin to JUnit, NUnit, etc. for unit tests I would also prefer if other tools exist, like code-coverage, etc. for that platform. But the main thing is managed memory and unit testing. What options are out there?

    Read the article

  • Collision filtering techniques

    - by Griffin
    I was wondering what efficient techniques are out there for mapping collision filtering between various bodies, sub-bodies, and so forth. I'm familiar with the simple idea of having different layers of 2D bodies, but this is not sufficient for more complex mapping. (Think of having sub-bodies of a body, such as limbs, collide with each other by placing them on the same layer, and then wanting only the legs to collide with the ground while the arms would not.) This can be solved with a multidimensional layer setup, but I would probably end up just creating more and more layers, to the point where the simplicity and efficiency of layer filtering would be gone. Are there better ways to handle even more complex situations than this?
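
    One widely used scheme that avoids piling up layers is category/mask bit filtering, as found in several physics engines (Box2D's fixture filters, for example): each body gets a category bit plus a mask of categories it is allowed to touch, and a pair collides only if each side's mask accepts the other's category. A minimal sketch with illustrative names:

        #include <cstdint>
        #include <cstdio>

        // Each body (or fixture) carries a category bit and a mask of categories
        // it is willing to collide with.
        struct Filter { uint16_t category; uint16_t mask; };

        enum : uint16_t { GROUND = 1 << 0, LEG = 1 << 1, ARM = 1 << 2 };

        bool shouldCollide(const Filter& a, const Filter& b) {
            // Both sides must accept the other's category.
            return (a.mask & b.category) && (b.mask & a.category);
        }

        int main() {
            Filter ground{GROUND, LEG | ARM};    // the ground accepts limbs
            Filter leg   {LEG,    GROUND | ARM}; // legs hit the ground and arms
            Filter arm   {ARM,    LEG};          // arms hit legs but not the ground
            std::printf("leg vs ground: %d\n", shouldCollide(leg, ground)); // 1
            std::printf("arm vs ground: %d\n", shouldCollide(arm, ground)); // 0
        }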

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so they are not just a single billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
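
    One guess at the technique, offered as an assumption rather than knowledge of how Radiangames actually does it: keep a short history of each bullet's recent positions and stretch a textured strip along them, so the tail follows the curved path yet still reads as one continuous ribbon. A sketch of the bookkeeping:

        #include <cstddef>
        #include <deque>

        struct Vec2 { float x, y; };

        struct Bullet {
            Vec2 pos{0.0f, 0.0f};
            std::deque<Vec2> trail;                  // newest position at the front
            static constexpr std::size_t kTrailLen = 16;

            void update(Vec2 velocity, float dt) {
                pos.x += velocity.x * dt;
                pos.y += velocity.y * dt;
                trail.push_front(pos);               // record the path every frame
                if (trail.size() > kTrailLen)
                    trail.pop_back();
            }
            // To draw: emit a quad between each consecutive pair of trail points,
            // fading alpha and shrinking width toward the tail. Because the quads
            // follow the stored path, the tail curves with the bullet.
        };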

    Read the article

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What will be the texture coordinates for it? I am using GL_TEXTURE_3D as the target. I tried u, v coordinates the same as 2D texture coordinates, with the z component going from 0 to depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers to apply to each face of the cube, and not 1 layer on 1 face of the cube.
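
    For reference, a GL_TEXTURE_3D lookup takes a three-component coordinate (s, t, r): s and t behave like ordinary 2D coordinates across the face, and r selects the slice, with slice i of a depth-D texture centred at r = (i + 0.5) / D. A tiny sketch of that mapping; showing every layer on a face means varying r (per draw or over time) rather than leaving it at 0:

        #include <cstdio>

        // r coordinate of the centre of slice 'layer' in a 3D texture with
        // 'depth' slices; sampling at slice centres avoids blending between
        // neighbouring layers when linear filtering is enabled along r.
        float sliceCoord(int layer, int depth) {
            return (layer + 0.5f) / depth;
        }

        int main() {
            const int depth = 8;
            for (int i = 0; i < depth; ++i)
                std::printf("layer %d -> r = %.4f\n", i, sliceCoord(i, depth));
        }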

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering, on a more small-scale, abstracted level, how does one create a dynamic environment a la Minecraft? Specifically, I'm thinking of the world as a 3-dimensional array of block objects; how are large features such as oceans created? The language isn't important; I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
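
    A common conceptual starting point, sketched here with a stand-in noise function (this is not Minecraft's actual generator): sample smooth noise to get a height per column, fill blocks up to that height, and flood everything below a fixed sea level that is not land. Oceans then fall out automatically wherever the noise produces large low-lying regions.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        enum class Block { Air, Water, Stone };

        // Stand-in for a real noise function (Perlin/simplex): smooth and in [0,1].
        float fakeNoise(float x, float z) {
            return 0.5f + 0.25f * std::sin(x * 0.10f) + 0.25f * std::cos(z * 0.13f);
        }

        int main() {
            const int W = 64, H = 32, D = 64, seaLevel = 14;
            std::vector<Block> world(W * H * D, Block::Air);
            auto at = [&](int x, int y, int z) -> Block& {
                return world[(y * D + z) * W + x];
            };
            for (int x = 0; x < W; ++x)
                for (int z = 0; z < D; ++z) {
                    int height = static_cast<int>(fakeNoise((float)x, (float)z) * H);
                    for (int y = 0; y < H; ++y) {
                        if (y < height)        at(x, y, z) = Block::Stone; // land column
                        else if (y < seaLevel) at(x, y, z) = Block::Water; // low areas flood
                    }
                }
            std::printf("columns whose height falls below sea level become ocean\n");
        }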

    Read the article

  • Obtaining a HBITMAP/HICON from D2D Bitmap

    - by Tom
    Is there any way to obtain an HBITMAP or HICON from an ID2D1Bitmap* using Direct2D? I am using the following function to load a bitmap: http://msdn.microsoft.com/en-us/library/windows/desktop/dd756686%28v=vs.85%29.aspx The reason I ask is that I am creating my level editor tool and would like to draw a PNG image on a standard button control. I know that you can do this using GDI+: HBITMAP hBitmap; Gdiplus::Bitmap b(L"a.png"); b.GetHBITMAP(NULL, &hBitmap); SendMessage(GetDlgItem(hDlg, IDC_BUTTON1), BM_SETIMAGE, IMAGE_BITMAP, (LPARAM)hBitmap); Is there any equivalent, simple solution using Direct2D? If possible, I would like to render multiple PNG files (some with transparency) on a single button.

    Read the article

  • Sending a android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection, and it needs a Context parameter. The JNIHelper allows me to call static functions with parameters, but I don't know how to "retrieve" the Cocos2d-x Activity class to use it as a parameter. public static boolean isNetworkAvailable(Context context) { boolean haveConnectedWifi = false; boolean haveConnectedMobile = false; ConnectivityManager cm = (ConnectivityManager) context.getSystemService( Context.CONNECTIVITY_SERVICE); NetworkInfo[] netInfo = cm.getAllNetworkInfo(); for (NetworkInfo ni : netInfo) { if (ni.getTypeName().equalsIgnoreCase("WIFI")) if (ni.isConnected()) haveConnectedWifi = true; if (ni.getTypeName().equalsIgnoreCase("MOBILE")) if (ni.isConnected()) haveConnectedMobile = true; } return haveConnectedWifi || haveConnectedMobile; } and the C++ code is JniMethodInfo methodInfo; if ( !JniHelper::getStaticMethodInfo( methodInfo, "my/app/TestApp", "isNetworkAvailable", "(android/content/Context;)V")) { //error return; } CCLog( "Method found and loaded!"); methodInfo.env->CallStaticVoidMethod( methodInfo.classID, methodInfo.methodID); methodInfo.env->DeleteLocalRef( methodInfo.classID);
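
    For comparison, a sketch of what a working pair of calls might look like. It assumes the Activity is cached on the Java side (for instance in a static field set in onCreate) and exposed through a hypothetical static getter named getContext; the JNI signature also needs the L...; class form and the boolean return type, which the snippet above is missing.

        // Header path may differ between cocos2d-x versions.
        #include "platform/android/jni/JniHelper.h"

        JniMethodInfo getCtx, isAvailable;
        if (JniHelper::getStaticMethodInfo(getCtx, "my/app/TestApp",
                                           "getContext", "()Landroid/content/Context;")
         && JniHelper::getStaticMethodInfo(isAvailable, "my/app/TestApp",
                                           "isNetworkAvailable", "(Landroid/content/Context;)Z"))
        {
            jobject context = getCtx.env->CallStaticObjectMethod(getCtx.classID, getCtx.methodID);
            jboolean ok = isAvailable.env->CallStaticBooleanMethod(isAvailable.classID,
                                                                   isAvailable.methodID, context);
            CCLog("network available: %d", (int)ok);
            isAvailable.env->DeleteLocalRef(context);
            getCtx.env->DeleteLocalRef(getCtx.classID);
            isAvailable.env->DeleteLocalRef(isAvailable.classID);
        }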

    Read the article

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
    I have just spent my first half hour playing with the shader language. I have modified the basic program I have which renders the texture, to allow me to colour the texture. varying vec2 texCoord; uniform sampler2D texSampler; /* Given the texture coordinates, our pixel shader grabs the corresponding * color from the texture. */ void main() { //gl_FragColor = texture2D(texSampler, texCoord); gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).xyz,1); } I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel, which would explain why previously transparent areas appear totally black. If I use the following line instead, I am shown the transparent areas gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).aaa,1); How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminous, but my tweaks with those and the .aaa property are not working XD
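
    A likely fix, sketched here with glm (whose vec4 math mirrors GLSL): multiply the whole texel by the tint instead of rebuilding the vector from .xyz with a hard-coded alpha, so the texture's own alpha survives. In the shader that would be gl_FragColor = vec4(0,1,0,1) * texture2D(texSampler, texCoord);

        #include <glm/glm.hpp>
        #include <cstdio>

        int main() {
            glm::vec4 tint(0.0f, 1.0f, 0.0f, 1.0f);   // the green tint from the question
            glm::vec4 texel(0.8f, 0.6f, 0.4f, 0.25f); // pretend sample: mostly transparent

            // Rebuilding from .xyz forces alpha to 1, so transparent areas go opaque.
            glm::vec4 broken = tint * glm::vec4(glm::vec3(texel), 1.0f);
            // Componentwise multiply keeps texel.a (tint.a is 1 here).
            glm::vec4 kept = tint * texel;

            std::printf("broken alpha = %.2f, kept alpha = %.2f\n", broken.a, kept.a);
        }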

    Read the article

  • Python velocity control of the player: why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop: if abs(playerx) < MAXSPEED: if moveLeft: playerx -= 1 if moveRight: playerx += 1 if abs(playery) < MAXSPEED: if moveDown: playery += 1 if moveUp: playery -= 1 if moveLeft == False and abs(playerx) > 0: playerx += 1 if moveRight == False and abs(playerx) > 0: playerx -= 1 if moveUp == False and abs(playery) > 0: playery += 1 if moveDown == False and abs(playery) > 0: playery -= 1 player.x += playerx player.y += playery if player.left < 0 or player.right > 1000: player.x -= playerx if player.top < 0 or player.bottom > 600: player.y -= playery The intended result is that while an arrow key is pressed, playerx or playery increments by one on every loop until it reaches MAXSPEED and then stays at MAXSPEED, and that when the player stops pressing that arrow key, the speed decreases until it reaches 0. To me, this code explicitly says that. But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED and continues moving even after the player stops pressing the arrow key. I keep rereading, but I'm completely baffled by this weird behavior. Any insights? Thanks.
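
    One reading of the bug, inferred from the code as posted: the MAXSPEED checks only gate the acceleration lines, while the four "slow down" lines run on every pass whenever the opposite key is up, so they keep pushing the speed past the cap and past zero. A sketch of per-axis logic that accelerates toward the cap and decays toward zero only when neither key on that axis is held (C++ here, but the structure translates directly to Python):

        #include <algorithm>
        #include <cstdio>

        // New speed on one axis given which of its two keys are held.
        float stepAxis(float speed, bool negKey, bool posKey, float accel, float maxSpeed) {
            if (negKey) speed -= accel;
            if (posKey) speed += accel;
            if (!negKey && !posKey) {
                // Decay toward zero without overshooting or flipping sign.
                if (speed > 0.0f) speed = std::max(0.0f, speed - accel);
                else              speed = std::min(0.0f, speed + accel);
            }
            // Clamp after every change, not only before accelerating.
            return std::max(-maxSpeed, std::min(maxSpeed, speed));
        }

        int main() {
            float vx = 0.0f;
            for (int i = 0; i < 5; ++i) vx = stepAxis(vx, false, true, 1.0f, 3.0f);
            std::printf("after 5 frames holding right: %.0f (capped at 3)\n", vx);
            for (int i = 0; i < 5; ++i) vx = stepAxis(vx, false, false, 1.0f, 3.0f);
            std::printf("after releasing: %.0f (decayed to 0)\n", vx);
        }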

    Read the article

  • 2D map/plane with nodes overlaid that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that will scale/zoom and pan in relation to the surface. I started out with a 2D ImageView approach and got as far as pinch zoom, pan and laying nodes as relative ImageViews, but all the methods I tried to get X,Y,W,H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to try and work out their X,Y scaled offset. So I decided to get back to 3D rendering. Conceptually pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve. A sample on a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!
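
    Whichever rendering path ends up being used, the pan/zoom bookkeeping stays manageable if nodes are stored in map coordinates and one transform maps those to screen coordinates; hit-testing a tap is then just the inverse transform. A sketch with illustrative names:

        #include <cstdio>

        struct Camera2D {
            float panX = 0.0f, panY = 0.0f; // map-space point shown at the screen origin
            float zoom = 1.0f;              // screen pixels per map unit

            // Map -> screen: draw nodes with this, and they pan/scale with the map.
            void toScreen(float mx, float my, float& sx, float& sy) const {
                sx = (mx - panX) * zoom;
                sy = (my - panY) * zoom;
            }
            // Screen -> map: hit-test nodes from a touch position.
            void toMap(float sx, float sy, float& mx, float& my) const {
                mx = sx / zoom + panX;
                my = sy / zoom + panY;
            }
        };

        int main() {
            Camera2D cam;
            cam.panX = 10.0f; cam.panY = 5.0f; cam.zoom = 2.0f;
            float sx, sy;
            cam.toScreen(12.0f, 5.0f, sx, sy);                       // node stored in map space
            std::printf("node drawn at (%.0f, %.0f)\n", sx, sy);     // (4, 0)
            float mx, my;
            cam.toMap(sx, sy, mx, my);
            std::printf("touch there maps back to (%.0f, %.0f)\n", mx, my);
        }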

    Read the article

  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue in color because the Z component of the normal is represented by blue. Normals point out of the surface in the Z direction so we should see blue as the predominant colour since the Z component is dominant. By definition tangent space is perpendicular to the surface. At any point we should have the normal always pointing in the Z (blue direction) with no X (red direction) or Y (green direction). Thus the normal map (since it is a "normal map") should have the colour of the normals which is just blue (R = x = 0, G = y = 0, B = z = 1) with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?
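
    Part of the answer is visible in the encoding itself: a unit normal's components lie in [-1, 1], and the map stores (n + 1) / 2 per channel, so even the "straight up" normal (0, 0, 1) encodes to (0.5, 0.5, 1.0), the familiar light lavender-blue rather than pure blue, and any tilt away from the surface bleeds red or green into the pixel. A small sketch of the encoding:

        #include <cmath>
        #include <cstdio>

        // Encode a unit normal into the usual 8-bit-per-channel map colour.
        void encode(float nx, float ny, float nz) {
            int r = (int)std::lround((nx * 0.5f + 0.5f) * 255.0f);
            int g = (int)std::lround((ny * 0.5f + 0.5f) * 255.0f);
            int b = (int)std::lround((nz * 0.5f + 0.5f) * 255.0f);
            std::printf("(% .2f, % .2f, % .2f) -> RGB(%d, %d, %d)\n", nx, ny, nz, r, g, b);
        }

        int main() {
            encode(0.0f, 0.0f, 1.0f);                   // flat: (128, 128, 255), not (0, 0, 255)
            float t = std::sin(0.3f);                   // a bump tilted toward +X
            encode(t, 0.0f, std::sqrt(1.0f - t * t));   // picks up red, loses a little blue
        }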

    Read the article

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy paste this into the default OpenGl template created in Xcode. Why am I not seeing anything :-) It is strange as the glDrawArrays(GL_TRIANGLES, 0, 3); is working fine, but with glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); Is giving BAD_ACCESS? Copy paste this into Xcode default OpenGl template: ViewController #import "ViewController.h" #define BUFFER_OFFSET(i) ((char *)NULL + (i)) // Uniform index. enum { UNIFORM_MODELVIEWPROJECTION_MATRIX, UNIFORM_NORMAL_MATRIX, NUM_UNIFORMS }; GLint uniforms[NUM_UNIFORMS]; // Attribute index. enum { ATTRIB_VERTEX, ATTRIB_NORMAL, NUM_ATTRIBUTES }; @interface ViewController () { GLKMatrix4 _modelViewProjectionMatrix; GLKMatrix3 _normalMatrix; float _rotation; GLuint _vertexArray; GLuint _vertexBuffer; NSArray* arrayOfVertex; } @property (strong, nonatomic) EAGLContext *context; @property (strong, nonatomic) GLKBaseEffect *effect; - (void)setupGL; - (void)tearDownGL; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; GLKView *view = (GLKView *)self.view; view.context = self.context; view.drawableDepthFormat = GLKViewDrawableDepthFormat24; [self setupGL]; } - (void)dealloc { [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; if ([self isViewLoaded] && ([[self view] window] == nil)) { self.view = nil; [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } self.context = nil; } // Dispose of any resources that can be recreated. } GLuint vertexBufferID; GLuint indexBufferID; static const GLfloat vertices[9] = { -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5 }; static const GLubyte indices[3] = { 0, 1, 2 }; - (void)setupGL { [EAGLContext setCurrentContext:self.context]; // [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); // glGenVertexArraysOES(1, &_vertexArray); // glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &vertexBufferID); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glGenBuffers(1, &indexBufferID); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified. 3, // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4. 
GL_FLOAT, // GL_FALSE, // 0, // BUFFER_OFFSET(0)); // // glBindVertexArrayOES(0); } - (void)tearDownGL { [EAGLContext setCurrentContext:self.context]; glDeleteBuffers(1, &_vertexBuffer); glDeleteVertexArraysOES(1, &_vertexArray); self.effect = nil; } #pragma mark - GLKView and GLKViewController delegate methods - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } int i; - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect { glClearColor(0.65f, 0.65f, 0.65f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // glBindVertexArrayOES(_vertexArray); // Render the object with GLKit [self.effect prepareToDraw]; //glDrawArrays(GL_TRIANGLES, 0, 3); // Render the object again with ES2 // glDrawArrays(GL_TRIANGLES, 0, 3); glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); } @end
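
    One likely cause, offered as a guess from the posted setup: once a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, the last argument of glDrawElements is treated as a byte offset into that buffer, not a client-side pointer, so passing the indices array makes GL read from a garbage offset. Using the variables from the code above, either pass an offset of 0, or unbind the element buffer and keep the pointer:

        // Indices already uploaded with glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...):
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
        glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_BYTE, 0);       // offset, not pointer

        // Or draw from client memory instead: make sure no element buffer is bound.
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_BYTE, indices);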

    Read the article

  • Mobile 3D engine renders alpha as full-object transparency

    - by Nils Munch
    I am running an iOS project using the isgl3d framework for showing pod files. I have a stylish car with 0.5-alpha windows that I wish to render on a camera background, seeking some augmented reality goodness. The alpha on the windows looks okay, but when I add the object, I notice that it renders the entire object transparently where the windows are, including the interior of the car. Like so (in the example, the keyboard can be seen through the dashboard, seats and so on; it should be solid). The car interior is a separate object with alpha 1.0. I would rather not show a "ghost car" in my project, but I haven't found a way around this. Has anyone encountered the same issue and eventually reached a solution?

    Read the article

  • Texture the quad with different parts of texture

    - by PolGraphic
    I have a 2D quad. Let's say its position is (5,10) and its size is (7,11). I want to texture it with one texture, but using three different parts of it. I want to texture the part of the quad from x = 5 to x = 7 with the part of the texture from U = 0 to U = 0.5 (repeating it after reaching 0.5, so I will have four identical 0.5-length fragments). The second part gets some other part of the texture (also repeated), and the third is done in the same style. But how do I achieve this? I know that: float2 tc = fmod(input.TexCoord, textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy; //textureCoordinates.xy = fragments' offset will give me the repeating texture part.

    Read the article

  • Rotate a particle system

    - by Blueski
    Languages / Libraries in use: C++, OpenGL, GLUT Okay, here's the deal. I've got a particle system which shoots out alpha-blended textures to produce a flame. The system only keeps track of very basic things such as time alive, life, xyz and spread. The direction in which the flames are currently moving is purely based on other things going on in my code (I assume). My goal, however, is to attach the flame to the camera (DONE) and have the flame pointing in the direction my camera is facing (NOT WORKING). I've tried glRotate for x, y and z, and I can't get it to work properly. I'm currently using gluLookAt to move the camera, and I get the flame to follow the XYZ of the camera by calling glTranslatef(camX, camY - offset, camZ); Any suggestions on how I can rotate the direction of the flame with the camera would be greatly appreciated. Here's an image of what I've got: http://i.imgur.com/YhV4w.png Notes: the crosshair depicts where the camera is facing; if I turn the camera, the flame doesn't follow the crosshair. Also asked here: http://stackoverflow.com/questions/9560396/rotate-a-particle-system but was referred here
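
    One way to approach it, sketched with illustrative names: derive the camera's forward vector from the same eye/center values already fed to gluLookAt and emit (or rotate) the particles along that vector, so the flame inherits the camera's orientation instead of a fixed world axis.

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        Vec3 normalize(Vec3 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        int main() {
            // Same eye/center values that go into gluLookAt.
            Vec3 eye{0.0f, 1.0f, 5.0f}, center{0.0f, 1.0f, 0.0f};
            Vec3 forward = normalize({center.x - eye.x, center.y - eye.y, center.z - eye.z});

            // Emit particles along the camera's forward axis (plus the usual spread),
            // instead of along a fixed world direction.
            float speed = 2.0f;
            Vec3 velocity{forward.x * speed, forward.y * speed, forward.z * speed};
            std::printf("emit velocity = (%.2f, %.2f, %.2f)\n", velocity.x, velocity.y, velocity.z);
        }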

    Read the article

  • XNA Sprite Rotation Matrix - Moving Origin

    - by Jon
    I am currently grouping sprites together, then applying a rotation transformation on draw: private void UpdateMatrix(ref Vector2 origin, float radians) { Vector3 matrixorigin = new Vector3(origin, 0); _rotationMatrix = Matrix.CreateTranslation(-matrixorigin) * Matrix.CreateRotationZ(radians) * Matrix.CreateTranslation(matrixorigin); } Where the origin is the Centermost point of my group of sprites. I apply this transformation to each sprite in the group. My problem is that when I adjust the point of origin, my entire sprite group will re-position itself on screen. How could I differentiate the point of rotation used in the transformation, from the position of the sprite group? Is there a better way of creating this transformation matrix?

    Read the article

  • Why am I getting the same result when deleting target?

    - by XNA
    In the following code we use target in the function: moon.mouseEnabled = false; sky0.addChild(moon); addEventListener(MouseEvent.MOUSE_DOWN, onDrag, false, 0, true); addEventListener(MouseEvent.MOUSE_UP, onDrop, false, 0, true); function onDrag(evt:MouseEvent):void { evt.target.addChild(moon); evt.target.startDrag(); } function onDrop(evt:MouseEvent):void { stopDrag(); } But if I rewrite this code without evt.target it still works. So what is the difference? Am I going to get errors later at run time because I didn't use target? If not, then why do some people use target a lot when it works without it? function onDrag(evt:MouseEvent):void { addChild(moon); startDrag(); }

    Read the article

  • multipass shadow mapping renderer in XNA

    - by Nick
    I want to implement a multipass renderer in XNA (additive blending combines the contributions from each light). I have the renderer working without any shadows, but when I try to add shadow mapping support I run into an issue with switching render targets to draw the shadow maps. When I switch render targets, I lose the contents of the backbuffer, which ruins the whole additive blending idea. For example: Draw() { DrawAmbientLighting() foreach (DirectionalLight) { DrawDirectionalShadowMap() // <-- I lose all previous lighting contributions when I switch to the shadow map render target here DrawDirectionalLighting() } } Is there any way around my issue? (I could render all the shadow maps first, but then I have to make and hold onto a render target for each light that casts a shadow--is this the only way?)

    Read the article

  • Fog shader camera problem

    - by MaT
    I have some difficulties with my vertex-fragment fog shader in Unity. I have a good visual result but the problem is that the gradient is based on the camera's position, it moves as the camera moves. I don't know how to fix it. Here is the shader code. struct v2f { float4 pos : SV_POSITION; float4 grabUV : TEXCOORD0; float2 uv_depth : TEXCOORD1; float4 interpolatedRay : TEXCOORD2; float4 screenPos : TEXCOORD3; }; v2f vert(appdata_base v) { v2f o; o.pos = mul(UNITY_MATRIX_MVP, v.vertex); o.uv_depth = v.texcoord.xy; o.grabUV = ComputeGrabScreenPos(o.pos); half index = v.vertex.z; o.screenPos = ComputeScreenPos(o.pos); o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex); return o; } sampler2D _GrabTexture; float4 frag(v2f IN) : COLOR { float3 uv = UNITY_PROJ_COORD(IN.grabUV); float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv)); dpth = LinearEyeDepth(dpth); float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay); // Here is the problem but how to fix it float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f)); fogVert *= fogVert; fogVert = (exp (-fogVert)); return fogVert; } Thanks a lot !

    Read the article

  • Tools for assembling textures into DDS files

    - by Nicol Bolas
    There are plenty of tools for making images. I'm not looking for one of those; I have many tools for creating an image. I've got tools for compressing images, generating mipmaps, and even for poking at their basic data format. My issue is with texture assembly. DDS files support cubemaps, array textures, and even cubemap arrays. But I don't know of a tool that can pack a series of images into a cubemap or the like. What tools are available for doing this kind of thing?

    Read the article

  • Problems with texture orientation in space

    - by frankie
    I am currently drawing a texture in 3D space and have some problems with its orientation. I'd like my textures always to be oriented with their front face to the user. In my desired result, the text size stays unchanged when we rotate the world, and the text stays oriented with its front face to the user. Now I can draw text in 3D space, but it is not oriented to the front; it rotates with the world. I got these results with the following shaders: Vertex Shader uniform vec3 Position; void main() { gl_Position = vec4(Position, 1.0); } Geometry Shader layout(points) in; layout(triangle_strip, max_vertices = 4) out; out vec2 fsTextureCoordinates; uniform mat4 projectionMatrix; uniform mat4 modelViewMatrix; uniform sampler2D og_texture0; uniform float og_highResolutionSnapScale; uniform vec2 u_originScale; void main() { vec2 halfSize = vec2(textureSize(og_texture0, 0)) * 0.5 * og_highResolutionSnapScale; vec4 center = gl_in[0].gl_Position; center.xy += (u_originScale * halfSize); vec4 v0 = vec4(center.xy - halfSize, center.z, 1.0); vec4 v1 = vec4(center.xy + vec2(halfSize.x, -halfSize.y), center.z, 1.0); vec4 v2 = vec4(center.xy + vec2(-halfSize.x, halfSize.y), center.z, 1.0); vec4 v3 = vec4(center.xy + halfSize, center.z, 1.0); gl_Position = projectionMatrix * modelViewMatrix * v0; fsTextureCoordinates = vec2(0.0, 0.0); EmitVertex(); gl_Position = projectionMatrix * modelViewMatrix * v1; fsTextureCoordinates = vec2(1.0, 0.0); EmitVertex(); gl_Position = projectionMatrix * modelViewMatrix * v2; fsTextureCoordinates = vec2(0.0, 1.0); EmitVertex(); gl_Position = projectionMatrix * modelViewMatrix * v3; fsTextureCoordinates = vec2(1.0, 1.0); EmitVertex(); } Fragment Shader in vec2 fsTextureCoordinates; out vec4 fragmentColor; uniform sampler2D og_texture0; uniform vec3 u_color; void main() { vec4 color = texture(og_texture0, fsTextureCoordinates); if (color.a == 0.0) { discard; } fragmentColor = vec4(color.rgb * u_color.rgb, color.a); } Any ideas how to get my desired result? EDIT 1: I made an edit in my geometry shader and got part of the label drawn at a corner of the screen, but it is not rotating. .......... vec4 centerProjected = projectionMatrix * modelViewMatrix * center; centerProjected /= centerProjected.w; vec4 v0 = vec4(centerProjected.xy - halfSize, 0.0, 1.0); vec4 v1 = vec4(centerProjected.xy + vec2(halfSize.x, -halfSize.y), 0.0, 1.0); vec4 v2 = vec4(centerProjected.xy + vec2(-halfSize.x, halfSize.y), 0.0, 1.0); vec4 v3 = vec4(centerProjected.xy + halfSize, 0.0, 1.0); gl_Position = og_viewportOrthographicMatrix * v0; ..........

    Read the article

  • Resource management question: a resource containing another resource

    - by bobenko
    I have a resource manager handling the usual resource loading, unloading, etc. With resources such as images and meshes there is no problem. But what should I do when I have a resource that contains another resource (for example, a spriteFont contains a reference to a sprite and the letter descriptions)? Should that sprite be added to the resource manager, or must my spriteFont be the only owner of that resource? Any thoughts on this? Have you faced such a problem? Thanks in advance.

    Read the article

  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation but am having some issues. Each joint is represented by a quaternion and there is no translation component. When I try to slerp between the orientations at the two keyframes, I get a very wacky animation. I know my skinning equation is right because the animation is perfectly fine when it lands directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there are any problems with the actual slerp implementation. There's really one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind pose matrix and world-space transformation matrix at my disposal if those are needed.
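
    Joint-local space is a perfectly reasonable space to interpolate in, so one thing worth ruling out first (an assumption about the symptom, not a certainty): q and -q encode the same rotation, and interpolating between two keys stored in opposite hemispheres takes the long way around and looks wild. The usual fix is to flip one endpoint when the dot product is negative; if I recall correctly, glm's slerp overload applies this flip itself while mix does not, so trying slerp, or flipping manually as below, is a quick check.

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Interpolate two joint-local orientations along the short arc.
        // Without the flip, keys that land in opposite hemispheres interpolate
        // the long way round and the pose swings wildly between keyframes.
        glm::quat interpolateKeys(glm::quat a, glm::quat b, float t) {
            if (glm::dot(a, b) < 0.0f)
                b = -b;                 // pick the neighbouring representative of b
            return glm::normalize(glm::mix(a, b, t));
        }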

    Read the article
