Search Results

Search found 25496 results on 1020 pages for 'development fabric'.


  • Modeling player mechanics with a finite state machine

    - by K..
    I have three states: standing, walking, and jumping. When I press D, standing transitions to walking: the velocity is set to a defined value and the player moves. When I release D, walking transitions back to standing, which sets the velocity back to 0. When I press W while the state is walking, it transitions to jumping, and when the player hits the ground it goes back to standing. jumping has a transition, land, that always leads to standing, because a state doesn't know about its previous states. Since standing sets the velocity to 0, the player stops walking when he hits the ground. How do I prevent this?
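
    A minimal sketch of one common fix, with invented names: resolve the land transition from the input that is still held, so landing while the move key is down re-enters walking instead of standing (which is what resets the velocity).

        #include <cstdio>

        enum class State { Standing, Walking, Jumping };

        // Hypothetical helper: decide the post-landing state from current input
        // instead of always falling back to Standing.
        State onLand(bool moveKeyHeld, float& velocityX, float walkSpeed) {
            if (moveKeyHeld) {
                velocityX = walkSpeed;   // keep walking after touching the ground
                return State::Walking;
            }
            velocityX = 0.0f;            // only Standing resets the velocity
            return State::Standing;
        }

        int main() {
            float vx = 0.0f;
            State s = onLand(/*moveKeyHeld=*/true, vx, 2.5f);
            std::printf("state=%d vx=%.1f\n", static_cast<int>(s), vx);
        }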

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Suppose I have a vector A, and after an arbitrary rotation I get vector B. I want to apply the same rotation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating AxB). This vector C is the axis I need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need is then: rotate ang degrees around the C axis. The problem is that I don't know whether this rotation is a CW or CCW rotation, since the dot product gives no information about the sign of the angle. There is a trick in 2D (A.x * B.y - A.y * B.x) that tells you whether vector A is to the left or right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
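
    For reference, a small sketch of the usual approach: with the axis C = AxB (normalized) and the angle taken from the dot product, rotating by +ang about C always carries A toward B, so no separate CW/CCW test is needed; the sign information is already encoded in the direction of the cross product. A Rodrigues-rotation sketch with invented types:

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        Vec3  cross(Vec3 a, Vec3 b)  { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
        float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
        Vec3  scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
        Vec3  add(Vec3 a, Vec3 b)    { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
        float length(Vec3 a)         { return std::sqrt(dot(a, a)); }

        // Rodrigues' formula: rotate v by 'angle' radians around the unit axis.
        Vec3 rotateAroundAxis(Vec3 v, Vec3 axis, float angle) {
            float c = std::cos(angle), s = std::sin(angle);
            return add(add(scale(v, c), scale(cross(axis, v), s)),
                       scale(axis, dot(axis, v) * (1.0f - c)));
        }

        int main() {
            Vec3 A{1, 0, 0}, B{0, 1, 0};
            Vec3 axis = cross(A, B);                   // rotation axis (degenerate if A and B are parallel)
            axis = scale(axis, 1.0f / length(axis));   // normalize it
            float ang = std::acos(dot(A, B) / (length(A) * length(B)));
            Vec3 r = rotateAroundAxis(A, axis, ang);   // r comes out equal to B
            std::printf("%.2f %.2f %.2f\n", r.x, r.y, r.z);
        }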

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now I want to make the rope properly collide with the rectangles, so that the rope will not pass through a rectangle, wraps around the rectangle, and all that good stuff. Currently I have it set so no rope node can pass through a rect (successfully); however, this means a rope segment can still pass through a block (as shown in the example image in the original post). So the question is: what can I do to fix this? What I have tried: I create a rectangle between two nodes of the rope, calculate the rotation between the nodes, and get myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle. I also tried creating a new node or pivot point around the corner of the block and rearranging nodes to point to the corner node; the trouble is determining which corner the rope segment is passing through, and then the current rope setup goes wonky (it is based on verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earthquake). There are other issues that might be solvable, but it is turning into a case-by-case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any; most lead to Box2D or Farseer, but I want to at least learn how it works before I hide behind an engine).
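
    Since the missing piece is detecting when a segment between two nodes crosses a block, here is a minimal sketch of a segment-vs-axis-aligned-rectangle test (the slab method). It is generic geometry rather than anything from the rope code above, and all names are invented:

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        struct AABB { float minX, minY, maxX, maxY; };

        // Slab test: does the segment (x0,y0)->(x1,y1) intersect the axis-aligned rectangle?
        bool segmentIntersectsAABB(float x0, float y0, float x1, float y1, const AABB& box) {
            float d[2]  = { x1 - x0, y1 - y0 };
            float lo[2] = { box.minX - x0, box.minY - y0 };
            float hi[2] = { box.maxX - x0, box.maxY - y0 };
            float tMin = 0.0f, tMax = 1.0f;                       // clamp to the segment itself
            for (int i = 0; i < 2; ++i) {
                if (std::fabs(d[i]) < 1e-6f) {
                    if (lo[i] > 0.0f || hi[i] < 0.0f) return false;   // parallel to this slab and outside it
                } else {
                    float t1 = lo[i] / d[i], t2 = hi[i] / d[i];
                    if (t1 > t2) std::swap(t1, t2);
                    tMin = std::max(tMin, t1);
                    tMax = std::min(tMax, t2);
                    if (tMin > tMax) return false;                // slab intervals don't overlap, no hit
                }
            }
            return true;
        }

        int main() {
            AABB block{1, 1, 2, 2};
            std::printf("%d\n", segmentIntersectsAABB(0, 0, 3, 3, block));  // prints 1: the segment crosses the block
        }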

    Read the article

  • Where to start? (3D Modeling)

    - by herfus
    I'm looking for a good resource to start learning 3d modeling. I'm looking for something that starts with the basics (e.g. terminology; what are quads, triangles etc.) before/while going into the actual modeling. Book, website, video, anything will do. I'm only concerned with the quality of the tutorials, how thorough they are. I have experience with texturing, level design and so on - but I've never created anything more than simple shapes/editing existing assets.

    Read the article

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both client and server code are coded in Lua. On the server, the actual AI code is separate from the TCP socket code and coroutine code (which spawns a separate instance of the AI for each connecting client). I want to further isolate the AI code so that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?
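
    One common way to decouple the AI is to define a small text protocol (line-based or JSON) over the existing TCP connection, so the AI module can be written in any language that can read and write a socket or stdin/stdout. A rough sketch of such a protocol handler, with made-up message names:

        #include <iostream>
        #include <sstream>
        #include <string>

        // Hypothetical line-based protocol: the game sends one line describing the
        // position, the AI module answers with one line containing its move.
        //   "STATE <board> <player-to-move>"  ->  "MOVE <from> <to>"
        std::string handleLine(const std::string& line) {
            std::istringstream in(line);
            std::string tag, board, player;
            in >> tag >> board >> player;
            if (tag != "STATE") return "ERROR unknown-message";
            // ...any AI, in any language, can live behind this interface...
            return "MOVE a2 a3";   // placeholder decision
        }

        int main() {
            // In the real server this loop would sit behind the Lua TCP/coroutine code;
            // here stdin/stdout stand in for the socket.
            std::string line;
            while (std::getline(std::cin, line))
                std::cout << handleLine(line) << "\n";
        }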

    Read the article

  • How to handle animations?

    - by Bane
    I am coding a simple 2D engine to be used with HTML5. I already have classes such as Picture, Scene, Camera and Renderer, but now I need to work on Animations. Picture is basically a wrapper for a normal image object, with its own draw method, but this is unrelated; I'm interested in how animation in 2D games is usually done. What I planned to do is have the Animation class also act as a wrapper for a few image objects, and then have methods such as getCurrentImage, next and animate (which would use intervals to quickly change the current image). I meant to feed the animation a couple of PNGs at initialisation. Is quickly swapping PNG images acceptable for 2D animation? Are there standard ways of doing this, or are there flaws in my approach?
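
    Frame swapping is a standard approach for 2D (a sprite sheet with sub-rectangles is the usual refinement); the main thing is to advance frames from elapsed time rather than a fixed interval, so the animation stays correct at any frame rate. A rough, engine-agnostic sketch with invented names:

        #include <cstddef>
        #include <cstdio>
        #include <utility>
        #include <vector>

        struct Image { int id; };   // stand-in for the engine's image/texture type

        class Animation {
        public:
            Animation(std::vector<Image> frames, float frameDuration)
                : frames_(std::move(frames)), frameDuration_(frameDuration) {}

            // Advance by the frame's delta time; wraps around at the end (looping).
            void update(float dt) {
                elapsed_ += dt;
                while (elapsed_ >= frameDuration_) {
                    elapsed_ -= frameDuration_;
                    current_ = (current_ + 1) % frames_.size();
                }
            }

            const Image& currentImage() const { return frames_[current_]; }

        private:
            std::vector<Image> frames_;
            float frameDuration_;
            float elapsed_ = 0.0f;
            std::size_t current_ = 0;
        };

        int main() {
            Animation walk({{0}, {1}, {2}}, 0.1f);
            walk.update(0.25f);                       // 250 ms -> two frames advanced
            std::printf("frame id = %d\n", walk.currentImage().id);
        }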

    Read the article

  • OpenGL: managing images, VBOs and shaders

    - by roxlu
    I'm working on a game where I use shaders with vertex attributes (so not immediate mode). I'm drawing lots of images and frequently changing the width/height of the quads I use to draw them. To optimize this, it's probably a good idea to have one buffer, but then one needs to update the complete buffer when a single image changes (or only part of the buffer, using glBufferSubData...). I was just wondering what kind of strategies you guys are using?
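
    For reference, the partial-update path mentioned above looks roughly like this: keep one large dynamic VBO and overwrite only the quad that changed with glBufferSubData. A hedged sketch; the 4-vertices-per-quad, 4-floats-per-vertex layout is an assumption, and an extension loader such as GLEW is assumed to be set up:

        #include <GL/glew.h>
        #include <cstddef>

        // Overwrite the 4 vertices of a single quad inside a shared dynamic VBO.
        // Layout assumption: 4 vertices per quad, 4 floats each (x, y, u, v).
        void updateQuad(GLuint vbo, std::size_t quadIndex, const float newVerts[16]) {
            const GLsizeiptr quadBytes = 16 * sizeof(float);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferSubData(GL_ARRAY_BUFFER,
                            static_cast<GLintptr>(quadIndex) * quadBytes,  // byte offset of this quad
                            quadBytes, newVerts);
        }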

    Read the article

  • Different ways to pass Textures into HLSL shaders

    - by codymanix
    The GraphicsDevice class of XNA 4 has the properties Textures and VertexTextures. What is the exact difference? I don't really understand what MSDN tells me about this. I usually use Effect parameters to pass textures to my HLSL shaders. What are the differences between these methods, and which is faster? My scenario: I am working on a Minecraft-like game, which means lots of separate DrawPrimitives calls, and I change the current texture often since I have lots of different block types. Since I use an octree to organize the world, I cannot easily sort by texture.

    Read the article

  • How to use OpenGL blend modes/functions to brighten/darken a texture

    - by Jigar
    Tried this code, but the texture did not get any lighter:

        try {
            texture = TextureLoader.getTexture("png", Game.class.getResourceAsStream("/brick.png"), true, GL_NEAREST);
        } catch (IOException e) {
            e.printStackTrace();
        }
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
        glEnable(GL_BLEND);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);
        GL14.glBlendColor(1.0f, 1.0f, 1.0f, 0.5f);
        glColor4f(1, 1, 1, 0.5f);
        GL11.glBegin(GL11.GL_QUADS); // Start Drawing Quads
        // Front Face
        GL11.glNormal3f(0.0f, 0.0f, 1.0f);   // Normal Pointing Towards Viewer
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(-1.0f, -1.0f, 1.0f); // Point 1 (Front)
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f(1.0f, -1.0f, 1.0f);  // Point 2 (Front)
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f(1.0f, 1.0f, 1.0f);   // Point 3 (Front)
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(-1.0f, 1.0f, 1.0f);  // Point 4 (Front)
        glEnd();
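
    For what it's worth, in fixed-function GL the usual tricks are: darken by drawing the textured quad with a colour below white while the texture environment is GL_MODULATE, and brighten by drawing the quad a second time with additive blending. A hedged C-style sketch; drawTexturedQuad stands in for the quad-drawing code above and is not a real GL call:

        #include <GL/gl.h>

        void drawDarkened(void (*drawTexturedQuad)()) {
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glColor4f(0.5f, 0.5f, 0.5f, 1.0f);   // below white -> texture is scaled down
            drawTexturedQuad();
        }

        void drawBrightened(void (*drawTexturedQuad)()) {
            drawTexturedQuad();                   // base pass
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE);    // additive: adds src * alpha on top of the base pass
            glColor4f(1.0f, 1.0f, 1.0f, 0.5f);    // adds 50% of the texture's brightness
            drawTexturedQuad();                   // second pass over the same quad
            glDisable(GL_BLEND);
        }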

    Read the article

  • How do they keep track of the NPCs in Left 4 Dead?

    - by f20k
    How do they keep track of the NPC zombies in Left 4 Dead? I am talking about the NPCs that just walk into walls or wander around aimlessly. Even though the players cannot see them, they are there (say, inside rooms or behind doors). Let's say there are about 10 or so zombies in a hallway and inside rooms. Does the game keep all of those zombies in a list and iterate through it, giving them commands? Do they just spawn when the user is within a certain radius or has reached a special location? Say you placed the 4 units (controlled by players) in completely different places throughout the map, you aren't being swarmed, and you have not killed any of these aimless NPCs. Would the game be keeping track of 10 x 4 = 40 zombies in total? Or is my understanding completely off? The reason I ask is that if I were to implement something similar on a mobile device, keeping track of 40 or more NPCs might not be such a great idea.
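
    One plausible pattern for a mobile implementation (and only a guess at what Left 4 Dead actually does) is to keep every wanderer in a list but run full AI only for those near a player, ticking the rest rarely or not at all. A rough sketch with invented names:

        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Vec2 { float x, y; };

        struct Zombie {
            Vec2 pos;
            bool active = false;   // full AI only when a player is close
        };

        float dist2(Vec2 a, Vec2 b) {
            float dx = a.x - b.x, dy = a.y - b.y;
            return dx * dx + dy * dy;
        }

        // Wake zombies near any player; dormant ones cost almost nothing per frame.
        void updateActivation(std::vector<Zombie>& zombies, const std::vector<Vec2>& players, float radius) {
            float r2 = radius * radius;
            for (Zombie& z : zombies) {
                z.active = false;
                for (const Vec2& p : players)
                    if (dist2(z.pos, p) < r2) { z.active = true; break; }
            }
        }

        int main() {
            std::vector<Zombie> zombies{{{0, 0}}, {{100, 100}}};
            std::vector<Vec2> players{{5, 5}};
            updateActivation(zombies, players, 20.0f);
            std::printf("%d %d\n", zombies[0].active, zombies[1].active);   // prints "1 0"
        }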

    Read the article

  • List has no values after adding values to it

    - by Sigh-AniDe
    I am creating a ghost sprite that will mimic the main sprite after 10 seconds of the game. I am storing the user's movements in a List<string> and I am using a foreach loop to run the movements. The problem is that when I step through the game with breakpoints, the movements are being added to the List<string>, but when the foreach runs it shows that the list has nothing in it. Why does it do that? How can I fix it? This is what I have:

        public List<string> ghostMovements = new List<string>();

        public void UpdateGhost(float scalingFactor, int[,] map)
        {
            // At this foreach, ghostMovements has nothing in it
            foreach (string s in ghostMovements)
            {
                // current position of the ghost on the tiles
                int mapX = (int)(ghostPostition.X / scalingFactor);
                int mapY = (int)(ghostPostition.Y / scalingFactor);

                if (s == "left")
                {
                    switch (ghostDirection)
                    {
                        case ghostFacingUp:    angle = 1.6f;  ghostDirection = ghostFacingRight; Program.form.direction = ""; break;
                        case ghostFacingRight: angle = 3.15f; ghostDirection = ghostFacingDown;  Program.form.direction = ""; break;
                        case ghostFacingDown:  angle = -1.6f; ghostDirection = ghostFacingLeft;  Program.form.direction = ""; break;
                        case ghostFacingLeft:  angle = 0.0f;  ghostDirection = ghostFacingUp;    Program.form.direction = ""; break;
                    }
                }
            }
        }

        // The movement is captured here and added to the list
        public void captureMovement()
        {
            ghostMovements.Add(Program.form.direction);
        }

    Read the article

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have now prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and every sub-server (chat, login, game) connects to the master server, as do all the clients. When a client connects, the request flow is: the client sends a request to the MS (master), which decides which SS (sub-server) to forward it to and forwards the request; the SS analyzes the message and sends a response back to the MS, which decides which client to forward it to and forwards the response to that client. Well, it looks like it's going through lots of stages; it takes double the time to process a message compared to the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or the one they use in professional games? I still want a master/sub-server approach. I just want to clarify that I'm going in the right direction before writing all my code. Thanks for any answer :)

    Read the article

  • cocos2d/OpenGL multitexturing problem

    - by Gajoo
    I've got a simple shader to test multitexturing; the problem is that both samplers are using the same image as their reference. The shader code is basically just this:

        vec4 mid = texture2D(u_texture, v_texCoord);
        float g = texture2D(u_guide, v_guideCoord);
        gl_FragColor = vec4(g, mid.g, 0, 1);

    and this is how I'm calling the draw function:

        int last_State;
        glGetIntegerv(GL_ACTIVE_TEXTURE, &last_State);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());
        ccGLEnableVertexAttribs(kCCVertexAttribFlag_TexCoords | kCCVertexAttribFlag_Position);
        glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, texCoord);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_TEXTURE_2D);

    I've already checked that mGuideTexture->getName() and getTexture()->getName() return the correct textures, but looking at the result I can tell that both samplers are reading from getTexture()->getName(). (The original post included screenshots of the image rendered with the code above and of the image rendered with the textures passed to the samplers swapped.) I'm expecting to see green objects from the first picture with red objects hanging from the top.
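
    A hedged guess at the same class of problem: if the two sampler uniforms are never assigned texture units, both default to unit 0 and therefore read the same texture. The usual setup is a fragment like this, using the uniform names from the shader above and an assumed 'program' handle:

        // After linking, while the program is bound, point each sampler at a unit.
        glUseProgram(program);                                        // 'program' is the shader program handle (assumed)
        glUniform1i(glGetUniformLocation(program, "u_texture"), 0);   // u_texture reads from GL_TEXTURE0
        glUniform1i(glGetUniformLocation(program, "u_guide"),   1);   // u_guide reads from GL_TEXTURE1

        // Then bind one texture per unit before drawing, as in the code above:
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());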

    Read the article

  • Unity iOS optimization and draw calls

    - by vzm
    I am curious about what methods I should approach in optimizing my Unity project for iOS hardware. I have very few image effects running (a directional light with low-res shadows) and I used the combine-children script from the standard assets to lessen the load on the CPU. My project currently runs with 45-57 draw calls in non-intensive segments and up to 178 in intensive segments. I heard that static batching relieves some of the stress, but in my game the environment moves around the player instead of the player moving around the environment. Is there anything else I can look at to improve the draw call count?

    Read the article

  • How difficult is it to add vibration feedback to an open source driving game?

    - by Jonathan Day
    Hi, I'm looking to use SuperTuxKart as a basis for a PhD research project. A key requirement for the game is to provide vibration feedback through the controller (obviously dependant on the controller itself). I don't believe that the game currently includes this feature and I'm trying to get a feel for how big a challenge it would be to add. My background is as a J2EE and PHP developer/architect, so I don't know C++ as such, but am prepared to give it a crack if there are resources and guides to assist, and it's not a herculean task. Alternatively, if you know of any open source games that do include vibration feedback, please feel free to let me know! Preferably the game would be of the style that the player had to navigate a character (or character's vehicle) over a repeatable course/map. TIA, JD

    Read the article

  • Pokemon Yellow warp transitions

    - by Alex Koukoulas
    So I've been trying to make a pretty accurate clone of the good old Pokemon Yellow for quite some time now, and one subtle mechanic has puzzled me. As you can see in the uploaded image, a certain colour manipulation is done in two stages after entering a warp to another game location (such as stairs or entering a building). One easy (and sloppy) way of achieving this, and the one I have been using so far, is to make three copies of each image rendered on the screen, each with its colours adjusted to match one stage of the transition. Of course, after a while this becomes tremendously time consuming. So my question is: does anyone know a better way of achieving this colour manipulation effect using Java? Thanks in advance, Alex

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1-compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly, because when I use immediate mode without shaders the texture displays fine, but I will include that part just in case. Also, when I change gl_FragColor to something like vec4(0.0, 1.0, 1.0, 1.0) I get a nice blue quad, so I know that my shader is able to set the colour. It appears that either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not XNA).

    Vertex shader:

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            // Set the position of the current vertex
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        uniform sampler2D diffuseTexture;
        void main()
        {
            // Set the output color of our current pixel
            gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st);
            //gl_FragColor = vec4 (0.0,1.0,1.0,1.0);
        }

    Drawing code:

        int vb, eb;
        GL.GenBuffers(1, out vb);
        GL.GenBuffers(1, out eb);

        // Position          Texture
        float[] verts = { 0.1f, 0.1f, 0.0f,  0.0f, 0.0f,
                          1.9f, 0.1f, 0.0f,  1.0f, 0.0f,
                          1.9f, 1.9f, 0.0f,  1.0f, 1.0f,
                          0.1f, 1.9f, 0.0f,  0.0f, 1.0f };
        uint[] indices = { 0, 1, 2, 0, 2, 3 };

        // upload data to the VBO
        GL.BindBuffer(BufferTarget.ArrayBuffer, vb);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw);
        GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);

        // Upload texture
        int buffer = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, buffer);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear);
        GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
        GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data);

        // Draw
        GL.UseProgram(shaderProgram);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.TextureCoordArray);
        GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0);
        GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);
        GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);

    Read the article

  • User generated content: a basic yet simple to use OR a complex yet powerful solution?

    - by ne5tebiu
    As stated above, which solution is better for a game based on user-generated content? The simple solution (an in-game editor) is great for gamers without experience in coding, etc.; this way every player could populate the game with content, but the content would be very limited. The complex solution would allow content with almost no limitations, but casual gamers probably could hardly make any content at all. If both solutions are used, the quality behind the second solution would be more valuable than the first solution's quantity. However, making a powerful in-game editor could take even more time and manpower than the actual game, and every gamer would have to learn how to use the new complex tool, understand it, and master it if he or she wants to make quality content.

    Read the article

  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first-person camera using Bullet: it's a rigid body with a capsule shape. I've only been using Bullet for a few days, and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it, and it collides perfectly with the world. The only problem is that the Y value moves freely, which I temporarily solved by setting the Y value of the translation vector to zero before the body is moved. This works for all cases except falling from a height: when the body drops off a tall object, you can still glide around since the translate vector's Y value is being set to zero, until you stop moving and fall to the ground (the velocity is only set when moving). So to solve this I would like to try casting a ray down from the body to determine the Y value of the world, checking the difference between that value and the Y value of the camera body, and disabling or slowing down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y value of the world where it struck. I've implemented this callback:

        struct AllRayResultCallback : public btCollisionWorld::RayResultCallback
        {
            AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld)
                : m_rayFromWorld(rayFromWorld), m_rayToWorld(rayToWorld), m_closestHitFraction(1.0) {}

            btVector3 m_rayFromWorld;
            btVector3 m_rayToWorld;
            btVector3 m_hitNormalWorld;
            btVector3 m_hitPointWorld;
            float m_closestHitFraction;

            virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult, bool normalInWorldSpace)
            {
                if (rayResult.m_hitFraction < m_closestHitFraction)
                    m_closestHitFraction = rayResult.m_hitFraction;
                m_collisionObject = rayResult.m_collisionObject;
                if (normalInWorldSpace) {
                    m_hitNormalWorld = rayResult.m_hitNormalLocal;
                } else {
                    m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis() * rayResult.m_hitNormalLocal;
                }
                m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld, m_closestHitFraction);
                return 1.0f;
            }
        };

    And in the movement function, I have this code:

        btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position
        btVector3 to(pos.x, 0, pos.z);              // not sure if 0 is correct for Y
        AllRayResultCallback callback(from, to);
        Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback);

    So I have the callback.m_hitPointWorld vector, which seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to find even an example; an example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as a rendering engine, and casting a ray down is quite straightforward with that, but I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.
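
    For comparison, Bullet ships a ready-made callback that keeps only the closest hit, which is usually all a ground probe needs. A minimal sketch; the callback and rayTest calls are Bullet API, while the wrapper function and its names are assumptions:

        #include <btBulletDynamicsCommon.h>

        // Probe straight down from the body and return the ground height, if anything was hit.
        // 'world' is the dynamics world the camera body lives in.
        bool groundHeightBelow(btDynamicsWorld* world, const btVector3& pos, btScalar& groundY) {
            btVector3 from(pos.x(), pos.y(), pos.z());
            btVector3 to(pos.x(), pos.y() - 1000.0f, pos.z());   // far enough below the body

            btCollisionWorld::ClosestRayResultCallback callback(from, to);
            world->rayTest(from, to, callback);

            if (!callback.hasHit())
                return false;
            groundY = callback.m_hitPointWorld.y();              // world-space Y of the ground under the body
            return true;
        }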

    Read the article

  • Best practice for designing a risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk. I would like it to be extensible, so that I can have multiple maps (e.g. World, North America, Eurasia, Africa), so hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how and where items should be stored and accessed. Here are the objects I see the game theoretically using: countries/territories, cities (which can be contained within territories), capitols, connections, continents, the map, and troops. At the moment, I feel like a map should have a list of continents and countries; the continents would be more of a 'logical' thing, where each continent is just a list of countries that is checked for bonuses at the start of turns, and each country should have a list of the countries it's connected to for the connections. What I can't figure out is: where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitols and cities? Do those just have a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way I'm thinking about things now? Any advice would be appreciated.
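
    As a hedged starting point, many turn-based strategy implementations store troops as a plain count per territory and keep connections as an adjacency list loaded from map data, so individual troop objects are never needed. A sketch with invented names:

        #include <string>
        #include <vector>

        struct Territory {
            std::string name;
            int owner = -1;             // player index, -1 = neutral
            int troops = 0;             // a simple count is usually enough; no per-troop objects
            bool hasCapitol = false;
            std::vector<int> neighbors; // indices of connected territories (the "connections")
        };

        struct Continent {
            std::string name;
            std::vector<int> territories; // logical grouping, checked for bonuses each turn
            int bonus = 0;
        };

        struct Map {
            std::vector<Territory> territories;
            std::vector<Continent> continents;
        };

        // Example bonus check: does 'player' hold every territory of the continent?
        bool ownsContinent(const Map& map, const Continent& c, int player) {
            for (int idx : c.territories)
                if (map.territories[idx].owner != player) return false;
            return true;
        }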

    Read the article

  • Rule of thumb for enemy design

    - by Terrance
    I'm at the early stages of developing a 2D side-scrolling, open-ended platformer (think Metroidvania) and am having a bit of difficulty finding enemy design inspiration for a sci-fi/nature/fantasy setting that isn't overly familiar or obvious. I haven't seen many articles, blogs or books that talk about the subject at great length. Is there a fair rule of thumb for coming up with enemy designs that keep your player engaged?

    Read the article

  • Behavior Trees and Animations

    - by Tom
    I have started working on the AI for a game, but am confused about how I should handle animations. I will be using a behavior tree for AI behavior and Cocos2D as my game engine. Should my "PlayAnimationWalk" just be another node in the tree? Something similar to this: [Approach Player] -> play walk animation -> move towards player -> stop walk animation. Or should the node just update an AnimationState on the blackboard and have some type of animation handler/component reference it to decide which animation should be playing? This has been driving me nuts :)
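
    Both approaches are used in practice. A sketch of the first one, where the animation is just another leaf node that returns Running until the clip finishes; everything here is illustrative and not Cocos2D API:

        #include <cstddef>
        #include <memory>
        #include <string>
        #include <utility>
        #include <vector>

        enum class Status { Running, Success, Failure };

        struct Node {
            virtual ~Node() = default;
            virtual Status tick(float dt) = 0;
        };

        // Leaf node that starts a named clip on first tick and reports Running
        // until the clip's duration has elapsed.
        struct PlayAnimation : Node {
            std::string clip;
            float duration, elapsed = 0.0f;
            bool started = false;
            PlayAnimation(std::string c, float d) : clip(std::move(c)), duration(d) {}

            Status tick(float dt) override {
                if (!started) { /* tell the animation system to start 'clip' here */ started = true; }
                elapsed += dt;
                return elapsed >= duration ? Status::Success : Status::Running;
            }
        };

        // Runs children in order, like the [Approach Player] sequence in the question.
        struct Sequence : Node {
            std::vector<std::unique_ptr<Node>> children;
            std::size_t current = 0;

            Status tick(float dt) override {
                while (current < children.size()) {
                    Status s = children[current]->tick(dt);
                    if (s != Status::Success) return s;   // Running or Failure bubbles up
                    ++current;
                }
                return Status::Success;
            }
        };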

    Read the article

  • How do client-server cooperation based games like Diablo 3 work?

    - by edgar
    Diablo 3 cooperates with Blizzard servers even during single player games. In fact, Blizzard has had problems with the games "melting their servers." I would like to ask: How do the client and the server communicate? What details does the client leave to the server, and vice versa? What details are redundant - both the client and the server know - and how often do they disagree? The previous paragraph contains the important questions, but I have a few more that I must explain my motivation towards. I am interested in the programming of botting. Ethical botting - I don't plan on actually abusing the automation to run 24/7. I just find it to be a great programming challenge to glean information from a game, and then make decisions from that information. I am stuck in the starting gate. The unofficial questions from this post would be: How can I make a bot (language, tools, libraries)? Can I get information through the communication between client and server, rather than the brute force pixel detection easily used in more static games? There probably is a trust issue, and to that all I can say is that I promise not to abuse the answers. But please feel free to answer any of the questions you feel comfortable with. Thank you!

    Read the article

  • Rendering text with stb_font results in glitches

    - by Fabian Fritz
    I'm trying to render text with OpenGL and an "inline" font taken from the stb_fonts. The relevant code for initializing the font and rendering:

        LabelFactory::LabelFactory()
        {
            static unsigned char fontpixels[STB_SOMEFONT_BITMAP_HEIGHT][STB_SOMEFONT_BITMAP_WIDTH];
            STB_SOMEFONT_CREATE(fontdata, fontpixels, STB_SOMEFONT_BITMAP_HEIGHT);

            glGenTextures(1, &texture);
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, STB_SOMEFONT_BITMAP_WIDTH, STB_SOMEFONT_BITMAP_HEIGHT,
                         0, GL_ALPHA, GL_UNSIGNED_BYTE, fontdata);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        }

        void LabelFactory::renderLabel(Label * label)
        {
            int x = label->x;
            int y = label->y;
            const char * str = label->text;

            glBindTexture(GL_TEXTURE_2D, texture);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_ALPHA_TEST);
            glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
            glEnable(GL_TEXTURE_2D);

            glBegin(GL_QUADS);
            while (*str) {
                int char_codepoint = *str++;
                stb_fontchar *cd = &fontdata[char_codepoint - STB_FONT_arial_14_usascii_FIRST_CHAR];

                glTexCoord2f(cd->s0, cd->t0); glVertex2i(x + cd->x0, y + cd->y0);
                glTexCoord2f(cd->s1, cd->t0); glVertex2i(x + cd->x1, y + cd->y0);
                glTexCoord2f(cd->s1, cd->t1); glVertex2i(x + cd->x1, y + cd->y1);
                glTexCoord2f(cd->s0, cd->t1); glVertex2i(x + cd->x0, y + cd->y1);

                x += cd->advance_int;
            }
            glEnd();
        }

    However, this results in weird glitches. I guess I'm doing something wrong with the alpha blending, but I was unable to improve it by changing the parameters. The size and length of the outline of the text that should be shown seem about right (it should read "Test Test Test").

    Read the article

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say I have 3 programs, and in each of those programs there is a view matrix uniform which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform which is shared by all programs linked to it, so I could upload the matrix just once? I tried creating a globalUniforms object which looked kinda like this:

        var globalUniforms = {
            program: {},
            // (...)
            vMatrixUniform: null,
            // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    so I could just link it to the proper programs like this: program.vMatrixUniform = globalUniforms.vMatrixUniform; and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    but unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
            vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders that follows my logic?
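
    In WebGL 1 / GL ES 2 there is no truly global uniform, so the usual compromises are either caching the location per program and looping over the programs when the camera is dirty, or, on desktop GL 3.1+, sharing a uniform buffer object between programs. A sketch of the first option in C-style GL (an extension loader such as GLEW is assumed); the same loop translates directly to JavaScript:

        #include <GL/glew.h>
        #include <vector>

        struct ProgramInfo {
            GLuint program;
            GLint  viewMatrixLoc;   // cached once, per program
        };

        // Cache the location for each program up front...
        ProgramInfo makeInfo(GLuint program) {
            return { program, glGetUniformLocation(program, "uVMatrix") };
        }

        // ...then, when the camera changes, upload the same matrix to every program.
        void uploadViewMatrix(const std::vector<ProgramInfo>& programs, const float viewMatrix[16]) {
            for (const ProgramInfo& p : programs) {
                glUseProgram(p.program);                // uniforms are per-program state
                glUniformMatrix4fv(p.viewMatrixLoc, 1, GL_FALSE, viewMatrix);
            }
        }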

    Read the article
