Search Results

Search found 38203 results on 1529 pages for 'library development'.

Page 490/1529 | < Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >

  • Can I use GLFW and GLEW together in the same code?

    - by Brendan Webster
    I use the g++ compiler, which could be causing the main problem, but I'm using GLFW for window and input management, and I am using GLEW so that I can use OpenGL 3.x functionality. I loaded in models and then tried to make Vertex and Index buffers for the data, but it turned out that I kept getting segmentation faults in the program. I finally figured out that GLEW just wasn't working with GLFW included. Do they not work together? Also I've done the context creation through GLFW so that may be another factor in the problem.

    Read the article

  • Blending textures together, texture fade over / fade in

    - by Deukalion
    What is the best way to render a texture-overlapping effect, like in this example? I want either the grass to fade into the snow texture, or the other way around, with no rough edges; somehow make them blend over, so the grass has a bit of snow or the snow has a bit of grass. Is this possible at runtime? I don't render this using the SpriteBatch, since the ground isn't made of rectangles (they can be moved). This is the way I render each shape (each one of those squares): // LoadTexture // Apply EffectPass device.DrawUserIndexedPrimitives<VertexPositionNormalTexture> ( PrimitiveType.TriangleList, render.Item.Points, // Array of VertexPositionNormalTexture 0, render.Item.Points.Length, render.Item.Indexes, // Array of int indexes (triangulation) 0, render.Item.Indexes.Length / 3, VertexPositionNormalTexture.VertexDeclaration );
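    One possible approach, sketched here as a rough illustration rather than the poster's code: precompute a blended transition texture on the CPU from the two ground textures and a grayscale mask, and assign it to the border squares. The texture names and the mask are assumptions; for fully dynamic blending the same lerp would normally live in a pixel shader that samples both textures plus the mask.

    // Hedged sketch: blend two equally sized textures using a grayscale mask.
    // grassTex, snowTex and maskTex are placeholders and assumed to be the same size.
    Texture2D BlendWithMask(GraphicsDevice device, Texture2D grassTex, Texture2D snowTex, Texture2D maskTex)
    {
        int count = grassTex.Width * grassTex.Height;
        Color[] grass = new Color[count];
        Color[] snow = new Color[count];
        Color[] mask = new Color[count];
        grassTex.GetData(grass);
        snowTex.GetData(snow);
        maskTex.GetData(mask);

        Color[] result = new Color[count];
        for (int i = 0; i < count; i++)
        {
            // The mask's red channel is the blend factor: 0 = pure grass, 255 = pure snow.
            float t = mask[i].R / 255f;
            result[i] = Color.Lerp(grass[i], snow[i], t);
        }

        Texture2D blended = new Texture2D(device, grassTex.Width, grassTex.Height);
        blended.SetData(result);
        return blended;
    }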

    Read the article

  • XNA - Drawing 2D Primitives (Boxes) and Understanding Matrices in Computer Graphics

    - by MintyAnt
    I have two issues which I wish to solve by creating 2D primitives in XNA. In my game, I wish to have a "debug mode" which will draw a red box around all hitboxes in the game (red outline, transparent inside). This would allow us to see where the hitboxes are being drawn AND still have the sprite graphics being drawn. I also wish to further understand how matrices work within computer graphics. I have a basic theoretical grasp of how they work, but I really just want to apply some of my knowledge or find a good tutorial on it. To do this, I wish to draw my own 2D primitives (with Vector3s) and apply different transformation matrices to them. I was trying to find a tutorial on drawing primitives using Direct3D, but most tutorials are only for C++, and just tell me to use XNA's SpriteBatch. I wish to have more control over my program than SpriteBatch gives me. Any help on using Direct3D or any other suggestions would be greatly appreciated. Thank you.
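    For illustration, a hedged sketch (not from the post) of drawing a hitbox outline without SpriteBatch: a line strip rendered through BasicEffect and DrawUserPrimitives. The same effect exposes World/View/Projection, so it doubles as a place to experiment with transformation matrices; the orthographic projection below is an assumption that maps screen pixels 1:1.

    // Hedged sketch: draw a red rectangle outline with line primitives and a BasicEffect.
    void DrawHitboxOutline(GraphicsDevice device, BasicEffect effect, Rectangle hitbox)
    {
        VertexPositionColor[] verts =
        {
            new VertexPositionColor(new Vector3(hitbox.Left,  hitbox.Top,    0), Color.Red),
            new VertexPositionColor(new Vector3(hitbox.Right, hitbox.Top,    0), Color.Red),
            new VertexPositionColor(new Vector3(hitbox.Right, hitbox.Bottom, 0), Color.Red),
            new VertexPositionColor(new Vector3(hitbox.Left,  hitbox.Bottom, 0), Color.Red),
            new VertexPositionColor(new Vector3(hitbox.Left,  hitbox.Top,    0), Color.Red), // close the loop
        };

        effect.VertexColorEnabled = true;
        // Pixel-space orthographic projection; swap in any World/View/Projection to experiment.
        effect.Projection = Matrix.CreateOrthographicOffCenter(0, device.Viewport.Width, device.Viewport.Height, 0, 0, 1);

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawUserPrimitives(PrimitiveType.LineStrip, verts, 0, 4);
        }
    }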

    Read the article

  • Weird y offset when using custom frag shader (Cocos2d-x)

    - by Mister Guacamole
    I'm trying to mask a sprite so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that it seems my texture has its y-coordinate offset after passing through the shader. This is the init method of the sprite (GroundZone) I want to mask: bool GroundZone::initWithSize(Size size) { // [...] // Setup the mask of the sprite m_mask = RenderTexture::create(textureWidth, textureHeight); m_mask->retain(); m_mask->setKeepMatrix(true); Texture2D *maskTexture = m_mask->getSprite()->getTexture(); maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask // Load the custom frag shader with a default vert shader as the sprite’s program FileUtils *fileUtils = FileUtils::getInstance(); string vertexSource = ccPositionTextureA8Color_vert; string fragmentSource = fileUtils->getStringFromFile( fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh")); GLProgram *shader = new GLProgram; shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str()); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION); shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS); shader->link(); CHECK_GL_ERROR_DEBUG(); shader->updateUniforms(); CHECK_GL_ERROR_DEBUG(); int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture"); shader->setUniformLocationWith1i(maskTexUniformLoc, 1); this->setShaderProgram(shader); shader->release(); // [...] } These are the custom drawing methods for actually drawing the mask over the sprite: You need to know that m_mask is modified externally by another class, the onDraw() method only render it. void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) { m_renderCommand.init(_globalZOrder); m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated); renderer->addCommand(&m_renderCommand); Sprite::draw(renderer, transform, transformUpdated); } void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) { GLProgram *shader = this->getShaderProgram(); shader->use(); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName()); glActiveTexture(GL_TEXTURE0); } Below is the method (located in another class, GroundLayer) that modify the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (Point (0,0) is down-left). void GroundLayer::drawTunnel(Point start, Point end) { // To dig a line, we need first to get the texture of the zone we will be digging into. Then we get the // relative position of the start and end point in the zone's node space. Finally we use the custom shader to // draw a mask over the existing texture. for (auto it = _children.begin(); it != _children.end(); it++) { GroundZone *zone = static_cast<GroundZone *>(*it); Point nodeStart = zone->convertToNodeSpace(start); Point nodeEnd = zone->convertToNodeSpace(end); // Now that we have our two points converted to node space, it's easy to draw a mask that contains a line // going from the start point to the end point and that is then applied over the current texture. 
Size groundZoneSize = zone->getContentSize(); RenderTexture *rt = zone->getMask(); rt->begin(); { // Draw a line going from start and going to end in the texture, the line will act as a mask over the // existing texture DrawNode *line = DrawNode::create(); line->retain(); line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED); line->visit(); } rt->end(); } } Finally, here's the custom shader I wrote. #ifdef GL_ES precision mediump float; #endif varying vec2 v_texCoord; uniform sampler2D u_texture; uniform sampler2D u_alphaMaskTexture; void main() { float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a; float texAlpha = texture2D(u_texture, v_texCoord).a; float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible vec3 texColor = texture2D(u_texture, v_texCoord).rgb; gl_FragColor = vec4(texColor, blendAlpha); return; } I got a problem with the y coordinates. Indeed, it seems that once it has passed through my custom shader, the sprite's texture is not at the right place: Without custom shader (the sprite is the brown thing): With custom shader: What's going on here? Thanks :) EDIT It looks like after passing through the shader when I set the position of the sprite I set it in points, with (0,0) being in the top-right. Indeed, when I do sprite->setPosition(320, 480), the sprite is perfectly placed at the top of the screen.

    Read the article

  • How can I support objects larger than a single tile in a 2D tile engine?

    - by Yheeky
    I'm currently working on a 2D engine containing an isometric tile map. It's running quite well, but I'm not sure if I've chosen the best approach for that kind of engine. To give you an idea of what I'm thinking about right now, let's have a look at a basic object for a tile map and its objects: public class TileMap { public List<MapRow> Rows = new List<MapRow>(); public int MapWidth = 50; public int MapHeight = 50; } public class MapRow { public List<MapCell> Columns = new List<MapCell>(); } public class MapCell { public int TileID { get; set; } } With those objects it is only possible to assign a tile to a single MapCell. What I want my engine to support is groups of MapCells, since I would like to add objects to my tile map (e.g. a house with a size of 2x2 tiles). How should I do that? Should I change my MapCell object so that it has a reference to other related tiles, and how can I find an object when clicking on a single MapCell? Or should I take another approach and use a global container with all objects in it?
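    One common pattern, sketched below as an assumption rather than something taken from the post: keep large objects in their own class and give every covered MapCell a back-reference, so a click on any cell resolves to the whole object. MapObject and the Occupant property are illustrative names.

    // Hedged sketch: a multi-tile object plus a back-reference from each covered cell.
    public class MapObject
    {
        public int TileID;                    // graphic to draw (e.g. the 2x2 house)
        public int OriginX, OriginY;          // anchor cell, e.g. the top-left corner
        public int Width = 2, Height = 2;     // size in cells
    }

    public class MapCell
    {
        public int TileID { get; set; }          // ground tile
        public MapObject Occupant { get; set; }  // null if no large object covers this cell
    }

    // Placing an object marks every cell it covers:
    void PlaceObject(TileMap map, MapObject obj)
    {
        for (int y = obj.OriginY; y < obj.OriginY + obj.Height; y++)
            for (int x = obj.OriginX; x < obj.OriginX + obj.Width; x++)
                map.Rows[y].Columns[x].Occupant = obj;
    }

    // Hit-testing a clicked cell is then a plain lookup:
    // MapObject clicked = map.Rows[cellY].Columns[cellX].Occupant;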

    Read the article

  • Tile-based 2D collision detection problems

    - by Vee
    I'm trying to follow this tutorial http://www.tonypa.pri.ee/tbw/tut05.html to implement real-time collisions in a tile-based world. I find the center coordinates of my entities thanks to these properties: public float CenterX { get { return X + Width / 2f; } set { X = value - Width / 2f; } } public float CenterY { get { return Y + Height / 2f; } set { Y = value - Height / 2f; } } Then in my update method, in the player class, which is called every frame, I have this code: public override void Update() { base.Update(); int downY = (int)Math.Floor((CenterY + Height / 2f - 1) / 16f); int upY = (int)Math.Floor((CenterY - Height / 2f) / 16f); int leftX = (int)Math.Floor((CenterX + Speed * NextX - Width / 2f) / 16f); int rightX = (int)Math.Floor((CenterX + Speed * NextX + Width / 2f - 1) / 16f); bool upleft = Game.CurrentMap[leftX, upY] != 1; bool downleft = Game.CurrentMap[leftX, downY] != 1; bool upright = Game.CurrentMap[rightX, upY] != 1; bool downright = Game.CurrentMap[rightX, downY] != 1; if(NextX == 1) { if (upright && downright) CenterX += Speed; else CenterX = (Game.GetCellX(CenterX) + 1)*16 - Width / 2f; } } downY, upY, leftX and rightX should respectively find the lowest Y position, the highest Y position, the leftmost X position and the rightmost X position. I add + Speed * NextX because in the tutorial the getMyCorners function is called with these parameters: getMyCorners (ob.x+ob.speed*dirx, ob.y, ob); The GetCellX and GetCellY methods: public int GetCellX(float mX) { return (int)Math.Floor(mX / SGame.Camera.TileSize); } public int GetCellY(float mY) { return (int)Math.Floor(mY / SGame.Camera.TileSize); } The problem is that the player "flickers" while hitting a wall, and the corner detection doesn't even work correctly since it can overlap walls that only hit one of the corners. I do not understand what is wrong. In the tutorial the ob.x and ob.y fields should be the same as my CenterX and CenterY properties, and the ob.width and ob.height should be the same as Width / 2f and Height / 2f. However it still doesn't work. Thanks for your help.
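    For illustration, a small helper along the same lines (a hedged sketch reusing the post's Game.CurrentMap and 16-pixel tiles): convert an axis-aligned box to the range of tiles it covers and test them all, instead of juggling the four corners individually.

    // Hedged sketch: true if no solid tile (value 1) overlaps the given box.
    bool IsAreaFree(float left, float top, float width, float height)
    {
        int firstX = (int)Math.Floor(left / 16f);
        int lastX  = (int)Math.Floor((left + width - 1) / 16f);
        int firstY = (int)Math.Floor(top / 16f);
        int lastY  = (int)Math.Floor((top + height - 1) / 16f);

        for (int y = firstY; y <= lastY; y++)
            for (int x = firstX; x <= lastX; x++)
                if (Game.CurrentMap[x, y] == 1)
                    return false;
        return true;
    }

    // Usage idea: test the box at its proposed position before committing the move, e.g.
    // IsAreaFree(CenterX - Width / 2f + Speed * NextX, CenterY - Height / 2f, Width, Height).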

    Read the article

  • ContentManager in XNA can't find any XML

    - by user36385
    I'm making a game in XNA 4 and this is the first time I'm using the content loader to initialize a simple class with an XML file, but no matter how many guides I follow, or how simple or complicated my XML file is, the ContentManager can't find the file; the debugger keeps telling me: "A first chance exception of type 'Microsoft.Xna.Framework.Content.ContentLoadException' occurred in Microsoft.Xna.Framework.dll". I'm really confused because I can load SpriteFonts and Texture2Ds without a problem... I created the following XML (the most basic XNA XML): <?xml version="1.0" encoding="utf-8" ?> <XnaContent> <Asset Type="System.String">Hello</Asset> </XnaContent> and I try to load it in the LoadContent method in my main class like this: System.String hello = Content.Load<System.String>("NewXmlFile"); Is there something I'm doing wrong? I really appreciate your help.
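    Note that Content.Load only sees .xnb files produced by the content pipeline, so the XML has to live in the Content project and be built with the XML importer for the exception to go away. As a hedged fallback sketch (the file name and copy-to-output setting are assumptions), the same file can also be read directly at runtime with LINQ to XML, bypassing the pipeline entirely:

    using System.Xml.Linq;

    // Hedged sketch: read the XnaContent XML at runtime without the content pipeline.
    // Assumes "NewXmlFile.xml" is copied to the output directory.
    string LoadHello()
    {
        XDocument doc = XDocument.Load("NewXmlFile.xml");
        return doc.Root.Element("Asset").Value;   // "Hello"
    }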

    Read the article

  • Extract derived 3D scaling from a 3D Sprite to set to a 2D billboard

    - by Bill Kotsias
    I am trying to get the derived position and scaling of a 3D Sprite and set them to a 2D Sprite. I have managed to do the first part like this: var p:Point = sprite3d.local3DToGlobal(new Vector3D(0,0,0)); billboard.x = p.x; billboard.y = p.y; But I can't get the scaling part correctly. I am trying this: var mat:Matrix3D = sprite3d.transform.getRelativeMatrix3D(stage); // get derived matrix(?) var scaleV:Vector3D = mat.decompose()[2]; // get scaling vector from derived matrix var scale:Number = scaleV.length; billboard.scaleX = scale; billboard.scaleY = scale; ...but the result is apparently wrong. PS. One might ask what I am trying to achieve. I am trying to create "billboard" 3D sprites, i.e. sprites which are affected by all 3D transformations except rotations, thus they always face the "camera".

    Read the article

  • How much will it cost to create a tile set similar to HoM&M 2?

    - by Alexey Petrushin
    How much will it cost to create a tile set similar to HoM&M 2? I'm mostly interested in the tile-set graphics only; no animation is needed, and the big images of towns and creatures can be done as quick and dirty pencil sketches. The quality and number of tiles should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it will take and how much it will cost?

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place. Physics/Movement The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity uses very basic integration to change the entities position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity. State Change and Update Packets When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the players speed from it's movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation. 
Prediction and Lag Compensation As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag-compensation is clearly required within the client's local simulation / physics engine. Problems I've been struggling with various algorithms, and have a number of questions and problems: Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. State change is received by the client, client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds to all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments? What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
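    For illustration, a minimal dead-reckoning sketch for remote entities, under stated assumptions (Entity, RenderPosition and the packet fields are placeholder names, and the correction rate is arbitrary): keep extrapolating with the last reported velocity, and when an update arrives, fold the positional error away over a short window instead of snapping.

    // Hedged sketch: dead reckoning with smooth error correction for one remote entity.
    class RemotePredictor
    {
        Vector3 positionError;            // how far the old prediction was off at the last update
        const float CorrectionRate = 5f;  // fraction of the error removed per second (assumed)

        // Called when an "avatar position update" packet arrives.
        public void OnServerUpdate(Entity entity, Vector3 reportedPosition, Vector3 reportedVelocity, float latencySeconds)
        {
            // Advance the reported state by the estimated latency, then record the discrepancy.
            Vector3 predictedNow = reportedPosition + reportedVelocity * latencySeconds;
            positionError = entity.Position - predictedNow;

            entity.Position = predictedNow;     // the plain physics integration continues from here
            entity.Velocity = reportedVelocity;
        }

        // Called every frame after the normal physics integration.
        public void Update(Entity entity, float elapsedSeconds)
        {
            // Re-apply a shrinking share of the old error so the avatar drifts into place instead of snapping.
            positionError *= Math.Max(0f, 1f - CorrectionRate * elapsedSeconds);
            if (positionError.LengthSquared() < 0.0001f)
                positionError = Vector3.Zero;
            entity.RenderPosition = entity.Position + positionError;
        }
    }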

    Read the article

  • OpenGL camera moves faster than player

    - by opiop65
    I have a side scroller game made in OpenGL, and I'm trying to center the player in the viewport when he moves. I know how to do it: cameraX = Width / 2 / TileSize - playerPosX cameraY = Height / 2 / TileSize - playerPosY However, I have a problem. The player and "camera" move, but the player moves faster than the "camera" scrolls. So, the player can actually move out of the screen. Some code, this is how I translate the camera: public Camera(){ } public void update(Player p){ glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2, -p.getPos().y - Main.HEIGHT / 64 / 2, 1); } Here's how I move the player: public void update(){ if(Keyboard.isKeyDown(Keyboard.KEY_D)){ this.move(MOVESPEED, 0); } if(Keyboard.isKeyDown(Keyboard.KEY_A)){ this.move(-MOVESPEED, 0); } } The move method: public void move(float x, float y){ this.getPos().set(this.getPos().x + x, this.getPos().y + y); } And then after I move the player, I update the player's geometry, which shouldn't matter. What am I doing wrong here, this seems like such a simple problem, yet it doesn't work!

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
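    For the rectangle-rectangle case, the overlapping region is itself an axis-aligned rectangle and can be computed directly by clamping the edges. Below is a hedged sketch of that math in C# terms (SDL2's SDL_IntersectRect does the equivalent clamping); the circle case can be bounded the same way via the closest point on the rectangle.

    // Hedged sketch: the overlap of two axis-aligned rectangles, empty if they don't touch.
    Rectangle Overlap(Rectangle a, Rectangle b)
    {
        int left   = Math.Max(a.Left,   b.Left);
        int top    = Math.Max(a.Top,    b.Top);
        int right  = Math.Min(a.Right,  b.Right);
        int bottom = Math.Min(a.Bottom, b.Bottom);

        if (right <= left || bottom <= top)
            return Rectangle.Empty;   // no intersection
        return new Rectangle(left, top, right - left, bottom - top);
    }

    // For a circle, clamp its centre to the rectangle to get the closest point; if that point
    // lies within the radius the shapes overlap, and the clamped bounds give the region to refine.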

    Read the article

  • Decal implementation

    - by dreta
    I had issues finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm if I got the decal implementation right? You define a cube of any dimension that'll define the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that'll reference the texture you're projecting. To do this you take the vectors representing the height, width and depth of the cube in common space, so that e.g. the bottom left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then you invert this matrix. You multiply the vertices of the saved triangles by this matrix; that way you get their coordinates inside of a 0-to-1-size cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you rerender the saved triangles onto the scene and they overwrite the area of projection with the projected image. Now for the questions I couldn't find answers to. Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there'll be some z-fighting occurring for the projected texture. Also, is the way of getting the UV coordinates correct?
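    For illustration, the UV step described above written out as a hedged sketch with XNA-style row-vector matrices (the names are placeholders, not code from the post):

    // Hedged sketch of the described UV computation. 'corner' is the cube corner chosen as the
    // origin; u, v and w are the cube's width, height and depth vectors in common space.
    Vector2 ProjectedUV(Vector3 vertex, Vector3 corner, Vector3 u, Vector3 v, Vector3 w)
    {
        // Basis matrix: maps the unit cube (0..1)^3 onto the projection volume.
        Matrix cubeToWorld = new Matrix(
            u.X, u.Y, u.Z, 0,
            v.X, v.Y, v.Z, 0,
            w.X, w.Y, w.Z, 0,
            corner.X, corner.Y, corner.Z, 1);

        // The inverse maps a world-space vertex back into the unit cube.
        Matrix worldToCube = Matrix.Invert(cubeToWorld);
        Vector3 p = Vector3.Transform(vertex, worldToCube);

        // Two of the unit-cube coordinates become the texture coordinates;
        // here u spans the texture's x axis and v its y axis.
        return new Vector2(p.X, p.Y);
    }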

    Read the article

  • Google play game services and Facebook integration in one game

    - by Ineentho
    We are creating a cross-platform game for iOS and Android. We have thought about which services we should integrate achievements and scoreboards with. For the iOS part, we are pretty sure this is how we want to do it, in order from when the user opens the app for the first time: Connect with Game Center (should be automatic, the user shouldn't even notice?). We will also get the player's nickname for public scoreboards here. Ask if the user wants to connect with Facebook so that we can compare the player's high scores with their friends'. We could add Google Play Game Services there as well, but I don't feel like that adds anything to the experience for the end user. Now comes the tricky part: Android. We thought we could do it just like on iOS, except that we replace Game Center with Google Play Game Services. However, unlike Game Center, Game Services will ask the user to log in to their Google+ account and allow us to access their account. So now what we have is a double login, first with Google+ and then with Facebook. What will users think about that? Should we scrap Play Services entirely and just ask the user for a nickname within our app and use Facebook for achievements?

    Read the article

  • Normal maps red in OpenGL?

    - by KaiserJohaan
    I am using Assimp to import 3d models, and FreeImage to parse textures. The problem I am having is that the normal maps are actually red rather than blue when I try to render them as normal diffuse textures. http://i42.tinypic.com/289ing3.png When I open the images in a image-viewing program they do indeed show up as blue. Heres when I create the texture; OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } Here is my fragment shader. You can see I just commented out the normal-map parsing and treated the normal map texture as the diffuse texture to display it and illustrate the problem. As for the rest of the code it interacts as expected with the diffuse textures so I dont see a obvious problem there. "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * 
Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Why is this? does normal-map textures need some sort of special treatment in opengl?

    Read the article

  • Surface of Revolution with 3D surface

    - by user5584
    I have to use this function to get a Surface of Revolution (homework). newVertex = (oldVertex.y, someFunc1(oldVertex.x, oldVertex.y), someFunc2(oldVertex.x, oldVertex.y)); As far as I know (FIXME), a Surface of Revolution means rotating a (2D) curve around an axis in 3D. But this vertex computation gives a 3D plane (FIXME again :D), so the rotation of it isn't obvious. Am I misunderstanding something?
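    For reference, rotating a profile curve (f(t), g(t)) about the y-axis gives the standard surface-of-revolution parametrization below; this is a restatement of the general definition, not necessarily the exact form the assignment expects:

    S(t, \theta) = \bigl( f(t)\cos\theta,\; g(t),\; f(t)\sin\theta \bigr), \qquad t \in [a, b],\ \theta \in [0, 2\pi)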

    Read the article

  • Testing on Device Other Than the Known Brand Question (Local and Imported Phone Question)

    - by David Dimalanta
    I have a question. When testing on a device using Eclipse, it's easy to install and add the device software for the specific brands commonly used in game testing, like Samsung, Google, T-Mobile, and HTC, according to the Android Developers website. What if I'm using other brands that run on Android to test the program via Eclipse (i.e. MyPhone, Starmobile)? What should I look for and download in order to enable testing on phones from those brands, other than the brands that are known and commonly used: the model number or simply the brand? Here are some examples of these brands, other than the well-known ones, that run on Android: Starmobile Engage 7 (http://www.lazada.com.ph/Starmobile-Engage-7-Android-40-4GB-with-Wi-Fi-Black-Starmobile-Mercury-B201-COMBO-39833.html/) My|Phone A898 Duo (http://www.myphone.com.ph/#!a898-duo/c1yt) Also, take note that I'm a Filipino programmer working in the Philippines, testing our local smartphones with the Android game or app we created. I hope you can understand me; thanks for your help.

    Read the article

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a question cross-posted from Stack Overflow; I thought it would be more appropriate here. There is a lot of code I could be posting. To avoid overloading the page with code, I will post any part of it if requested. I am working from the ParticleGS DirectX 10 sample to build a geometry-shader-based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem which was similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening: http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, until I realised that I needed to explicitly set the geometry shader to null for the other effects. I solved that problem, as you can see in the video, as the rest of the scene is drawing properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too; the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece of code you request.

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles) after first screening for any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
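    A hedged alternative sketch (not the poster's code): instead of stepping a zero-size rectangle and testing every clutter rectangle at each step, sample only the grid cells the segment crosses and ask the map whether each cell is blocked. IsBlocked is a placeholder for whatever per-cell lookup (walls, clutter registered per cell) the map can answer cheaply; the cell size and sampling density are assumptions.

    // Hedged sketch: walk the cells a segment crosses and stop at the first blocked one.
    static bool LineOfSight(Vector2 from, Vector2 to, float cellSize, Func<int, int, bool> isBlocked)
    {
        Vector2 dir = to - from;
        int steps = (int)Math.Ceiling(dir.Length() / (cellSize * 0.5f));  // sample roughly twice per cell
        if (steps == 0)
            return true;
        Vector2 step = dir / steps;

        Vector2 pos = from;
        for (int i = 0; i <= steps; i++)
        {
            int cellX = (int)Math.Floor(pos.X / cellSize);
            int cellY = (int)Math.Floor(pos.Y / cellSize);
            if (isBlocked(cellX, cellY))
                return false;
            pos += step;
        }
        return true;
    }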

    Read the article

  • Clipping polygons in XNA with stencil (not using spritebatch)

    - by Blau
    The problem... i'm drawing polygons, in this case boxes, and i want clip children polygons with its parent's client area. // Class Region public void Render(GraphicsDevice Device, Camera Camera) { int StencilLevel = 0; Device.Clear( ClearOptions.Stencil, Vector4.Zero, 0, StencilLevel ); Render( Device, Camera, StencilLevel ); } private void Render(GraphicsDevice Device, Camera Camera, int StencilLevel) { Device.SamplerStates[0] = this.SamplerState; Device.Textures[0] = this.Texture; Device.RasterizerState = RasterizerState.CullNone; Device.BlendState = BlendState.AlphaBlend; Device.DepthStencilState = DepthStencilState.Default; Effect.Prepare(this, Camera ); Device.DepthStencilState = GlobalContext.GraphicsStates.IncMask; Device.ReferenceStencil = StencilLevel; foreach ( EffectPass pass in Effect.Techniques[Technique].Passes ) { pass.Apply( ); Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount ); } foreach ( Region child in ChildrenRegions ) { child.Render( Device, Camera, StencilLevel + 1 ); } Effect.Prepare( this, Camera ); // This does not works Device.BlendState = GlobalContext.GraphicsStates.NoWriteColor; Device.DepthStencilState = GlobalContext.GraphicsStates.DecMask; Device.ReferenceStencil = StencilLevel; // This should be +1, but in that case the last drrawed is blue and overlap all foreach ( EffectPass pass in Effect.Techniques[Technique].Passes ) { pass.Apply( ); Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount ); } } public static class GraphicsStates { public static BlendState NoWriteColor = new BlendState( ) { ColorSourceBlend = Blend.One, AlphaSourceBlend = Blend.One, ColorDestinationBlend = Blend.InverseSourceAlpha, AlphaDestinationBlend = Blend.InverseSourceAlpha, ColorWriteChannels1 = ColorWriteChannels.None }; public static DepthStencilState IncMask = new DepthStencilState( ) { StencilEnable = true, StencilFunction = CompareFunction.Equal, StencilPass = StencilOperation.IncrementSaturation, }; public static DepthStencilState DecMask = new DepthStencilState( ) { StencilEnable = true, StencilFunction = CompareFunction.Equal, StencilPass = StencilOperation.DecrementSaturation, }; } How can achieve this? EDIT: I've just relized that the NoWriteColors.ColorWriteChannels1 should be NoWriteColors.ColorWriteChannels. :) Now it's clipping right. Any other approach?
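    As one "other approach" (a hedged XNA 4 sketch, not code from the post): when the parent's client area is an axis-aligned rectangle in screen space, the scissor test clips children with far less state juggling than stencil increments; for arbitrarily shaped or rotated parents the stencil approach is still needed.

    // Hedged sketch: clip child drawing to an axis-aligned client area with the scissor test.
    static readonly RasterizerState ScissorOn = new RasterizerState
    {
        CullMode = CullMode.None,
        ScissorTestEnable = true
    };

    void RenderClipped(GraphicsDevice device, Rectangle clientAreaInPixels, Action drawChildren)
    {
        Rectangle previous = device.ScissorRectangle;
        device.RasterizerState = ScissorOn;
        device.ScissorRectangle = clientAreaInPixels;

        drawChildren();                      // anything outside the rectangle is cut off

        device.ScissorRectangle = previous;  // restore for the rest of the frame
    }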

    Read the article

  • Jump and run HTML5 Game Framework

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for this. We have run into some difficulties and would like to ask you for some advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements. We would implement a Scene for playing, one for pause, etc. and switch between them. Each scene can in turn contain multiple "Layers", each representing a canvas. These Layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each ObjectEntity has its own temporary canvas, so that one entity can draw an image while another contains a rectangle. We set an activeScene in our Stage, so when the game is played, only the active scene is drawn. Calling activeScene.draw() asks all sublayers to draw, which draw their entities (calling drawImage(entity.canvas)). But is this good practice, having multiple canvases to draw? Every game loop, each layer context is cleared and drawn again. For example, if we just have a still background layer, wouldn't it be more useful to draw it once and not clear and redraw it every time? Or should we use a global canvas, for example in the Stage, and just use that canvas to draw? We thought this would be too expensive... Another question: do you have any advice on how we could dive into implementing our own framework? Most material we find online relies on existing frameworks, or just implements a game without building a framework.

    Read the article

  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX, and, as I understand it, the two have coordinate systems of different handedness. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D maths functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use these?
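    The usual mesh-side recipe, sketched below as a general illustration rather than a statement about this particular demo: negate the Z of positions and normals and flip each triangle's winding so faces aren't culled; matrices and projections then need the matching-handedness variants, which is exactly what those LH/RH function pairs in the Direct3D math libraries are for.

    // Hedged sketch: convert a right-handed mesh to left-handed in place.
    void FlipHandedness(Vector3[] positions, Vector3[] normals, int[] indices)
    {
        for (int i = 0; i < positions.Length; i++)
            positions[i].Z = -positions[i].Z;
        for (int i = 0; i < normals.Length; i++)
            normals[i].Z = -normals[i].Z;

        // Swap two indices of every triangle so the winding order flips with the mirror.
        for (int i = 0; i < indices.Length; i += 3)
        {
            int tmp = indices[i + 1];
            indices[i + 1] = indices[i + 2];
            indices[i + 2] = tmp;
        }
    }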

    Read the article

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program, I have the parameters set as follows: gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage); I'm using this image: On the properties of this image it says it's 32 bit depth, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's power of 2. And I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the chrome console: INVALID_VALUE: texImage2D: invalid image (index):101 WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled. the drawArrays function is simply: "gl.drawArrays(gl.TRIANGLES, 0, 6);" using 6 vertices to make a rectangle.

    Read the article

  • What are some valuable conferences for game developers?

    - by Tommy
    Lately I have been thinking of visiting a game developer conference, so I searched the Internet, but I didn't find a thorough list of available conferences. I know some of them, like GDC in San Francisco, but I was wondering what other game developer conferences are out there. So my question is: what game dev conferences do you know of that are valuable for game developers and game designers? Have you visited any of these conferences yourself? Is there a skill level needed to appreciate such a conference? I am aware that there is no "true" answer to this question, but I think that an overview of existing conferences could be useful for game developers of all levels.

    Read the article

  • Taking a fixed direction on a hemisphere and projecting it onto the normal (OpenGL)

    - by Maik Xhani
    I am trying to perform sampling using hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have those directions: vec3 sampleDirections[6] = {vec3(0.0f, 1.0f, 0.0f), vec3(0.0f, 0.5f, 0.866025f), vec3(0.823639f, 0.5f, 0.267617f), vec3(0.509037f, 0.5f, -0.700629f), vec3(-0.509037f, 0.5f, -0.700629), vec3(-0.823639f, 0.5f, 0.267617f)}; now I want the first direction to be projected on the normal and the others accordingly. I tried these 2 codes, both failing. This is what I used for random sampling (it doesn't seem to work well, the samples seem to be biased towards a certain direction) and I just used one of the fixed directions instead of s (here is the code of the random sample, when i used it with the fixed direction i didn't use theta and phi). vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 ) float theta = acos(sqrt(1.0f-rand1)); float phi = 6.283185f * rand2; vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)); vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034))); vec3 u = cross(v, n); u = s.x*u; v = s.y*v; vec3 w = s.z*n; vec3 direction = u+v+w; return normalize(direction); } ** EDIT ** This is the new code vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir) { vec3 x; vec3 z; if(abs(n.x) < abs(n.y)){ if(abs(n.x) < abs(n.z)){ x = vec3(1.0f,0.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } }else{ if(abs(n.y) < abs(n.z)){ x = vec3(0.0f,1.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } } z = normalize(cross(x,n)); x = cross(n,z); mat3 M = mat3( x.x, n.x, z.x, x.y, n.y, z.y, x.z, n.z, z.z); return M*sampleDir; } So if my n = (0,0,1); and my sampleDir = (0,1,0); shouldn't the M*sampleDir be (0,0,1)? Cause that is what I was expecting.
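    The same basis construction written as plain vector math, as a hedged C#-style sketch with XNA vectors (the function and variable names are illustrative). With n = (0, 0, 1), a tangent-space sample of (0, 1, 0) does come out along n, as expected:

    // Hedged sketch: transform a hemisphere sample defined around +Y into the frame of 'normal'.
    Vector3 OrientSample(Vector3 normal, Vector3 sampleDir)
    {
        // Pick any axis that is not parallel to the normal to start the basis.
        Vector3 helper = Math.Abs(normal.Y) < 0.99f ? Vector3.UnitY : Vector3.UnitX;

        Vector3 tangent = Vector3.Normalize(Vector3.Cross(helper, normal));
        Vector3 bitangent = Vector3.Cross(normal, tangent);

        // sampleDir.Y is the "up" component of the sample, so it goes along the normal.
        return sampleDir.X * tangent + sampleDir.Y * normal + sampleDir.Z * bitangent;
    }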

    Read the article
