Search Results

Search found 25377 results on 1016 pages for 'development'.


  • Ragdoll continuous movement

    - by Siddharth
    I have created a ragdoll for my game, but the joints are not implemented correctly, so the body parts keep moving and the ragdoll never comes to rest in a fixed place. Here is my setup code; please suggest what I should change so that it stands still.

        chest = new Chest(pX, pY, gameObject.getmChestTextureRegion(), gameObject);
        head = new Head(pX, pY - 16, gameObject.getmHeadTextureRegion(), gameObject);
        leftHand = new Hand(pX - 6, pY + 6,
                gameObject.getmHandTextureRegion().clone(), gameObject);
        rightHand = new Hand(pX + 12, pY + 6,
                gameObject.getmHandTextureRegion().clone(), gameObject);
        rightHand.setFlippedHorizontal(true);
        leftLeg = new Leg(pX, pY + 18,
                gameObject.getmLegTextureRegion().clone(), gameObject);
        rightLeg = new Leg(pX + 7, pY + 18,
                gameObject.getmLegTextureRegion().clone(), gameObject);
        rightLeg.setFlippedHorizontal(true);

        gameObject.getmScene().registerTouchArea(chest);
        gameObject.getmScene().attachChild(chest);
        gameObject.getmScene().registerTouchArea(head);
        gameObject.getmScene().attachChild(head);
        gameObject.getmScene().registerTouchArea(leftHand);
        gameObject.getmScene().attachChild(leftHand);
        gameObject.getmScene().registerTouchArea(rightHand);
        gameObject.getmScene().attachChild(rightHand);
        gameObject.getmScene().registerTouchArea(leftLeg);
        gameObject.getmScene().attachChild(leftLeg);
        gameObject.getmScene().registerTouchArea(rightLeg);
        gameObject.getmScene().attachChild(rightLeg);

        // head revolute joint
        revoluteJointDef = new RevoluteJointDef();
        revoluteJointDef.enableLimit = true;
        revoluteJointDef.initialize(head.getHeadBody(), chest.getChestBody(),
                chest.getChestBody().getWorldCenter());
        revoluteJointDef.localAnchorA.set(0f, 0f);
        revoluteJointDef.localAnchorB.set(0f, -0.5f);
        revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
        revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
        headRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld()
                .createJoint(revoluteJointDef);

        // left leg revolute joint
        revoluteJointDef.initialize(leftLeg.getLegBody(), chest.getChestBody(),
                chest.getChestBody().getWorldCenter());
        revoluteJointDef.localAnchorA.set(0f, 0f);
        revoluteJointDef.localAnchorB.set(-0.15f, 0.75f);
        revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
        revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
        leftLegRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld()
                .createJoint(revoluteJointDef);

        // right leg revolute joint
        revoluteJointDef.initialize(rightLeg.getLegBody(), chest.getChestBody(),
                chest.getChestBody().getWorldCenter());
        revoluteJointDef.localAnchorA.set(0f, 0f);
        revoluteJointDef.localAnchorB.set(0.15f, 0.75f);
        revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
        revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
        rightLegRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld()
                .createJoint(revoluteJointDef);

        // left hand revolute joint
        revoluteJointDef.initialize(leftHand.getHandBody(), chest.getChestBody(),
                chest.getChestBody().getWorldCenter());
        revoluteJointDef.localAnchorA.set(0f, 0f);
        revoluteJointDef.localAnchorB.set(-0.25f, 0.1f);
        revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
        revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
        leftHandRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld()
                .createJoint(revoluteJointDef);

        // right hand revolute joint
        revoluteJointDef.initialize(rightHand.getHandBody(), chest.getChestBody(),
                chest.getChestBody().getWorldCenter());
        revoluteJointDef.localAnchorA.set(0f, 0f);
        revoluteJointDef.localAnchorB.set(0.25f, 0.1f);
        revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
        revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
        rightHandRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld()
                .createJoint(revoluteJointDef);
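
    A hedged note on a likely culprit: in Box2D, RevoluteJointDef.initialize(bodyA, bodyB, worldAnchor) computes localAnchorA and localAnchorB from the single shared world anchor, so overwriting both anchors afterwards with hand-picked values can leave the joint referring to two different world points; the solver then tries to pull those points together every step, which shows up as endless drifting. A minimal sketch of deriving both anchors consistently from one world-space point (plain C#; the GetLocalPoint helper is a hypothetical stand-in for b2Body.GetLocalPoint):

        using System;
        using System.Numerics;

        static class JointAnchors
        {
            // Hypothetical stand-in for b2Body.GetLocalPoint(): world -> body-local.
            static Vector2 GetLocalPoint(Vector2 bodyPosition, float bodyAngle, Vector2 worldPoint)
            {
                Vector2 d = worldPoint - bodyPosition;
                float c = MathF.Cos(-bodyAngle), s = MathF.Sin(-bodyAngle);
                return new Vector2(c * d.X - s * d.Y, s * d.X + c * d.Y); // inverse-rotate into body space
            }

            static void Main()
            {
                // One shared world anchor (say, the neck), expressed in both bodies' local frames.
                Vector2 worldAnchor = new Vector2(3.0f, 4.5f);
                Vector2 localA = GetLocalPoint(new Vector2(3.0f, 4.0f), 0f, worldAnchor); // head body
                Vector2 localB = GetLocalPoint(new Vector2(3.0f, 5.0f), 0f, worldAnchor); // chest body
                // Both anchors now name the SAME world point, so the joint starts with zero error.
                Console.WriteLine($"localAnchorA = {localA}, localAnchorB = {localB}");
            }
        }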

    Read the article

  • Draw a never-ending line in XNA

    - by user2236165
    I am drawing a line in XNA which I want to never end. I also have a tool that moves forward along the X axis, and a camera centred on that tool. However, when I reach the end of the viewport, the lines are not drawn any more: at the start the line goes across the whole screen, but as my tool moves forward, we reach the end of the line. Here is the method which draws the lines:

        private void DrawEvenlySpacedSprites(Texture2D texture, Vector2 point1, Vector2 point2, float increment)
        {
            var distance = Vector2.Distance(point1, point2); // the distance between two points
            var iterations = (int)(distance / increment);    // how many sprites will be drawn
            var normalizedIncrement = 1.0f / iterations;     // the Lerp method needs values between 0.0 and 1.0
            var amount = 0.0f;

            if (iterations == 0)
                iterations = 1;

            for (int i = 0; i < iterations; i++)
            {
                var drawPoint = Vector2.Lerp(point1, point2, amount);
                spriteBatch.Draw(texture, drawPoint, Color.White);
                amount += normalizedIncrement;
            }
        }

    Here is the Draw method in Game; the dots are my lines:

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Black);
            nyVector = nextVector(gammelVector);

            GraphicsDevice.SetRenderTarget(renderTarget);
            spriteBatch.Begin();
            DrawEvenlySpacedSprites(dot, gammelVector, nyVector, 0.9F);
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);

            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                    null, null, null, null, camera.transform);
            spriteBatch.Draw(renderTarget, new Vector2(), Color.White);
            spriteBatch.Draw(tool, new Vector2(toolPos.X - (tool.Width / 2),
                    toolPos.Y - (tool.Height / 2)), Color.White);
            spriteBatch.End();

            gammelVector = new Vector2(nyVector.X, nyVector.Y);
            base.Draw(gameTime);
        }

    Here is the nextVector method. It just finds me a new point where the line should be drawn, with a new X coordinate 100 to 200 pixels further on and a random Y coordinate between the old vector's Y coordinate and the height of the viewport:

        Vector2 nextVector(Vector2 vector)
        {
            return new Vector2(vector.X + r.Next(100, 200),
                    r.Next((int)(vector.Y - 100), viewport.Height));
        }

    Can anyone point me in the right direction here? I'm guessing it has to do with the viewport width, but I'm not quite sure how to solve it. Thank you for reading!
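
    One hedged reading of this: the dots are baked into a screen-sized render target that is drawn once at (0, 0), so any dot whose X coordinate exceeds the render target's width can never appear, no matter where the camera moves. A sketch of an alternative that skips the render target entirely: keep the generated points in a list, extend the list whenever the camera nears the last point, and redraw the visible segments in world space every frame under the camera transform (a fragment assuming the asker's spriteBatch, camera, dot and nextVector members):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Field assumed to live in the Game class alongside the existing ones.
        List<Vector2> points = new List<Vector2>();

        void ExtendLineIfNeeded(float cameraRightEdgeX)
        {
            if (points.Count == 0) points.Add(Vector2.Zero);
            // Generate ahead of the camera so the line never "runs out".
            while (points[points.Count - 1].X < cameraRightEdgeX + 400)
                points.Add(nextVector(points[points.Count - 1]));
        }

        void DrawLine()
        {
            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                              null, null, null, null, camera.transform);
            for (int i = 0; i < points.Count - 1; i++)
                DrawEvenlySpacedSprites(dot, points[i], points[i + 1], 0.9f);
            spriteBatch.End();
        }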

    Read the article

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What should the texture coordinates be? I am using GL_TEXTURE_3D as the target. I tried using the same u, v coordinates as for a 2D texture, with the z component ranging from 0 to the depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers applied to each face of the cube, not one layer per face.

    Read the article

  • Rendering performance of FlasCC + UDK compared to Stage3D, and to UDK on Windows?

    - by Arthur Wulf White
    Adobe recently released the Flash C++ Compiler, which UDK uses to target Flash Player, so developers can now use UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how noticeable a difference in rendering speed would it make? Is there any benchmark you could propose that would allow a fair comparison? I am asking this to help myself understand the performance consequences of deciding to use UDK in a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better; I am only interested in optimizing rendering speed in a 3D browser game with Flash.

    Read the article

  • Should I use procedural animation?

    - by user712092
    I have started to make a fantasy 3D first-person swordplay game and I want to add animations. I don't want to animate everything by hand because it would take a lot of time, so I decided to use procedural animation. I would certainly use IK (starting with something simple, like reaching for an object with a hand). I also assume procedural generation of animations will leave fewer animations to do by hand (I can blend animations, and so on). I also want a planner for animation which would simplify complex animations: those which can be split into a sequence (run and then jump, jump and then roll) or which are separable (legs running while the torso swings a sword). For example, I want a character to chop the head off a big troll. If the troll crouches, the character would just chop its head off; if it is standing, he would climb onto the troll. I know that I would have to describe the state ("troll is low", "troll is high", "chop troll head", ...), which would imply what regions the animation will be in (if there is a gap between them, the character would jump), which in turn would imply where the character can place his legs and hands, or would select a predefined animation. My main goal is simplicity of coding, but I also want my game to look cool. Is it worth using procedural animation, or does it cause more trouble than it solves? (There can be a lot of twiddling...) I am using the Blender Game Engine (therefore Python for scripting, and Bullet for physics).

    Read the article

  • How do I run the pixel shader effect?

    - by Yashwinder
    Below is the code for my pixel shader, which I render after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which should make the image (displayed with the help of a quad) produce a transition effect. My pixel shader currently gives a static result, and now I want it to animate. For this I have to add a progress variable in my pixel shader and set it through the constant-table function, i.e. constantTable.SetValue(D3DDevice, "progress", progress). I am having problems using this function for progress in my program. Does anybody know how to set this variable? My new pixel shader code is:

        float progress : register(C0);
        sampler2D implicitInput : register(s0);
        sampler2D oldInput : register(s1);

        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float4 Color : COLOR0;
            float2 UV : TEXCOORD0;
        };

        float4 Blinds(float2 uv)
        {
            if (frac(uv.y * 5) < progress)
            {
                return tex2D(implicitInput, uv);
            }
            else
            {
                return tex2D(oldInput, uv);
            }
        }

        // Pixel shader entry point (signature restored; the original snippet was cut off here)
        float4 main(VS_OUTPUT input) : COLOR0
        {
            return Blinds(input.UV);
        }
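
    A hedged sketch of the host-side update, reusing the constant-table call quoted above; the timing fields are hypothetical, and the exact SetValue overload depends on the wrapper in use (SlimDX, Managed DirectX, etc.), so treat the call itself as the question's own:

        // Minimal sketch: advance a 0..1 progress value each frame and push it
        // to the shader's C0 register through the constant table.
        float elapsedSeconds = 0f;        // hypothetical accumulator
        const float durationSeconds = 2f; // hypothetical transition length

        void UpdateTransition(float dt)
        {
            elapsedSeconds += dt;
            float progress = Math.Min(elapsedSeconds / durationSeconds, 1f); // clamp to [0, 1]
            constantTable.SetValue(device, "progress", progress);            // the call from the question
        }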

    Read the article

  • LWJGL: Determining whether or not a polygon is on-screen

    - by Brandon oubiub
    Not sure whether this is an LWJGL or a math question. I want to check whether a shape is on-screen, so that I don't have to render it if it isn't. First of all, is there any simple way to do this that I am overlooking, like some method I haven't found? I'm going to assume there isn't. I tried using my trigonometry skills, but it is hard to do because of how glRotate also distorts the image a little for perspective and realism. Alternatively, is there any way to easily determine whether a ray starting from the camera and going outward in a straight line intersects a shape? (I can probably do it with my math skills, but is there an easier way?) By the way, I can easily determine the angle at which the camera is facing around the x and y axes. EDIT: Or, possibly, I could get the angles of a vector from the camera to the object and compare those angles to my camera angles. But I have a feeling that the distortions from glRotate and glTranslate would be an issue. I'll try it though.
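
    For what it's worth, a rough visibility test that sidesteps the rotation matrices: treat the camera as a cone, take the vector from the camera to the object's center, and compare the angle between that vector and the camera's forward direction against the field of view, widened by the angle the object's bounding radius subtends. A minimal sketch of the math (plain C#; hypothetical names, not an LWJGL API, and a cone is a conservative stand-in for the real frustum):

        using System;
        using System.Numerics;

        static class VisibilityTest
        {
            // Returns true if a sphere (center, radius) can intersect the view cone.
            static bool InViewCone(Vector3 cameraPos, Vector3 cameraForward,
                                   float halfFovRadians, Vector3 center, float radius)
            {
                Vector3 toObject = center - cameraPos;
                float distance = toObject.Length();
                if (distance <= radius) return true; // camera is inside the sphere

                float cosAngle = Vector3.Dot(Vector3.Normalize(toObject),
                                             Vector3.Normalize(cameraForward));
                cosAngle = MathF.Min(1f, MathF.Max(-1f, cosAngle));
                // Widen the cone by the angle the sphere's radius subtends at this distance.
                float margin = MathF.Asin(MathF.Min(radius / distance, 1f));
                return MathF.Acos(cosAngle) <= halfFovRadians + margin;
            }

            static void Main()
            {
                bool visible = InViewCone(Vector3.Zero, Vector3.UnitZ,
                                          MathF.PI / 4, new Vector3(0, 0, 10), 1f);
                Console.WriteLine(visible); // True: straight ahead, well inside the cone
            }
        }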

    Read the article

  • Adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea how to modify it so that it does (or how to use it correctly)? I've been beating my head against this problem for a couple of hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.
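
    One hedged approach, in case it helps: instead of orbiting a fixed point, re-derive the orbit from the parent body's current position every frame, so a moon orbits its (moving) planet the same way the planet orbits the sun. A minimal Unity-style sketch (a hypothetical component, not the wiki script itself):

        using UnityEngine;

        // Attach to a moon; 'center' is the planet it orbits (which may itself be orbiting).
        public class SimpleOrbit : MonoBehaviour
        {
            public Transform center;          // body to orbit, assigned in the inspector
            public float radius = 5f;         // orbital radius
            public float angularSpeed = 1f;   // radians per second
            float angle;

            void Update()
            {
                angle += angularSpeed * Time.deltaTime;
                // Offset from the parent's CURRENT position, so the orbit follows it.
                Vector3 offset = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;
                transform.position = center.position + offset;
            }
        }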

    Read the article

  • Importing Models from Maya to OpenGL

    - by Mert Toka
    I am looking for ways to import models into my game project. I am using Maya as my modelling software and GLUT for my game's windowing. I found a great parser that imports all the textures and normal vectors, but it is compatible with .obj files from 3ds Max. I tried to use it with Maya OBJs, and it turned out that Maya's .obj files are a bit different from 3ds Max's, so it cannot parse them. If you know any way to convert Maya .obj files to 3ds Max .obj files, that would be acceptable, as would a new parser for Maya .obj files.

    Read the article

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but I am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z axis never rotates (see the before and after screenshots, omitted here). With the code below I always get 0 for rotationVector.z. What am I missing?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++)
        {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot)
            {
                closest_axis = axis;
                highest_dot = dot;
            }
        }

        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector),
                              GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x,
                                                                    rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);
        return baseTransform;

    (Implementation based on this post; "before 3D rotation" and "after 3D rotation" screenshots omitted.)
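
    A hedged observation that may explain the zero: the rotation axis above is a cross product with lookAtVector = (0, 0, 1), and any cross product with the z unit vector lies in the xy-plane by construction:

        $(x,\,y,\,z) \times (0,\,0,\,1) = (y \cdot 1 - z \cdot 0,\; z \cdot 0 - x \cdot 1,\; x \cdot 0 - y \cdot 0) = (y,\,-x,\,0)$

    So rotationVector.z = 0 is the expected output of this formula: the axis of the shortest rotation carrying one vector onto (0, 0, 1) is always perpendicular to (0, 0, 1), which means this construction alone can never produce a roll about the z axis.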

    Read the article

  • Retrieving model position after applying model transforms in XNA

    - by Glen Dekker
    For this method, which the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this?

        public void DrawModel(Camera camera)
        {
            Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1));
            Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix
                    * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2;

            Matrix[] modelTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            if (camera.getDistanceFromPlayer(position + position1) > 3000)
                return;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                }
                mesh.Draw();
            }
        }
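
    If the goal is just the model's final world position, one hedged option is to read the translation component back out of the same combined matrix the method already builds (XNA's Matrix exposes this as its Translation property), or equivalently push the origin through the transform. A fragment assuming the locals from DrawModel above:

        Matrix world = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix;

        Vector3 worldPosition = world.Translation;                  // translation part of the matrix
        Vector3 sameThing = Vector3.Transform(Vector3.Zero, world); // origin pushed through the transform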

    Read the article

  • SpriteBatch.end() generating null pointer exception

    - by odaymichael
    I am getting a null pointer exception using libGDX, which the debugger points at the SpriteBatch.end() line. I was wondering what could cause this. Here is the offending code block, specifically the batch.end() line:

        batch.begin();
        for (int j = 0; j < 3; j++)
            for (int i = 0; i < 3; i++)
                if (zoomgrid[i][j].getPiece().getImage() != null)
                    zoomgrid[i][j].getPiece().getImage().draw(batch);
        batch.end();

    The top of the stack is actually a line that calls lastTexture.bind(); in the flush() method of com.badlogic.gdx.graphics.g2d.SpriteBatch. I appreciate any input; let me know if I haven't included enough information.

    Read the article

  • Game window systems and internal frames

    - by 2080
    I don't know if this is a valid question, but: what kind of window manager do games use that have internal frames (frames inside frames)? Does this differ between programming languages (are, e.g., the AWT/Swing libraries used in Java to manage these and other graphical elements, such as buttons, or is that too restrictive in terms of speed and graphical possibilities)? A specific example would be EVE Online, where the client can use the in-game windows like on a normal desktop.

    Read the article

  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))". Commenting out the FT_Init_FreeType call removes the error and my game starts just fine. I'm wondering whether it's my code or something to do with loading the DLL file. My code:

        #include "SpaceGame.h"
        #include <ft2build.h>
        #include FT_FREETYPE_H

        // FreeType test
        FT_Library library;

        Game::Game(int Width, int Height)
        {
            // FreeType
            FT_Error error = FT_Init_FreeType(&library);
            if (error)
            {
                cout << "Error occurred during FT initialisation" << endl;
            }

    And my current use of the FreeType2 files: inside my bin folder (where the debug .exe is located) are freetype6.dll, libfreetype.dll.a and libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folders of the FreeType 2.3.5.1 version, and added the compiler flag -lfreetype. My program starts perfectly fine if I comment out the FT_Init call, which means the includes and library files should be fine. I can't find a solution to my problem and Google isn't helping, so any help would be greatly appreciated.

    Read the article

  • Car crash Android game

    - by Axarydax
    I'd like to make a simple 2D car crashing game, where the player would drive his car into moving traffic and try to cause as much damage as possible in each level (some Burnout games had a mode like this). The physics part of the game is the most important; I can worry about graphics later. Would an engine like emini or Box2D work for this kind of game? Would Android devices have enough power to handle it? For example, if there were about 20 cars colliding, along with some buildings, it would be nice if I could get 20 fps.

    Read the article

  • What is involved in a simple UDP game?

    - by acidzombie24
    I once tried to write a simple game with UDP in a week as a throwaway test. It went horribly, and I threw it away early. The main problem I had was restoring the game state of all players/enemies/objects to an old state and fast-forwarding the game to the point in time the player is playing (i.e. half a second before a jump; being a little early or late can make the player miss the jump). Maybe this method is not the easiest way? I suspect it is, but I designed it wrong from the beginning and only realized that at the end of the second day (so I didn't learn too much or waste that much time). For myself and others: what is involved in a simple UDP game, and how do I write one? Or, how do I solve the prediction problem of restoring to a state properly? I'll mark this as CW because I know there will be lots of helpful answers.
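
    For the rewind problem specifically, one common pattern (a hedged sketch, not the only way) is to keep a ring buffer of recent game states keyed by tick; when input arrives stamped with an old tick, restore the nearest saved state and re-simulate forward to the present. The GameState and Input types below are placeholders for whatever the game actually tracks:

        using System;

        class Input
        {
            public static readonly Input None = new Input();
        }

        class GameState
        {
            public GameState Clone() => (GameState)MemberwiseClone();
            // Placeholder step function: advance one fixed tick under the given input.
            public GameState Simulate(Input input) => Clone();
        }

        class RewindingServer
        {
            const int Capacity = 64; // ~1 second of history at 60 ticks/s
            readonly GameState[] history = new GameState[Capacity];
            GameState current = new GameState();
            int currentTick;

            public void Step(Input localInput)
            {
                history[currentTick % Capacity] = current.Clone(); // snapshot before advancing
                current = current.Simulate(localInput);
                currentTick++;
            }

            // Called when a UDP packet arrives carrying input stamped with an old tick.
            public void OnLateInput(int inputTick, Input input)
            {
                if (inputTick <= currentTick - Capacity) return;   // too old to rewind to
                GameState state = history[inputTick % Capacity].Clone();
                for (int t = inputTick; t < currentTick; t++)      // replay back up to "now"
                {
                    state = state.Simulate(t == inputTick ? input : Input.None);
                    history[(t + 1) % Capacity] = state.Clone();
                }
                current = state;
            }
        }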

    Read the article

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I detect the rotation by overriding WillRotate in my UIViewController, and I correctly re-set up all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version than in the portrait version, as 10x-magnified closeups show (portrait before rotating, landscape after; images omitted). In both cases I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixel-wise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated at the new size. When working on desktop apps in Direct3D 11 (SharpDX), I would call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);

        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing things here and there without really knowing what is going on. There is a similar question which has no answers, but I don't know if they're experiencing the same problem as I am. Is my supposition that recreating the framebuffer is necessary correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?

    Read the article

  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size; some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            var targetSize = (sphere.radius * 2);
            var newScale = new Vector3
            {
                x = targetSize * currentScale.x / currentSize.x,
                y = targetSize * currentScale.y / currentSize.y,
                z = targetSize * currentScale.z / currentSize.z
            };

            Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
                    objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x);
            // DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1),
            //     newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
            // RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2),
            //     newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017

            objectToScale.transform.localScale = newScale;
        }

    And the supporting extension method:

        public static Bounds GetMeshHierarchyBounds(this GameObject go)
        {
            var bounds = new Bounds(); // not used as such, but a struct needs to be instantiated
            if (go.renderer != null)
            {
                bounds = go.renderer.bounds; // make sure the parent is included
                Debug.Log("Found parent bounds: " + bounds);
                //bounds.Encapsulate(go.renderer.bounds);
            }
            foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
            {
                Debug.Log("Found {0} bounds are {1}", c.name, c.bounds);
                if (bounds.size == Vector3.zero)
                    bounds = c.bounds;
                else
                    bounds.Encapsulate(c.bounds);
            }
            return bounds;
        }

    After the re-scale, there doesn't seem to be any consistency to the results: some objects with completely uniform scales (x, y, z) seem to resize correctly, but others don't. It's one of those things I've been trying to fix for so long that I've lost all grasp on the logic. Any help would be appreciated!
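
    A hedged thought on the inconsistency: renderer bounds are world-space AABBs, so they already include each object's current scale and rotation, and dividing per axis mixes local scale with world size. A sketch of a uniform alternative that scales all axes by one factor derived from the largest world dimension, so the whole hierarchy fits inside the sphere while keeping its proportions (assumes the asker's GetMeshHierarchyBounds extension):

        private void ScaleToColliderUniform(GameObject objectToScale, SphereCollider sphere)
        {
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            float largest = Mathf.Max(currentSize.x, currentSize.y, currentSize.z);
            if (largest <= 0f) return; // nothing to scale against

            // One factor for every axis keeps proportions and avoids per-axis drift.
            float factor = (sphere.radius * 2f) / largest;
            objectToScale.transform.localScale *= factor;
        }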

    Read the article

  • How do multi-platform games usually store save data?

    - by PixelPerfect3
    I realize this is a bit of a broad question, but I was wondering if there is a "standard" in the industry when it comes to storing save data for games (and whether it differs across platforms: Xbox/PS/PC/Mac/Android/iOS). For example, games like Assassin's Creed or The Walking Dead are on multiple platforms and usually have to save a fair amount of information about the player and their actions. Do they use something like XML files, databases, or just straight binary dumps? How much does it differ from platform to platform? I would appreciate it if someone with experience in the game industry would answer this.

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader, to check that everything is OK. I commented out my normal-map code, just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ It looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice that I commented out the normal-map code so I could debug the normal map as a diffuse texture (it is embedded in my C++ as a string literal; the "\n \" line continuations are stripped here for readability):

        #version 330

        layout(std140) uniform;

        const int MAX_LIGHTS = 8;

        struct Light
        {
            vec4 mLightColor;
            vec4 mLightPosition;
            vec4 mLightDirection;

            int mLightType;
            float mLightIntensity;
            float mLightRadius;
            float mMaxDistance;
        };

        uniform UnifLighting
        {
            vec4 mGamma;
            vec3 mViewDirection;
            int mNumLights;

            Light mLights[MAX_LIGHTS];
        } Lighting;

        uniform UnifMaterial
        {
            vec4 mDiffuseColor;
            vec4 mAmbientColor;
            vec4 mSpecularColor;
            vec4 mEmissiveColor;

            bool mHasDiffuseTexture;
            bool mHasNormalTexture;
            bool mLightingEnabled;
            float mSpecularShininess;
        } Material;

        uniform sampler2D unifDiffuseTexture;
        uniform sampler2D unifNormalTexture;

        in vec3 frag_position;
        in vec3 frag_normal;
        in vec2 frag_texcoord;
        in vec3 frag_tangent;
        in vec3 frag_bitangent;

        out vec4 finalColor;

        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm)
        {
            vec3 viewDirection = normalize(Lighting.mViewDirection);
            vec3 halfAngle = normalize(dirToLight + viewDirection);

            float angleNormalHalf = acos(dot(halfAngle, normalize(normal)));
            float exponent = angleNormalHalf / Material.mSpecularShininess;
            exponent = -(exponent * exponent);

            gaussianTerm = exp(exponent);
        }

        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal)
        {
            if (light.mLightType == 1) // point light
            {
                vec3 positionDiff = light.mLightPosition.xyz - frag_position;
                float dist = max(length(positionDiff) - light.mLightRadius, 0);

                float attenuation = 1 / ((dist / light.mLightRadius + 1) * (dist / light.mLightRadius + 1));
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0);

                vec3 dirToLight = normalize(positionDiff);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 2) // directional light
            {
                vec3 dirToLight = normalize(light.mLightDirection.xyz);
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1);

                float gaussianTerm = 0.0;
                if (angleNormal > 0.0)
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm);

                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) +
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor);
            }
            else if (light.mLightType == 4) // ambient light
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor;
            else
                return vec4(0.0);
        }

        void main()
        {
            vec4 diffuseTexture = vec4(1.0);
            if (Material.mHasDiffuseTexture)
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord);

            vec3 normal = frag_normal;
            if (Material.mHasNormalTexture)
            {
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0);
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0);
                // mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal));
                // normal = tangentToWorldSpace * normalTangentSpace;
            }

            if (Material.mLightingEnabled)
            {
                vec4 accumLighting = vec4(0.0);

                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++)
                    accumLighting += Material.mEmissiveColor * diffuseTexture +
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal);

                finalColor = pow(accumLighting, Lighting.mGamma);
            }
            else
            {
                finalColor = pow(diffuseTexture, Lighting.mGamma);
            }
        }

    Here is my wrapper around a texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth,
                                     uint32_t textureHeight, TextureFormat textureFormat,
                                     TextureType textureType, Logger& logger)
            : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger);

            GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB :
                                     textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED);
            glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0,
                         glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger);
            glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger);
        }

        OpenGLTexture::~OpenGLTexture()
        {
            glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger);
        }

    And here is the sampler I create, which is shared between the diffuse and normal textures:

        // texture sampler setup
        glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger);
        glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger);

        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"),
                    OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger);
        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"),
                    OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger);
        SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks weird. Why is this?
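
    A hedged sanity check on the smurf-blue result: a tangent-space normal map stores $RGB = (n + 1)/2$, and flat regions have $n \approx (0, 0, 1)$, so

        $RGB = \frac{(0,\,0,\,1) + 1}{2} = (0.5,\; 0.5,\; 1.0)$

    i.e. most texels decode to light blue. A blue-tinted model when the normal map is bound as the diffuse texture is therefore the expected picture, not by itself evidence that the texture or sampler is broken; the commented-out tangent-to-world transform is where the real normal-map debugging would continue.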

    Read the article

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);
            if (colour.r == sourceColours[0].r &&
                colour.g == sourceColours[0].g &&
                colour.b == sourceColours[0].b)
                return targetColours[0];
            return colour;
        }

    What I would like to do is have the function take in two textures, a default table and a lookup table (both with the same dimensions), grab the current pixel, find the location (x, y) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at (x, y). I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coordinates in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
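
    Searching the default table for a matching colour per pixel would be very slow in a shader; a hedged alternative commonly used for palette swaps is to skip the search entirely by encoding the palette index in the sprite itself (e.g. in the red channel) and sampling a lookup texture with that index as the texture coordinate. A sketch of the C#/XNA side that builds and binds such a 256 x 1 lookup texture (the palette contents are hypothetical):

        // Build a 256x1 lookup texture whose texel at x = index holds the replacement
        // colour; the shader can then do tex2D(lookup, float2(indexFromSprite, 0)).
        Color[] palette = new Color[256];
        for (int i = 0; i < palette.Length; i++)
            palette[i] = Color.Magenta;          // hypothetical: fill with the real palette
        palette[17] = new Color(255, 200, 40);   // e.g. entry 17 remapped to gold

        Texture2D lookup = new Texture2D(GraphicsDevice, 256, 1);
        lookup.SetData(palette);
        GraphicsDevice.Textures[1] = lookup;     // visible to the shader as register s1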

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram (omitted here), I have the extreme left and right points, -2 and 4 in this case, so I can calculate the width, which is 6.

    What we know:
      - The number of partitions: 3 in this case
      - The partition number at any point, i.e. which one is the 1st, 2nd or 3rd partition (numbered starting from the left)

    What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So basically I just want a generalized formula for the position of the average of any partition.
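
    A hedged worked formula, assuming "average" means the midpoint of a partition: with left edge $L$, right edge $R$ and $n$ equal partitions, each partition has width $w = (R - L)/n$, and the $k$-th partition (counting from 1 at the left) has its midpoint at

        $x_k = L + \left(k - \tfrac{1}{2}\right) w$

    For the example above ($L = -2$, $R = 4$, $n = 3$): $w = 2$, so the midpoints sit at $-1$, $1$ and $3$.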

    Read the article

  • Game Center: Leaderboard score inconsistencies

    - by Hasyimi Bahrudin
    Background: I'm currently developing a simple library that mirrors Game Center's functionality locally. Basically, it manages achievements and leaderboards, and optionally syncs them with Game Center. So if the game is not GC-enabled, it will still have achievements and leaderboards (stored in a plist), though of course the leaderboards will then only contain the local player's scores (which is kind of useless, I know :P). Problem: I have coded both the achievements and the leaderboards subsystems. The achievements subsystem has already been tested and works. I'm currently testing the leaderboards subsystem with multiple test user accounts. I loaded the test app on a device and on the simulator, logged in with two different user accounts, and performed these steps. I first used the device to upload a score. Then I ran the simulator, and the score submitted by the user on the device was shown, which is cool. Then I used the simulator to upload a score, but on the device only one score was still listed. I checked in the Game Center app (to see whether the bug lay in my code) and got the same thing: under "All players" there is only one score on the device, but there are two scores on the simulator. I wanted to make sure the simulator was not causing this, so I swapped the users on the device and the simulator, and the result was still the same. In other words, the first user is oblivious to the second user's score, but the second user can see the first user's score. Then I tried a third user. The result: the third user can only see the scores of the first user and himself; the second user still sees the scores of the first user and himself; the first user only sees his own score. Now here comes the weird part. I then made the first and second users befriend each other. The result: under "Friends", the first user can see the second user's score, but under "All Players", the first user's score is the only one listed. (Screenshots of what each user sees omitted.) So, is this normal when using sandboxed GC accounts? Is this behavior documented somewhere by Apple?

    Read the article

  • Adobe AIR turn-based multiplayer game: sockets vs. HTTP bandwidth

    - by Arin Aivazian
    I am developing an Adobe AIR multiplayer game for iPad. It is turn-based and not realtime; it is like a checkers game. I want to use a client-server model. I have found two options for connecting to the server so far: a socket connection and HTTP requests. My question is: is the bandwidth requirement for a socket connection vs. HTTP requests different? I need the game to work over very low-speed internet connections.

    Read the article

  • Early Z culling - Ogre

    - by teodron
    This question concerns how to make this "pixel filter" work within an Ogre-based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

        lighting off
        colour_write off
        shading flat

    The second pass is the one that employs heavy pixel-shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way. From one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side, they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would need to be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article
