Search Results

Search found 26093 results on 1044 pages for 'career development'.


  • How to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but with the default effect XNA creates you have to do texture mapping or the model appears red, because of these lines in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1,0,0,1);
            return output;
        }

    To make the model appear the way it does under BasicEffect, I have to do texture mapping with UV coordinates, but my model has no UV coordinates assigned (or its UV coordinates were not exported), and if I do texture mapping I get an error. (I do the texture mapping with this line in the vertex shader function, plus the other necessary code: output.UV = input.UV.) I have many of these models and want to work with them; they are in .FBX format. When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, skip the texture mapping (because my models have no UV coordinates), and have the model look the way it does under BasicEffect?
    My complete code: http://www.mediafire.com/?4jexhd4ulm2icm2
    The model rendered with BasicEffect: http://i.imgur.com/ygP2h.jpg?1
    And this is my code for drawing, with or without BasicEffect, inside my Draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
            Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
            Matrix.CreateTranslation(Position);
        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;
                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }
            mesh.Draw();
        }

    setEffectParameter is another method that sets an effect parameter when I use my custom effect.
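    For reference, lighting alone needs only normals, not UVs: the standard ambient-plus-Lambertian model is

    $$ c = k_a + k_d \,\max\bigl(0,\ \mathbf{n}\cdot\mathbf{l}\bigr) $$

    where $\mathbf{n}$ is the interpolated surface normal and $\mathbf{l}$ is the direction from the surface point to the light. No UV coordinates appear anywhere in the formula, so a pixel shader built on it can ignore texture coordinates entirely.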

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

        std::string op, param1, param2;
        //parse line, identify op, param1, param2
        ...
        //call command specific code
        if(op == "MOV") ExecuteMov(AsNumber(param1));
        else if(op == "ROT") ExecuteRot(AsNumber(param1));
        else if(op == "SZE") ExecuteSze(AsNumber(param1));
        else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
        else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
        else if(op == "SET") ExecuteSet(param1, AsNumber(param2));
        else if(op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
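    The usual answer is a dispatch table built once at startup, so each lookup is an average O(1) hash instead of a comparison chain that grows with the command set. A minimal sketch (C++17; the opcode names and two-string-parameter shape are borrowed from the snippet above, and the handler bodies are placeholders):

    ```cpp
    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Each handler receives the raw parameter strings; a real interpreter
    // would convert them with something like AsNumber() as in the post.
    using Handler = std::function<void(const std::string&, const std::string&)>;

    int main() {
        // Built once at startup, then reused for every parsed line.
        std::unordered_map<std::string, Handler> dispatch = {
            {"MOV", [](const std::string& a, const std::string&) {
                std::cout << "move by " << a << "\n"; }},
            {"POS", [](const std::string& a, const std::string& b) {
                std::cout << "position " << a << "," << b << "\n"; }},
        };

        std::string op = "POS", param1 = "10", param2 = "20"; // parsed elsewhere
        if (auto it = dispatch.find(op); it != dispatch.end())
            it->second(param1, param2);
        else
            std::cerr << "unknown opcode: " << op << "\n";
    }
    ```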

    Read the article

  • How can I attach a model to the bone of another model?

    - by kaykayman
    I am trying to attach one animated model to one of the bones of another animated model in an XNA game. I've found a few questions/forum posts/articles online which explain how to attach a weapon model to the bone of another model (which is analogous to what I'm trying to achieve), but they don't seem to work for me. So as an example: I want to attach Model A to a specific bone in Model B.
    Question 1. As I understand it, I need to calculate the transforms which are applied to the bone on Model B and apply these same transforms to every bone in Model A. Is this right?
    Question 2. This is my code for calculating the transforms on a specific bone:

        private Matrix GetTransformPaths(ModelBone bone)
        {
            Matrix result = Matrix.Identity;
            while (bone != null)
            {
                result = result * bone.Transform;
                bone = bone.Parent;
            }
            return result;
        }

    The maths of matrices is almost entirely lost on me, but my understanding is that the above will work its way up the bone structure to the root bone, and my end result will be the transform of the original bone relative to the model. Is this right?
    Question 3. Assuming that this is correct, I then expect that I should either apply this to each bone in Model A, or in my Draw() method:

        private void DrawModel(SceneModel model, GameTime gametime)
        {
            foreach (var component in model.Components)
            {
                Matrix[] transforms = new Matrix[component.Model.Bones.Count];
                component.Model.CopyAbsoluteBoneTransformsTo(transforms);
                Matrix parenttransform = Matrix.Identity;
                if (!string.IsNullOrEmpty(component.ParentBone))
                    parenttransform = GetTransformPaths(model.GetBone(component.ParentBone));
                component.Player.Update(gametime.ElapsedGameTime, true, Matrix.Identity);
                Matrix[] bones = component.Player.GetSkinTransforms();
                foreach (SkinnedEffect effect in mesh.Effects)
                {
                    effect.SetBoneTransforms(bones);
                    effect.EnableDefaultLighting();
                    effect.World = transforms[mesh.ParentBone.Index] *
                        Matrix.CreateRotationY(MathHelper.ToRadians(model.Angle)) *
                        Matrix.CreateTranslation(model.Position) *
                        parenttransform;
                    effect.View = getView();
                    effect.Projection = getProjection();
                    effect.Alpha = model.Opacity;
                }
            }
            mesh.Draw();
        }

    I feel as though I have tried every conceivable way of incorporating the parenttransform value into the draw method; the above is my most recent attempt. Is what I'm trying to do correct? And if so, is there a reason it doesn't work? The above Draw method seems to transpose the models' x/z position - but even at these wrong positions, they do not account for the animation of Model B at all.
    Note: As will be evident from the code, my "model" is comprised of a list of "components". It is these "components" that correspond to a single "Microsoft.Xna.Framework.Graphics.Model".

    Read the article

  • Tool to convert Textures to power of two?

    - by 3nixios
    I'm currently porting a game to a new platform. The problem is that the old platform accepted non-power-of-two textures and this new platform doesn't. To add to the headache, the new platform has much less memory, so we want to use the tools provided by the vendor to compress the textures; those tools, of course, only accept power-of-two textures. The current workflow is to convert the non-power-of-two textures to DDS with 'texconv', then use the vendor's compression tools in a batch. So, does anyone know of a tool to convert textures to their nearest power-of-two counterparts? Thanks
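    For anyone scripting the conversion themselves, rounding a dimension up to the next power of two is a few lines of bit manipulation (a standalone sketch, not part of texconv or any vendor tool):

    ```cpp
    #include <cstdint>
    #include <iostream>

    // Round v up to the next power of two (v = 0 yields 1).
    // Classic bit-smearing trick for 32-bit values.
    uint32_t NextPowerOfTwo(uint32_t v) {
        if (v == 0) return 1;
        v--;            // so that exact powers of two map to themselves
        v |= v >> 1;    // smear the highest set bit downwards...
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;   // ...until all lower bits are set
        return v + 1;   // one past an all-ones pattern is a power of two
    }

    int main() {
        std::cout << NextPowerOfTwo(640) << "\n"; // 1024
        std::cout << NextPowerOfTwo(512) << "\n"; // 512
    }
    ```

    Rounding to the nearest power of two instead would compare this result against the next power down and pick whichever is closer.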

    Read the article

  • What is the easiest and shortest way to draw a 2D line in C/C++?

    - by Mike
    I am fairly new to C/C++, but I do have experience with DirectX and OpenGL in Java and C#. My goal is to create a 2D game in C with under 2 pages of code. Most of what I have seen requires 3 pages of code just to get a window running. I would like to know the shortest code to get a window running where I can draw lines. I believe this can be done in fewer lines with OpenGL than with DirectX. Is there maybe an API or framework I can use to shorten it more? Also, it would be nice if the solution were cross-platform compatible.
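    As a rough illustration of how short this can get with a cross-platform windowing library (GLFW paired with legacy fixed-function OpenGL for brevity; the library choice is a suggestion, not something from the post):

    ```cpp
    // Build (Linux example): g++ line.cpp -lglfw -lGL
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* win = glfwCreateWindow(640, 480, "Line", nullptr, nullptr);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);

        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);
            // Immediate mode: coordinates are normalized device space (-1..1).
            glBegin(GL_LINES);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f(0.5f, 0.5f);
            glEnd();
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
    }
    ```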

    Read the article

  • Making a scrolling background in a stacking game

    - by David Dimalanta
    Hmmm... Is it a good idea to use a LibGDX parallax background for making a stacking game (i.e. PAPA STACKer Lite)? For example, I start by dragging and dropping the blocks; then, when the stack reaches the top of the screen, the view automatically scrolls up to the next area where space is left. Aside from that, does it also involve the camera code (OrthographicCamera), so that the screen size appears to be 720x1280 when it's actually, say, 1440x2560? And another thing: can the background scrolling either run from start to finish or scroll infinitely?

    Read the article

  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be PNG, JPG, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • Detect collision from a particular side

    - by Fabián
    I'm making a platform sidescrolling game. All I want to do is to detect if my character is on the floor:

        function OnCollisionStay (col : Collision) {
            if (col.gameObject.tag == "Floor") {
                onFloor = true;
            } else {
                onFloor = false;
            }
        }

        function OnCollisionExit (col : Collision) {
            onFloor = false;
        }

    But I know this isn't an accurate way to do it. If I hit a cube tagged "Floor" while in the air (no matter whether with the character's feet or head), I would be able to jump. Is there a way to use the same box collision to detect whether I'm touching something from a specific side?
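    The usual refinement is to look at the contact normal rather than the tag alone: a contact only counts as "floor" when its normal points roughly upward (Unity exposes the normals through Collision.contacts). The idea, sketched generically in C++ with an assumed 45-degree slope limit:

    ```cpp
    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // A contact is "ground" if its unit normal is within ~45 degrees of
    // world up. Normals are assumed to point from the other collider
    // toward the character.
    bool IsGroundContact(const Vec3& contactNormal) {
        const Vec3 up{0.0f, 1.0f, 0.0f};
        const float kCosMaxSlope = std::cos(45.0f * 3.14159265f / 180.0f);
        return Dot(contactNormal, up) >= kCosMaxSlope;
    }

    int main() {
        Vec3 floorHit{0.0f, 1.0f, 0.0f};    // feet landing on top of a cube
        Vec3 ceilingHit{0.0f, -1.0f, 0.0f}; // head bumping it from below
        return IsGroundContact(floorHit) && !IsGroundContact(ceilingHit) ? 0 : 1;
    }
    ```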

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view; pardon the gap in the middle - I drew one side and mirrored it). The triangle sizes are chosen so that all are approximately the same size when projected.
    Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain.
    Obviously this doesn't address terrain with overhangs, etc., but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture.
    The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height-map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
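    Keeping the projected triangle size roughly constant means each row's spacing must grow in proportion to its distance from the camera, since a feature of size k*d at distance d subtends a constant angle. A small sketch of that row-spacing computation (an illustration of the idea, not the poster's code):

    ```cpp
    #include <iostream>
    #include <vector>

    // Distances of vertex rows from the camera such that each row's spacing
    // is a fixed fraction k of its distance; the rows therefore form a
    // geometric progression with ratio (1 + k).
    std::vector<float> RowDistances(float nearDist, float farDist, float k) {
        std::vector<float> rows;
        for (float d = nearDist; d < farDist; d += k * d) // spacing = k * distance
            rows.push_back(d);
        rows.push_back(farDist);
        return rows;
    }

    int main() {
        // 10% of distance per row: rows bunch up near the camera,
        // spread out toward the horizon.
        for (float d : RowDistances(1.0f, 1000.0f, 0.1f))
            std::cout << d << "\n";
    }
    ```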

    Read the article

  • How can I make the camera return to the beginning of the terrain when it reaches the end?

    - by wbaccari
    How can I make the camera return to the beginning of the terrain when it reaches the end? I tried using ICameraSceneNode::setPosition():

        if (camera->getPosition().X > 1200.f)
            camera->setPosition(vector3df(1.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().X < 0.f)
            camera->setPosition(vector3df(1199.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().Z > 1200.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1.f));
        if (camera->getPosition().Z < 0.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1199.f));

    It seems to work fine with a flat terrain (one shade of grey in the heightmap), but it starts to produce strange behavior as soon as I try to add some hills.
    Edit: The setPosition() call seems to perform a translation of the camera toward the new position, therefore the camera stops at the first obstacle it encounters on its way.
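    A tidier way to express the wrap itself is fmod, which handles both edges of each axis in one step. A generic sketch using the 1200-unit extent from the snippet above (it does not address the collision response described in the edit - if a collision response animator is attached to the camera, teleporting cleanly may require temporarily bypassing it):

    ```cpp
    #include <cmath>
    #include <iostream>

    // Wrap a coordinate into [0, size): -5 -> 1195, 1205 -> 5, etc.
    float Wrap(float v, float size) {
        float w = std::fmod(v, size);
        return (w < 0.0f) ? w + size : w;
    }

    struct Pos { float x, y, z; };

    // Assumed terrain extent from the post: 1200 x 1200 units.
    Pos WrapToTerrain(Pos p) {
        const float kSize = 1200.0f;
        return Pos{Wrap(p.x, kSize), p.y, Wrap(p.z, kSize)};
    }

    int main() {
        Pos p = WrapToTerrain({-5.0f, 1550.0f, 1205.0f});
        std::cout << p.x << " " << p.z << "\n"; // 1195 5
    }
    ```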

    Read the article

  • Which game engine to use for an Angry Birds-style game? [JAVA] [on hold]

    - by Arch1tect
    Our team is building an Angry Birds-style game, and we have only about ten days. The game is a little more complex than Angry Birds because there are two players: each has a castle with pigs to protect (not destroy :)), and the goal is to destroy the other player's pigs. I wonder what game engine would help us finish this game most efficiently. We at least need a physics engine, but I guess a full game engine is more helpful, since it usually includes a physics engine - correct me if I'm wrong. (So I'm wondering which game engine I should use; if it's just a physics engine, I'll use Box2D.) Networking may or may not be added later, depending on the time we have. Thanks in advance for any advice!
    EDIT: image looks small, I'll add one:

    Read the article

  • What is the proper way to maintain the angle of a gun mounted on a car?

    - by Blair
    So I am making a simple game. I want to put a gun on top of a car, and I want to be able to control the angle of the gun. Basically it can point forward all the way, so that it is parallel to the ground facing the direction the car is moving, or it can point behind the car, and any of the angles in between these positions. I have something like the following right now, but it's not really working. Is there a better way to do this that I am not seeing?

        #This will place the car
        glPushMatrix()
        glTranslatef(self.position.x, 1.5, self.position.z)
        glRotated(self.rotation, 0.0, 1.0, 0.0)
        glScaled(0.5, 0.5, 0.5)
        glCallList(self.model.gl_list)
        glPopMatrix()

        #This will place the gun on top
        glPushMatrix()
        glTranslatef(self.position.x, 2.5, self.position.z)
        glRotated(self.tube_angle, self.direction.z, 0.0, self.direction.x)
        print self.direction.z
        glRotated(45, self.position.z, 0.0, self.position.x)
        glScaled(1.0, 0.5, 1.0)
        glCallList(self.tube.gl_list)
        glPopMatrix()

    This almost works. It moves the gun up and down, but when the car moves around, the angle of the gun changes. Not what I want.
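    The standard fix is to express the gun in the car's local frame and let the matrix stack do the composition: apply the car's transform once, draw the car, then translate to the mount point and rotate the gun about the car's local pitch axis. A hedged C++/legacy-OpenGL sketch of that ordering (the draw functions and mount-point offsets are placeholders, not the poster's values):

    ```cpp
    #include <GL/gl.h>

    // Placeholder draw calls standing in for the poster's display lists.
    void DrawCarModel() {}
    void DrawGunModel() {}

    // carX/carZ, carHeading: the car's position and yaw in world space.
    // gunPitch: the gun's elevation in the CAR's frame, 0 = level.
    void DrawCarWithGun(float carX, float carZ, float carHeading, float gunPitch) {
        glPushMatrix();
        // One transform for the whole vehicle: everything drawn inside
        // inherits the car's position and heading automatically.
        glTranslatef(carX, 1.5f, carZ);
        glRotatef(carHeading, 0.0f, 1.0f, 0.0f);
        DrawCarModel();

        glPushMatrix();
        glTranslatef(0.0f, 1.0f, 0.0f);        // mount point, in car-local units
        glRotatef(gunPitch, 1.0f, 0.0f, 0.0f); // pitch about the car's local X
        DrawGunModel();
        glPopMatrix();

        glPopMatrix();
    }
    ```

    Because the gun is drawn inside the car's matrix scope, its pitch stays fixed relative to the car no matter how the car moves or turns.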

    Read the article

  • Updating games for iOS 6 and new iPhone/iPod Touch

    - by SundayMonday
    Say I have a game that runs full-screen on the iPhone 4S and older devices. The balance of the game is just right for the 480 x 320 screen and its associated aspect ratio. Now I want to update my game to run full-screen on the new iPhone/iPod Touch, where the aspect ratio of the screen is different.
    It seems like this can be challenging for some games in terms of maintaining the "balance". For example, if the extra screen space were just tacked onto the right side of Jetpack Joyride, the balance would be thrown off, since the user now has more time to see and react to obstacles. It could also be challenging in terms of code maintenance. Perhaps Jetpack Joyride would slightly increase the speed of approaching obstacles when the game is played on newer devices; however, this quickly becomes messy when extra conditional statements are added all over the code. One solution is to have some parameters that are set in one place at start-up depending on the device type.
    What are some strategies for updating iOS games to run on the new iPhone and iPod Touch?

    Read the article

  • How can player actions be "judged morally" in a measurable way?

    - by Sebastien Diot
    While measuring a player's "skill" and "effort" is usually easy, adding some "less objective" statistics can give the player supplementary goals, especially in a MUD/RPG context. What I mean is that apart from counting how many orcs were killed and gems collected, it would be interesting to have something along the lines of the traditional Good/Evil, Lawful/Chaotic rankings of paper-based RPGs, to add "dimension" to the game. But computers cannot differentiate good from evil effectively (nor can humans, in many cases), and if you have a set of "laws" precise enough that you can tell exactly when the player breaks them, then it generally makes more sense to prevent the player from performing that action in the first place. One example could be a creation/destruction axis (if players are at all allowed to create/build things), possibly in the form of the general effect of the player's actions on the "ecology". So what else is there that can be effectively measured and would provide a sense of "morality" for the player? The more axes I have to measure, the more goals the player can have, and therefore the longer the game can last. This also gives players more ways of "differentiating" themselves among hordes of other players of the same "class" with a similar "kit".

    Read the article

  • Server for online browser game

    - by Tim Rogers
    I am going to be making an online single-player browser game. The online element is needed so that a player can log in and store the state of their game. This will include things like what buildings have been made and where they have been positioned, as well as the user's personal statistics and achievements. At this point in time, I am expecting all of the game logic to be performed client-side. So far, I am thinking I will use Flash for creating the client side of the game. I am also creating a MySQL database to store all the user information. My question is: how do I connect the two? Presumably I will need some sort of server application which will listen for incoming requests from any clients, perform the SQL query, and then return the data. Does anyone have any recommendations of what technology/language to use?

    Read the article

  • Fitting a rectangle into the screen with XNA

    - by alecnash
    I am drawing a rectangle with primitives in XNA. The width is:

        width = GraphicsDevice.Viewport.Width

    and the height is:

        height = GraphicsDevice.Viewport.Height

    I am trying to fit this rectangle in the screen (using different screens and devices), but I am not sure where to put the camera on the Z axis. Sometimes the camera is too close and sometimes too far. This is what I am using to get the camera distance:

        // Height of pyramid
        float alpha = 0;
        float beta = 0;
        float gamma = 0;
        alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
        beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
        gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
        position = new Vector3(0, 0, gamma);

    Any idea where to put the camera on the Z axis?
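    For reference, the distance the snippet is reaching for follows from a single right triangle: a perspective camera with vertical field of view $\theta$ sees a rectangle of full height $h$ exactly when placed at

    $$ d = \frac{h/2}{\tan(\theta/2)} $$

    Performing the same check against the width with the horizontal field of view, and taking the larger of the two distances, ensures both dimensions fit on any aspect ratio.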

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes, and music. Once more, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file with their extensions omitted, zipped in a folder that gets encrypted.
    At first using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything to JSON from XML, but I think MySQL might be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, in a way that would be easier to operate on. The data is mostly primitives and strings; the only ArrayList of values I have, I can just concat into a single field and parse later.

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:
    - Duplication & enlargement of the model with flipped normals (not an option for me)
    - Sobel filter / fragment shader approaches to edge detection
    - Stencil buffer approaches to edge detection
    - Geometry (or vertex) shader approaches that calculate face and edge normals
    Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, e.g. for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do?
    What I want - varying line thickness with intrusive lines inside the silhouette... What I don't want...

    Read the article

  • 2D animation frames vs 3D animation for a small indie project: timing considerations

    - by mm24
    Pretty lame question, but I was wondering... I am developing a 2D game using Cocos2D for iOS. The artwork till now is all 2D (it's a shooter game), but some of the characters would benefit from complex animations (e.g. 20 frames). I feel a bit stupid because I only now came across the fact that it is possible to export 3D animation to 2D frames and then use them in Cocos2D. The thing that put me off 3D gaming at first was that it takes more than one person in a team to do it properly (illustrator, 3D modeller, 3D animator, and programmer). Now I feel a bit stupid because, having a 3D model, I could create and modify the poses whenever I wanted (I would have to ask the 3D animator, which I guess would be time-expensive). Instead, now it is me and two illustrators (as I require many frames per character). Is my impression right that it would have taken much longer, or not? Are there any other project management considerations that apply here? Sorry if for some this might be trivial, but it is my first "indie game developer experience".

    Read the article

  • Rotate a vector by given degrees (errors when value over 90)

    - by Ivan
    I created a function to rotate a vector by a given number of degrees. It seems to work fine when given values in the range -90 to +90. Beyond this, the amount of rotation decreases; i.e., I think objects are rotating the same amount for 80 and 100 degrees. I think this diagram might be a clue to my problem, but I don't quite understand what it's showing. Must I use a different trig function depending on the radians value? The programming examples I've been able to find look similar to mine (not varying the trig functions).

        Vector2D.prototype.rotate = function(angleDegrees) {
            var radians = angleDegrees * (Math.PI / 180);
            var ca = Math.cos(radians);
            var sa = Math.sin(radians);
            var rx = this.x * ca - this.y * sa;
            var ry = this.x * sa + this.y * ca;
            this.x = rx;
            this.y = ry;
        };
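    For reference, the function above implements the standard 2D rotation

    $$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} $$

    which is well defined for every angle, not just -90 to +90, so the trig itself does not need to vary. The symptom described usually points at how the angle is obtained or accumulated (for example, re-deriving it from an inverse trig function such as asin, whose range is limited to ±90 degrees) rather than at the rotation code itself.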

    Read the article

  • Questions about game states

    - by MrPlow
    I'm trying to make a framework for a game I've wanted to do for quite a while. The first thing I decided to implement was a state system for game states. When my "original" idea of having a doubly linked list of game states failed, I found this blog and liked the idea of a stack-based game state manager. However, there were a few things I found weird:
    - Instead of RAII, two class methods are used to initialize and destroy the state
    - Every game state class is a singleton (and singletons are bad, aren't they?)
    - Every GameState object is static
    So I took the idea, altered a few things, and got this:

    GameState.h:

        class GameState
        {
        private:
            bool m_paused;
        protected:
            StateManager& m_manager;
        public:
            GameState(StateManager& manager) : m_manager(manager), m_paused(false) {}
            virtual ~GameState() {}
            virtual void update() = 0;
            virtual void draw() = 0;
            virtual void handleEvents() = 0;
            void pause() { m_paused = true; }
            void resume() { m_paused = false; }
            void changeState(std::unique_ptr<GameState> state)
            {
                m_manager.changeState(std::move(state));
            }
        };

    StateManager.h:

        class GameState;

        class StateManager
        {
        private:
            std::vector< std::unique_ptr<GameState> > m_gameStates;
        public:
            StateManager();
            void changeState(std::unique_ptr<GameState> state);
            void pushState(std::unique_ptr<GameState> state);
            void popState();
            void update();
            void draw();
            void handleEvents();
        };

    StateManager.cpp:

        StateManager::StateManager() {}

        void StateManager::changeState( std::unique_ptr<GameState> state )
        {
            if(!m_gameStates.empty())
            {
                m_gameStates.pop_back();
            }
            m_gameStates.push_back( std::move(state) );
        }

        void StateManager::pushState(std::unique_ptr<GameState> state)
        {
            if(!m_gameStates.empty())
            {
                m_gameStates.back()->pause();
            }
            m_gameStates.push_back( std::move(state) );
        }

        void StateManager::popState()
        {
            if(!m_gameStates.empty())
                m_gameStates.pop_back();
        }

        void StateManager::update()
        {
            if(!m_gameStates.empty())
                m_gameStates.back()->update();
        }

        void StateManager::draw()
        {
            if(!m_gameStates.empty())
                m_gameStates.back()->draw();
        }

        void StateManager::handleEvents()
        {
            if(!m_gameStates.empty())
                m_gameStates.back()->handleEvents();
        }

    And it's used like this:

    main.cpp:

        StateManager states;
        states.changeState( std::unique_ptr<GameState>(new GameStateIntro(states)) );

        while(gamewindow::gameWindow.isOpen())
        {
            states.handleEvents();
            states.update();
            states.draw();
        }

    Constructors/destructors are used to create/destroy states instead of specialized class methods, state objects are no longer static but
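    To show how the pieces fit together, here is a hypothetical concrete state (the post references GameStateIntro but does not include its implementation, so this body is invented for illustration and compiles against the GameState.h above):

    ```cpp
    #include <iostream>

    // Hypothetical implementation of the GameStateIntro referenced in main.cpp.
    class GameStateIntro : public GameState
    {
    public:
        explicit GameStateIntro(StateManager& manager) : GameState(manager) {}

        void handleEvents() override
        {
            // On a "start" input event, a state could replace itself, e.g.
            // (GameStatePlay is another hypothetical state):
            // changeState(std::unique_ptr<GameState>(new GameStatePlay(m_manager)));
        }
        void update() override { /* advance the intro animation */ }
        void draw() override { std::cout << "intro screen\n"; }
    };
    ```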

    Read the article

  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image is a plane with two triangles, and each of them is illuminated differently (which is bad). The plane has 6 vertices; in the upper left side of the plane are 2 identical vertices (same in the lower right). These vectors are the same for each vertex:

    normal vector = 0, 1, 0 (red lines on image)
    tangent vector = 0, 0, -1 (green lines on image)
    bitangent vector = -1, 0, 0 (blue lines on image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried giving the tangents other values, but the effect was still similar.
    Here are my shaders.

    Vertex shader:

        #version 130
        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;
        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;
        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);
            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
            // Vector that goes from the vertex to the light, in camera space.
            // M is omitted because it's identity.
            vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;
            // UV of the vertex. No special space for this one.
            UV = vertexUV;
            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;
            // You can use dot products instead of building this matrix and
            // transposing it. See References for details.
            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            ));
            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130
        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;
        // Output data
        out vec3 color;
        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;
            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);
            // Local normal, in tangent space. V tex coordinate is inverted
            // because normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);
            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );
            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;
            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);
            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );
            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);
            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);
            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );
            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);
            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    Read the article

  • What's the difference between Canvas and WebGL?

    - by gadr90
    I'm thinking about using CAAT as a part of an HTML5 game engine. One of its features is the ability to render to Canvas and WebGL without changing anything in the client code. That is a good thing, but I haven't found precisely what the differences between those two technologies are. I would especially like to know the differences between Canvas and WebGL in the following regards:
    - Framerate
    - Desktop browser support
    - Mobile browser support
    - Futureproofability (TM)

    Read the article

  • Android Java rectangle collision detection not working

    - by Charlton Santana
    I had been hand-rolling a collision detection system, which was buggy. Then I came across using rectangles for collision detection. So I put it all in, and it does not work. I put a log in, and it never logged. Note to Java programmers who are not Android programmers: Android uses the word Rect instead of Rectangle.

    Code for Block.java:

        public Rect getBounds(){
            return new Rect (this.x, this.y, 10, 20);
        }

    Code for Sprite.java:

        public Rect getBounds(){
            return new Rect (this.x, this.y, 20, 20);
        }

    Code for MainGame.java:

        for(Block block : BLOCKS) {
            block.draw(canvas);
            block.rigidbody();
            Rect spriter = sprite.getBounds();
            Rect blockr = block.getBounds();
            if(spriter.intersect(blockr)){
                showgameover = 1;
                Log.d(TAG, "Game Over");
            }
        }

    Is anyone able to help?
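    Two things worth knowing about android.graphics.Rect here: its constructor takes (left, top, right, bottom) edge coordinates rather than a width and height, so bounds built as (x, y, width, height) can silently produce empty rectangles, and intersect() modifies the receiving Rect on success (the non-mutating test is the static Rect.intersects(a, b)). The underlying overlap test itself is tiny - a generic sketch in C++:

    ```cpp
    struct Box {
        int left, top, right, bottom; // edges, not width/height
    };

    // Axis-aligned overlap: the boxes intersect iff they overlap on both axes.
    bool Intersects(const Box& a, const Box& b) {
        return a.left < b.right && b.left < a.right &&
               a.top < b.bottom && b.top < a.bottom;
    }

    int main() {
        Box sprite{0, 0, 20, 20};
        Box block{10, 10, 20, 30}; // left/top at (10,10), right/bottom at (20,30)
        return Intersects(sprite, block) ? 0 : 1; // they overlap
    }
    ```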

    Read the article
