Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to implement normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image is a plane made of two triangles, and each of them is illuminated differently (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (the same in the lower right). Here are some vectors that are the same for each vertex:

        normal vector    =  0, 1,  0  (red lines on image)
        tangent vector   =  0, 0, -1  (green lines on image)
        bitangent vector = -1, 0,  0  (blue lines on image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried other values for the tangents, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space.
            // M is omitted because it's identity.
            vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace   = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace    = MV3x3 * vertexNormal_modelspace;

            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            )); // You can use dot products instead of building this matrix and transposing it. See References for details.

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace   = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Output data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);

            // Local normal, in tangent space. V tex coordinate is inverted because
            // normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);

            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;
            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);
            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);
            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);
            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As the threads update themselves, they produce tasks and put them into a public storage area. Once all the threads finish their update, each thread copies the tasks meant for it, one by one. After all the threads finish copying their tasks, we have the threads process those tasks and then update the threads simultaneously again, as described before. So that is the general idea of the task-scheduling part of our engine.

    The task scheduling itself works well, but here's the problem. Taking the Camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 ) -- Won't work now, because the render thread will process the task at the beginning of the next frame
        local newPos = camera:getPosition() -- Still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get back from the object is still the information from the last frame. Is there a way to solve this problem, or did we build the task-scheduling part the wrong way? Thanks for your answers :)
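    One possible direction, sketched in C# purely for illustration since the engine's scripting language isn't shown: give the calling thread a proxy that keeps its own local copy of the state it just wrote, so a get after a set within the same frame sees the new value, while the real change still reaches the render thread as a queued task. CameraProxy and the queue type are hypothetical names.

        using System.Collections.Concurrent;
        using System.Numerics;

        // Script-side proxy: it owns a local copy of the camera state, so a set
        // followed by a get in the same frame sees the new value, while the real
        // change is still delivered to the render thread as a task.
        public sealed class CameraProxy
        {
            private Vector3 localPosition;                      // script thread's view of the state
            private readonly ConcurrentQueue<Vector3> pending;  // tasks consumed by the render thread

            public CameraProxy(ConcurrentQueue<Vector3> renderTaskQueue)
            {
                pending = renderTaskQueue;
            }

            public Vector3 GetPosition() => localPosition;

            public void SetPosition(Vector3 p)
            {
                localPosition = p;   // visible to the script immediately
                pending.Enqueue(p);  // applied by the render thread next frame
            }
        }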

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles, and I have had some success with this:

        Vector3 gunToCam = cam.cameraPosition - getWorld.Translation;
        Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam);
        beamRight.Normalize();
        Vector3 beamUp = Vector3.Cross(beamRight, torpDirection);

        shipeffect.beamWorld = Matrix.Identity;
        shipeffect.beamWorld.Forward = (torpDirection) * 1f;
        shipeffect.beamWorld.Right = beamRight;
        shipeffect.beamWorld.Up = beamUp;
        shipeffect.beamWorld.Translation = shipeffect.beamPosition;

    (Note: I didn't write this logic myself; I just found it rather useful.) It seems to only face the camera at certain angles. For example, if I place the camera behind the plane, you can see that it only rolls around its axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? (Angles are not my specialty.) shipeffect is an object that holds these class variables:

        public Texture2D altBeam;
        public Model beam;
        public Matrix beamWorld;
        public Matrix[] gunTransforms;
        public Vector3 beamPosition;
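    For a plane that should face the camera on every axis, XNA's built-in billboard helpers are one option. A minimal sketch reusing the question's variable names (cam.cameraUp is an assumed field name; substitute whatever up vector the camera exposes):

        // Full billboard: the plane's orientation always faces the camera.
        Matrix billboard = Matrix.CreateBillboard(
            shipeffect.beamPosition,   // object position
            cam.cameraPosition,        // camera position
            cam.cameraUp,              // camera up vector (assumed field name)
            null);                     // let XNA derive the camera forward vector

        shipeffect.beamWorld = billboard;

        // Alternative: keep the beam locked to its travel direction and only spin
        // around that axis to face the camera (a "constrained" billboard).
        Matrix constrained = Matrix.CreateConstrainedBillboard(
            shipeffect.beamPosition,
            cam.cameraPosition,
            torpDirection,             // axis the beam is allowed to rotate around
            null, null);

    The constrained version is usually what a beam or projectile wants, since it keeps pointing along its flight path while still turning its flat side toward the viewer.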

    Read the article

  • Collision detection in 3D space

    - by dreta
    I've got to write what can be summed up as a complete 3D game from scratch this semester. Up until now I have only programmed 2D games in my spare time; the transition doesn't seem tough, and the game is simple. The only issue I have is collision detection. The only things I could find were AABBs, bounding spheres, or recommendations of various physics engines. I have to program a submarine that will be moving freely inside a cave system, and AFAIK I can't use physics libraries, so none of the above solves my problem.

    Up until now I was using SAT for my collision detection. Are there any similarly good algorithms, but designed for 3D collision? I'm not talking about octrees or other optimizations; I'm talking about direct collision detection of one set of 3D polygons with another set of 3D polygons. I thought about using SAT twice, projecting the mesh from the top and from the side, but then it seems so hard to even divide 3D space into convex shapes, and that seems like far too much computation even with octrees. How do professionals do it? Could somebody shed some light?
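    For what it's worth, SAT extends directly to 3D for convex shapes: the candidate separating axes are the face normals of both shapes plus the cross products of their edge directions. A rough sketch of that test follows (the ConvexMesh type is hypothetical, and a concave cave mesh would have to be decomposed into convex pieces or tested per triangle):

        using System;
        using System.Numerics;

        public sealed class ConvexMesh
        {
            public Vector3[] Vertices;
            public Vector3[] FaceNormals;
            public Vector3[] EdgeDirections;
        }

        public static class Sat3D
        {
            // Project both vertex sets onto the axis and check that the intervals overlap.
            static bool Overlaps(Vector3[] vertsA, Vector3[] vertsB, Vector3 axis)
            {
                if (axis.LengthSquared() < 1e-6f) return true; // parallel edges give a degenerate axis; skip it
                float minA = float.MaxValue, maxA = float.MinValue;
                float minB = float.MaxValue, maxB = float.MinValue;
                foreach (var v in vertsA) { float d = Vector3.Dot(v, axis); minA = Math.Min(minA, d); maxA = Math.Max(maxA, d); }
                foreach (var v in vertsB) { float d = Vector3.Dot(v, axis); minB = Math.Min(minB, d); maxB = Math.Max(maxB, d); }
                return maxA >= minB && maxB >= minA;
            }

            public static bool Intersects(ConvexMesh a, ConvexMesh b)
            {
                foreach (var n in a.FaceNormals)
                    if (!Overlaps(a.Vertices, b.Vertices, n)) return false;
                foreach (var n in b.FaceNormals)
                    if (!Overlaps(a.Vertices, b.Vertices, n)) return false;
                foreach (var ea in a.EdgeDirections)
                    foreach (var eb in b.EdgeDirections)
                        if (!Overlaps(a.Vertices, b.Vertices, Vector3.Cross(ea, eb))) return false;
                return true; // no separating axis found
            }
        }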

    Read the article

  • How to control a spaceship near a planet in Unity3D?

    - by tyjkenn
    Right now I have a spaceship orbiting a small planet. I'm trying to make an effective control system for that spaceship, but it always ends up spinning out of control. After spinning the ship to change direction, the thrusters thrust the wrong way. Normal airplane controls don't work, since the ship is able to leave the atmosphere and go to other planets, going "upside-down" during the journey. Could someone please enlighten me on how to get the thrusters to work the way they are supposed to?
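    One commonly used approach is to drive the ship entirely through its Rigidbody in local space, so "forward" always means the ship's own nose regardless of how it is oriented relative to the planet. A minimal sketch with placeholder axis names and force values:

        using UnityEngine;

        // Thrust and turning are applied in the ship's *local* axes, so the
        // controls stay relative to the ship even when it is "upside-down"
        // relative to a planet.
        public class ShipController : MonoBehaviour
        {
            public float thrust = 10f;
            public float turnTorque = 2f;
            private Rigidbody body;

            void Start()
            {
                body = GetComponent<Rigidbody>();
            }

            void FixedUpdate()
            {
                // Local-space thrust: +Z is the ship's own forward direction.
                body.AddRelativeForce(Vector3.forward * Input.GetAxis("Vertical") * thrust);

                // Local-space turning: yaw around the ship's own up axis.
                body.AddRelativeTorque(Vector3.up * Input.GetAxis("Horizontal") * turnTorque);
            }
        }

    Some angular drag on the Rigidbody (or manually damping rigidbody.angularVelocity) is usually what tames the spinning-out-of-control feel.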

    Read the article

  • I need to move an entity to the mouse location after i rightclick

    - by I.Hristov
    Well, I've read the related questions and answers but still can't find a way to move my champion to the mouse position after a right mouse-button click. I use this code at the top:

        float speed = (float)1/3;

    And this is in my void Update:

        // check if right mouse button is clicked
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            // gets the position of the mouse in mousePosition
            mousePosition = new Vector2(mouse.X, mouse.Y);

            // gets the current position of champion (the drawRectangle)
            currentChampionPosition = new Vector2(drawRectangle.X, drawRectangle.Y);

            // move champion to mouse position:
            // handles the case when the mouse position is really close to current position
            if (Math.Abs(currentChampionPosition.X - mousePosition.X) <= speed &&
                Math.Abs(currentChampionPosition.Y - mousePosition.Y) <= speed)
            {
                drawRectangle.X = (int)mousePosition.X;
                drawRectangle.Y = (int)mousePosition.Y;
            }
            else if (currentChampionPosition != mousePosition)
            {
                drawRectangle.X += (int)((mousePosition.X - currentChampionPosition.X) * speed);
                drawRectangle.Y += (int)((mousePosition.Y - currentChampionPosition.Y) * speed);
            }
        }
        previousButtonState = mouse.RightButton;

    What that code does at the moment is, on a click, bring the sprite 1/3 of the distance to the mouse, but only once. How do I make it keep moving all the time? It seems I am not updating the sprite at all.

    EDIT: I added the Vector2 as Nick said, and with speed changed to 50 it should be OK. I tried it with if ButtonState.Pressed and it works while the button is held down. Thanks. However, I wanted it to start moving on a single mouse click and keep moving until it reaches mousePosition. The edit to Nick's post says to create another Vector2, but I already have the one called mousePosition; I'm not sure how to use another one.

        // gets a Vector2 direction to move *by Nick Wilson
        Vector2 direction = mousePosition - currentChampionPosition;

        // make the direction vector a unit vector
        direction.Normalize();

        // multiply with speed (number of pixels)
        direction *= speed;

        // move champion to mouse position
        if (currentChampionPosition != mousePosition)
        {
            drawRectangle.X += (int)(direction.X);
            drawRectangle.Y += (int)(direction.Y);
        }
    }
    previousButtonState = mouse.RightButton;
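    A sketch of one way to make the movement persist across frames: store the click target once, then step toward it every Update until the champion arrives. The extra fields targetPosition (Vector2) and isMoving (bool) are assumed additions, and speed here is treated as a number of pixels per frame (e.g. 5f) rather than a fraction of the remaining distance:

        // On the click, only remember where to go.
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            targetPosition = new Vector2(mouse.X, mouse.Y);
            isMoving = true;
        }
        previousButtonState = mouse.RightButton;

        // Every Update, keep stepping toward the stored target.
        if (isMoving)
        {
            Vector2 current = new Vector2(drawRectangle.X, drawRectangle.Y);
            Vector2 toTarget = targetPosition - current;

            if (toTarget.Length() <= speed)
            {
                // Close enough: snap to the target and stop.
                drawRectangle.X = (int)targetPosition.X;
                drawRectangle.Y = (int)targetPosition.Y;
                isMoving = false;
            }
            else
            {
                toTarget.Normalize();           // unit direction
                current += toTarget * speed;    // advance a fixed number of pixels
                drawRectangle.X = (int)current.X;
                drawRectangle.Y = (int)current.Y;
            }
        }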

    Read the article

  • Repeat a part of spritesheet as background

    - by Moiblpadde
    So I'm trying to repeat a part of my spritesheet as a background (JS, canvas). My code so far:

        var canvas = $("#board")[0],
            ctx = canvas.getContext("2d"),
            sprite = new Image();

        sprite.src = "spritesheet.png";
        sprite.onload = function(){
            ctx.fillStyle = ctx.createPattern(spriteBg, "repeat");
            ctx.fillRect(0, 25, 500, 500);
        }

    This works, but as you can see, it repeats the whole sprite sheet, not just a part of it, and I just can't figure out how to do that D:

    Read the article

  • Question about separating game core engine from game graphics engine...

    - by Conrad Clark
    Suppose I have a SquareObject class, which implements IDrawable, an interface which contains the method void Draw(). I want to separate the drawing logic itself from the game core engine. My main idea is to create a static class which is responsible for dispatching actions to the graphics engine.

        public static class DrawDispatcher<T>
        {
            private static Action<T> DrawAction = new Action<T>((ObjectToDraw) => { });

            public static void SetDrawAction(Action<T> action)
            {
                DrawAction = action;
            }

            public static void Dispatch(T Obj)
            {
                DrawAction(Obj);
            }
        }

        public static class Extensions
        {
            public static void DispatchDraw<T>(this object Obj)
            {
                DrawDispatcher<T>.Dispatch((T)Obj);
            }
        }

    Then, on the core side:

        public class SquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

    And on the graphics side:

        public static class SquareRender
        {
            // stuff here

            public static void Initialize()
            {
                DrawDispatcher<SquareObject>.SetDrawAction((Square) => { /* my square rendering logic */ });
            }
        }

    Does this "pattern" follow best practices? As a plus, I could easily change the render scheme of each object by changing the DispatchDraw type parameter, as in:

        public class SuperSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

        public class RedSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<RedSquareObject>();
            }
            #endregion
        }

    RedSquareObject would have its own render method, but SuperSquareObject would render as a normal SquareObject. I'm just asking because I do not want to reinvent the wheel, and there may be a similar (and better) design pattern that I may not be aware of. Thanks in advance!

    Read the article

  • Using copyrighted sprites

    - by Zertalx
    I was thinking about making a Pac-Man clone. I know there is a similar question here, Using Copyrighted Images, and I know I can't use the original art from the game because it belongs to Namco. But if I design a character that has the shape of a circle with a slice cut out, it will look exactly like Pac-Man; maybe if I use green instead of yellow? Also, if the game plays like the original Pac-Man, is that wrong? I just want to make the game as a personal project and publish it on my site without getting into trouble.

    Read the article

  • Unity 5.1 audio issues (no sound in back channels)

    - by N0xus
    I've been trying to bring surround sound audio into my project. I've set my computer up to run in 5.1, and when I play a 6-channel audio file through Windows Media Player (a test file that announces left speaker, right speaker, etc.) it works fine. However, when I run it through Unity, all I get is the front 3 channels. I've set it to 5.1 in Edit - Project Settings - Audio, and I even set it in code with the following:

        void Start()
        {
            AudioSettings.speakerMode = AudioSpeakerMode.Mode5point1;
        }

    However, when I run a debug line of:

        print(AudioSettings.driverCaps);

    it tells me that Unity is only playing in stereo. Is there something I'm still not doing? I should also add that I've run 10 different tests using the 3D audio pan and spread options; I've set both to fully off, half way on, and full. Still the same results.
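    A possible avenue, sketched below: in Unity 5 a speaker-mode change generally only takes effect after the audio system is reset with a full AudioConfiguration, so assigning AudioSettings.speakerMode on its own in Start may not be enough. Whether this actually restores the rear channels will still depend on what the platform's audio driver reports:

        using UnityEngine;

        // Reconfigure the audio output to 5.1 by resetting the audio system with
        // a modified AudioConfiguration, then log what the driver and Unity report.
        public class SurroundSetup : MonoBehaviour
        {
            void Start()
            {
                AudioConfiguration config = AudioSettings.GetConfiguration();
                config.speakerMode = AudioSpeakerMode.Mode5point1;
                AudioSettings.Reset(config);   // restarts the audio output with 5.1

                Debug.Log(AudioSettings.driverCapabilities);  // what the device reports
                Debug.Log(AudioSettings.speakerMode);         // what Unity is now using
            }
        }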

    Read the article

  • How to give a ball a following texture trailing effect

    - by Evan Kohilas
    How do I draw copies of the leading texture so that a line of balls trails behind the leading one (without colliding)? So far I have tried to create the effect by placing another graphic 2 pixels off the first, but I don't see the second ball being drawn.

        spriteBatch.Draw(ballTexture, ballPos, null, Color.White, 0.0f,
            new Vector2(Ballpos.X + 2, ballPos.Y + 2), ballSize, SpriteEffects.None, 0);

    Thanks.
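    Note that in the call above the offset vector is being passed as SpriteBatch.Draw's origin argument, which shifts the rotation origin rather than the on-screen position. One way to get a trail, sketched as fragments meant to slot into the existing Update and Draw (assumes using System.Collections.Generic), is to keep a short queue of the ball's recent positions and draw a faded copy at each one:

        // Fields:
        Queue<Vector2> trail = new Queue<Vector2>();
        const int trailLength = 10;

        // In Update(), after moving the ball:
        trail.Enqueue(ballPos);
        if (trail.Count > trailLength)
            trail.Dequeue();

        // In Draw(), between spriteBatch.Begin()/End():
        int i = 0;
        foreach (Vector2 oldPos in trail)
        {
            float fade = (float)(i + 1) / trail.Count;   // older copies are dimmer
            spriteBatch.Draw(ballTexture, oldPos, null, Color.White * (fade * 0.5f),
                0.0f, Vector2.Zero, ballSize, SpriteEffects.None, 0);
            i++;
        }

        // Leading ball drawn last, on top of the trail.
        spriteBatch.Draw(ballTexture, ballPos, null, Color.White,
            0.0f, Vector2.Zero, ballSize, SpriteEffects.None, 0);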

    Read the article

  • the unity aspect ratio script looks good in computer but not in android phones

    - by Pooya Fayyaz
    I'm developing a game for Android devices, and I have a script that solves the aspect-ratio problem, but there is a problem in this code and I don't know why. It looks perfect on the computer, even when resizing the game window, but on mobile phones there is a problem. My game runs in landscape mode. This is the script:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class reso : MonoBehaviour
        {
            void Update()
            {
                // set the desired aspect ratio (the values in this example are
                // hard-coded for 16:9, but you could make them into public
                // variables instead so you can set them at design time)
                float targetaspect = 16.0f / 9.0f;

                // determine the game window's current aspect ratio
                float windowaspect = (float)Screen.width / (float)Screen.height;

                // current viewport height should be scaled by this amount
                float scaleheight = windowaspect / targetaspect;

                // obtain camera component so we can modify its viewport
                Camera camera = GetComponent<Camera>();

                // if scaled height is less than current height, add letterbox
                if (scaleheight < 1.0f && Screen.width <= 490)
                {
                    Rect rect = camera.rect;
                    rect.width = 1.0f;
                    rect.height = scaleheight;
                    rect.x = 0;
                    rect.y = (1.0f - scaleheight) / 2.0f;
                    camera.rect = rect;
                }
                else // add pillarbox
                {
                    float scalewidth = 1.0f / scaleheight;
                    Rect rect = camera.rect;
                    rect.width = scalewidth;
                    rect.height = 1.0f;
                    rect.x = (1.0f - scalewidth) / 2.0f;
                    rect.y = 0;
                    camera.rect = rect;
                }
            }
        }

    I figure that my problem occurs in this part of the script:

        if (scaleheight < 1.0f)
        {
            Rect rect = camera.rect;
            rect.width = 1.0f;
            rect.height = scaleheight;
            rect.x = 0;
            rect.y = (1.0f - scaleheight) / 2.0f;
            camera.rect = rect;
        }

    and it looks like this on my mobile phone in portrait: [screenshot] and in landscape mode: [screenshot]

    Read the article

  • Slick2d Spritesheet showing whole image

    - by BotskoNet
    I'm trying to show a single subimage from a sprite sheet. Using slick2d's SpriteSheet class, all it's doing is showing me the entire image, but scaled down to fit the cell dimensions. The image is 96x192 and should have cells of 32x32. The code:

        SpriteSheet spriteSheet = new SpriteSheet("images/" + file, 32, 32);
        System.out.println("Horiz Count: " + spriteSheet.getHorizontalCount());
        System.out.println("Vert Count: " + spriteSheet.getVerticalCount());
        System.out.println("Height: " + spriteSheet.getHeight());
        System.out.println("Width: " + spriteSheet.getWidth());
        System.out.println("Texture Width: " + spriteSheet.getTextureWidth());
        System.out.println("Texture Height: " + spriteSheet.getTextureHeight());

    Prints:

        Horiz Count: 3
        Vert Count: 6
        Height: 192
        Width: 96
        Texture Width: 0.75
        Texture Height: 0.75

    Not sure what the texture dimensions refer to, but the rest is entirely accurate. However, when I draw the icon, the entire sprite image shows, scaled down to 32x32:

        Image image = spriteSheet.getSprite(1, 0); // a test

        image.bind();

        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0,0);
        GL11.glVertex2f(x,y);
        GL11.glTexCoord2f(1,0);
        GL11.glVertex2f(x+image.getWidth(),y);
        GL11.glTexCoord2f(1,1);
        GL11.glVertex2f(x+image.getWidth(),y+image.getHeight());
        GL11.glTexCoord2f(0,1);
        GL11.glVertex2f(x,y+image.getHeight());
        GL11.glEnd();

        GL11.glDisable(GL11.GL_BLEND);

    Read the article

  • DX9 Deferred Rendering, GBuffer displays as clear color only

    - by Fire31
    I'm trying to implement Catalin Zima's Deferred Renderer in a very lightweight c++ DirectX 9 app (only renders a skydome and a model), at this moment I'm trying to render the gbuffer, but I'm having a problem, the screen shows only the clear color, no matter how much I move the camera around. However, removing all the render target operations lets the app render the scene normally, even if the models are being applied the renderGBuffer effect. Any ideas of what I'm doing wrong?

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of OpenGL code and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by a 180-degree rotation around Z: does that go into the view matrix itself, or into the camera before the view matrix is built? Any code would be helpful, thanks.
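    A conceptual sketch of what such a rotation looks like, using XNA-style matrix helpers purely for illustration (in raw OpenGL it is the same idea: multiply a rotation of pi about Z into the view matrix). A 180-degree Z rotation simply negates X and Y; whether it belongs on the left or the right of the existing view matrix depends on the API's row- versus column-vector convention, so if one ordering comes out flipped the wrong way, try the other:

        // 180 degrees about Z: negates the X and Y axes of whatever it multiplies.
        Matrix flipZ = Matrix.CreateRotationZ(MathHelper.Pi);

        Matrix viewForApiRowVector    = view * flipZ;   // rotation applied after the view transform (row-vector style)
        Matrix viewForApiColumnVector = flipZ * view;   // the column-vector ordering of the same idea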

    Read the article

  • Mac mini 2012 graphic upgrade for UE4 Unity3D Blender

    - by DaCrAn
    I have a Mac mini (late 2012), i7, 16 GB Vengeance RAM, and the integrated Intel HD 4000 graphics. I recently bought a Thunderbolt PCIe expansion chassis that supports a PCIe 2.0 x16 graphics card, with space for a full-length card. I have doubts about which graphics card is going to give me the best results for using Unreal Engine 4 (UE4), Unity3D, and Blender. My budget covers an Nvidia Quadro K4000 3GB or an ATI FirePro W7000 4GB. Any recommendation? Which professional graphics card would be better for designing 3D games? Thanks. DaCrAn

    Read the article

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation. I have a Scene class to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons.

    A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects.

    B) Require some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first.

    Option A is pretty solid, but the problem is with the lights. In what layer do I add a light? Does it work across layers? On all lower layers? And I still need the z-coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z clamped on).

    Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
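    A sketch of the drawing side of option B, written in C# only because that is what most of the code on this page uses; the same shape translates directly to JavaScript. Each drawable carries a z value and the scene sorts with a stable sort before drawing, so objects with equal z keep their insertion order, which behaves like implicit layers:

        using System.Collections.Generic;
        using System.Linq;

        public interface IDrawable2D
        {
            float Z { get; }
            void Draw();
        }

        public sealed class Scene
        {
            private readonly List<IDrawable2D> objects = new List<IDrawable2D>();

            public void Add(IDrawable2D obj) => objects.Add(obj);
            public void Remove(IDrawable2D obj) => objects.Remove(obj);

            public void Draw()
            {
                // OrderBy is a stable sort: lower z is drawn first,
                // equal z keeps insertion order.
                foreach (var obj in objects.OrderBy(o => o.Z))
                    obj.Draw();
            }
        }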

    Read the article

  • Typical collision detection

    - by marcg11
    I would like to know how the typical collision detection of most games works. For example, you control a character who can move in two dimensions (no up and down). Now let's assume he walks into a wall: in most games, depending on the character's angle and the normal of the bounding-box face, the player is only stopped along one axis, and keeps moving along the other axis, sliding along the wall. How is that done? I've only managed to stop the character from going through the wall by setting the position back to the one from the previous frame whenever the new position collides with the bounding box. But this just makes the player stop sharply and unrealistically.
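    The usual trick is axis separation: move and resolve one axis at a time, so a collision on X still lets the Y part of the motion through, which is what produces the slide along the wall. A minimal sketch, where CollidesWithWorld is a placeholder for whatever AABB-versus-world test already exists:

        Vector2 velocity = GetDesiredVelocity();   // placeholder for the input/physics step

        // X axis first
        position.X += velocity.X;
        if (CollidesWithWorld(position))
            position.X -= velocity.X;   // undo only the X part (or snap flush against the wall)

        // Then Y axis
        position.Y += velocity.Y;
        if (CollidesWithWorld(position))
            position.Y -= velocity.Y;   // undo only the Y part

    Snapping the blocked axis flush to the wall surface instead of simply undoing the movement removes the small gap and jitter that a plain undo can leave.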

    Read the article

  • Having to check collisions twice per game tic

    - by user22241
    I have vertically moving elevators (3 solid tiles wide) and static solid tiles. Each is a separate entity and therefore has its own collision routine (to check for, and resolve, collisions with the main character). I check my vertical collisions after the character's vertical movement and then horizontal collisions after horizontal movement. The problem is that I want my platform to kill the player if it squashes him from the top, and also if he's on a moving platform (that is moving up) that squashes him into a solid block.

    Correct behaviour (player on solid blocks being squashed from above by a descending elevator): gravity pushes the character into the solid block, the solid block collision routine corrects the character's position and sits him on the solid block, which pushes him into the moving elevator; the elevator routine then checks for collision and kills the player. This assumes I am checking solid blocks first, then elevator collisions.

    However, if it's the other way around, this happens (incorrect behaviour, player on an ascending elevator gets pushed into solid blocks above): the player is on an elevator moving up, gravity pushes him into the elevator, the solid block collision routine detects no collision, so no action is taken. The elevator collision routine detects that the character has been pushed into the elevator by gravity and corrects this by moving the character up and sitting him on the elevator, which pushes him into the solid blocks above. But the solid block vertical routine has already run for this tic, so the game continues and the next solid block collision encountered is the horizontal routine. That detects a collision and moves the character out of the collision to the left or right of the block, which looks odd to say the least (the character should get killed here).

    The only way I've managed to get this working correctly is by running the solid block collision detection, then the elevator collision detection, then the solid block collision detection again straight after. This is clearly wasteful, but I can't figure out how else to do it. Any help would be appreciated.

    Read the article

  • Permanent death in a MUD (think command line MMORPG)

    - by Luke Laupheimer
    I have considered writing a MUD for years, and I have a lot of ideas my friends think are really cool (and that's how I'd hope to get anywhere -- word of mouth). Thing is, there's one thing I have always wanted, that my friends and strangers hated: permanent death. Now, the emotional response I get to this is visceral revulsion, every time. I'm pretty sure I am the only person that wants this, or if I'm not, I'm a tiny minority. Now, the reason I want it is because I want the actions of the players to matter. Unlike a lot of other MUDs, which have a set of static city-states and social institutions etc, I want the things my players do, should I get any, to actually change the situation. And that includes killing people. If you kill someone, you didn't send them to time out, you killed them. What happens when you kill people? They go away. They don't come back in half an hour to smack talk you some more. They're gone. Forever. By making death non-permanent, you make death not matter. It would be similar if a climax to a character's arc is getting a speeding ticket. It cheapens it. Non-permanent death cheapens death. How can I: 1) Convince my players (and random people!) that this is actually a good idea?, or 2) Find some other way to make death and violence matter as much as it does in real life (except within the game, of course) sans character deletion? What alternatives are there out there?

    Read the article

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating camera collision for my room model, which consists of sofas, tables, and other models. The user will be moving the camera forward and back and rotating it, so I need to make sure that the camera does not collide with any of the models in the room. I represent all the models inside the room with BoundingBox[] and the camera with a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points to be at Vector(-XXX,-XXX,-XXX), where X is a digit. Also, I found the radius of some models to be too large (in the thousands; I just looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.

    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);

        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);

        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();

            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    On camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.
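    As an alternative sketch: XNA already stores a model-space BoundingSphere on every ModelMesh, so the per-mesh collision volumes can be built by transforming those spheres by boneTransform * world instead of rebuilding boxes from corners ('world' being whatever world matrix the room model is drawn with, identity if none):

        List<BoundingSphere> meshSpheres = new List<BoundingSphere>();

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in myModel.Meshes)
        {
            // Move the mesh's model-space sphere into world space.
            BoundingSphere sphere = mesh.BoundingSphere.Transform(transforms[mesh.ParentBone.Index] * world);
            meshSpheres.Add(sphere);
        }

        // On camera update:
        BoundingSphere cameraSphere = new BoundingSphere(this.cameraPos, 5.0f);
        foreach (BoundingSphere s in meshSpheres)
        {
            if (cameraSphere.Intersects(s))
            {
                // block or undo the camera move
            }
        }

    Whichever shape is used, the collision volumes need the same world transform, including any scale, that the model is drawn with; otherwise radii in the thousands and far-off corner points like the ones described are exactly what shows up.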

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430

        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;

        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430

        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;

        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1,0,0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • How do you properly organize a commercial game?

    - by Reactorcore
    For the past few months I've been studying programming, and I've finally learned how to code. But one thing that is confusing me is how to properly organize the design of a game project, code-wise.

    The game I'm building is a pretty standard commercial game. It has the basic components of a normal game: a world, characters, and items interacting with each other, all of it run by a game manager. Basically you play as a hero in a world and do stuff: fight, explore, and interact. Think of your standard adventure game that starts off with an intro, goes to the menu system, then gets into the game and back to the menu. Pretty much like 99% of commercial games or otherwise serious game projects. That's what I'm aiming at.

    The problem is: how do you properly code a commercial game architecture? How do you organize it? How do you keep it from becoming unmaintainable spaghetti code? What specific things should be kept in mind when building this, code-wise?

    How you can help me:
    a) Please tell me how you code your own game projects. What is your thought process when designing the architecture?
    b) Recommend books, blogs, tutorials, videos, or anything else on how to organize a commercial video game.
    c) Give hints and tips on dos/don'ts when building a game, code-wise.

    Please help!

    Read the article

  • Actionscript 3.0 - Enemies do not move right in my platformer game

    - by Christian Basar
    I am making a side-scrolling platformer game in Flash (Actionscript 3.0). I have made lots of progress lately, but I have come across a new problem. I will give some background first. My game level's terrain (or 'floor') is referenced by a MovieClip variable called 'floor.' My desire is to have the Player and enemy characters walk along the terrain. I have gotten the Player character to move on the terrain just fine; he walks up/down hills and falls whenever there is no ground beneath him. Here is the code I created to allow the Player to follow the terrain correctly. Much more code is used to control the Player, but only this code deals with the Player character's following of the terrain and gravity.

        // If the Player's not on the ground (not touching the 'floor' MovieClip)...
        if (!onGround)
        {
            // Disable ducking
            downKeyPressed = false;

            // Increase the Player's 'y' position by his 'y' velocity
            player.y += playerYVel;
        }

        // Increase the 'playerYVel' variable so that the Player will fall
        // progressively faster down the screen. This code technically
        // runs "all the time" but in reality it only affects the player
        // when he's off the ground.
        playerYVel += gravity;

        // Give the Player a terminal velocity of 15 px/frame
        if (playerYVel > 15)
        {
            playerYVel = 15;
        }

        // If the Player has not hit the 'floor,' increase his falling
        // speed
        if (! floor.hitTestPoint(player.x, player.y, true))
        {
            player.y += playerYVel;

            // The Player is not on the ground when he's not touching it
            onGround = false;
        }

    Since getting this code to work for the Player, I have created a 'SkullDemon' class, which is one of the planned enemies for my game. I want the 'SkullDemon' objects to move along the terrain like the Player does. With lots of great help, I have already coded the EventListeners, etc. necessary for the 'SkullDemons' to move. Unfortunately, I am having trouble getting them to move along the terrain. In fact, they do not touch the terrain at all; they move along the top of the boundary of the 'floor' MovieClip! I had a simple text diagram showing what I mean, but unfortunately Stackoverflow does not format it correctly. I hope my problem is clear from my description.

    Strangely enough, my code for the Player's movement and the 'SkullDemon's' movement is almost exactly the same, yet the 'SkullDemons' do not move like the Player does. Here is my code for the SkullDemon movement:

        // Move all of the Skull Demons using this method
        protected function moveSkullDemons():void
        {
            // Go through the whole 'skullDemonContainer'
            for (var skullDi:int = 0; skullDi < skullDemonContainer.numChildren; skullDi++)
            {
                // Set the SkullDemon 'instance' variable to equal the current SkullDemon
                skullDIns = SkullDemon(skullDemonContainer.getChildAt(skullDi));

                // For now, just move the Skull Demons left at 5 units per second
                skullDIns.x -= 5;

                // If the Skull Demon has not hit the 'floor,' increase his falling
                // speed
                if (! floor.hitTestPoint(skullDIns.x, skullDIns.y, true))
                {
                    // Increase the Skull Demon's 'y' position by his 'y' velocity
                    skullDIns.y += skullDIns.sdYVel;

                    // The Skull Demon is not on the ground when he's not touching it
                    skullDIns.sdOnGround = false;
                }

                // Increase the 'sdYVel' variable so that the Skull Demon will fall
                // progressively faster down the screen. This code technically
                // runs "all the time" but in reality it only affects the Skull Demon
                // when he's off the ground.
                if (! skullDIns.sdOnGround)
                {
                    skullDIns.sdYVel += skullDIns.sdGravity;

                    // Give the Skull Demon a terminal velocity of 15 px/frame
                    if (skullDIns.sdYVel > 15)
                    {
                        skullDIns.sdYVel = 15;
                    }
                }

                // What happens when the Skull Demon lands on the ground after a fall?
                // The Skull Demon is only on the ground ('onGround == true') when
                // the ground is touching the Skull Demon MovieClip's origin point,
                // which is at the Skull Demon's bottom centre
                for (var i:int = 0; i < 10; i++)
                {
                    // The Skull Demon is only on the ground ('onGround == true') when
                    // the ground is touching the Skull Demon MovieClip's origin point,
                    // which is at the Skull Demon's bottom centre
                    if (floor.hitTestPoint(skullDIns.x, skullDIns.y, true))
                    {
                        skullDIns.y = skullDIns.y;

                        // Set the Skull Demon's y-axis speed to 0
                        skullDIns.sdYVel = 0;

                        // The Skull Demon is on the ground again
                        skullDIns.sdOnGround = true;
                    }
                }
            }
        } // End of 'moveSkullDemons()' function

    It is almost like the 'SkullDemons' are interacting with the 'floor' MovieClip using the hitTestObject() function, and not the hitTestPoint() function which is what I want, and which works for the Player character. I am confused about this problem and would appreciate any help you could give me. Thanks!

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is: how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment-mapping shader such a parameter doesn't make a lot of sense. The way I have it right now, every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

        foreach (ModelMesh m in model.Meshes)
        {
            // Slightly change the way the world matrix is calculated if using the bunny
            // object, since it is not quite centered in object space
            if (isDrawBunny == true)
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation *
                        Matrix.CreateTranslation(position + bunnyPositionTransform);
            }
            else // If not rendering the bunny, draw normally
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation *
                        Matrix.CreateTranslation(position);
            }

            foreach (Effect e in m.Effects)
            {
                Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
                e.Parameters["ViewProjection"].SetValue(ViewProjection);
                e.Parameters["World"].SetValue(world);
                e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
                e.Parameters["CameraPosition"].SetValue(camera.Position);
                e.Parameters["LightColor"].SetValue(lightColor);
                e.Parameters["MaterialColor"].SetValue(materialColor);
                e.Parameters["shininess"].SetValue(shininess);
                //e.Parameters
                //e.Parameters["normal"]
            }

            m.Draw();
        }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is: is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
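    One lightweight option, sketched below: in XNA the EffectParameterCollection string indexer returns null when an effect has no parameter with that name, so shader-specific values like shininess can be set only where they actually exist. The EnvironmentMap parameter and environmentMapTexture variable are hypothetical names used for illustration:

        foreach (Effect e in m.Effects)
        {
            // Parameters shared by every shader.
            e.Parameters["World"].SetValue(world);
            e.Parameters["ViewProjection"].SetValue(camera.ViewMatrix * camera.ProjectionMatrix);

            // Shader-specific parameters: only set them if this effect declares them.
            EffectParameter shiny = e.Parameters["shininess"];
            if (shiny != null)                    // only Phong-style shaders have this
                shiny.SetValue(shininess);

            EffectParameter envMap = e.Parameters["EnvironmentMap"];   // hypothetical name
            if (envMap != null)
                envMap.SetValue(environmentMapTexture);
        }

    A heavier but cleaner alternative is a small material class per shader that knows which uniforms it owns and is asked to apply itself before the mesh draws, which keeps the per-effect knowledge out of the draw loop entirely.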

    Read the article
