Search Results

Search found 38203 results on 1529 pages for 'library development'.


  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:
    - The list can be sorted so each shader is bound only once. Otherwise each object would have to bind its shader, because it can't know whether it is already active.
    - The objects can be sorted and grouped.
    - Easier to swap APIs. With a few macro lines it is easy to switch between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.
    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work. First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up as a lot of extra pointer weight, though I can sort the list of pointers every so often. Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however. Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
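
    A minimal C# sketch of the first idea, with illustrative names (not from any particular engine): components register themselves with the renderer, the list is re-sorted lazily only when it changes, and the shader is re-bound only when the sort key changes between draws. The per-renderable cost is a single reference, so the "pointer weight" stays small.

    // Sketch of the register-on-creation idea; all names here are illustrative assumptions.
    using System;
    using System.Collections.Generic;

    public class RenderComponent
    {
        public int ShaderId;          // sort key: group draws that share a shader
        public Action Draw;           // whatever actually issues the draw call
    }

    public class Renderer
    {
        private readonly List<RenderComponent> components = new List<RenderComponent>();
        private bool dirty;

        public void Register(RenderComponent c)   { components.Add(c); dirty = true; }
        public void Unregister(RenderComponent c) { components.Remove(c); dirty = true; }

        public void RenderAll()
        {
            // Re-sort only when the set of renderables changed, not every frame.
            if (dirty) { components.Sort((a, b) => a.ShaderId.CompareTo(b.ShaderId)); dirty = false; }

            int boundShader = -1;
            foreach (var c in components)
            {
                if (c.ShaderId != boundShader) { /* bind shader here */ boundShader = c.ShaderId; }
                c.Draw();
            }
        }
    }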

    Read the article

  • Physics Loop in a NodeJS/Socket.IO Environment

    - by Thomas Mosey
    I'm developing a 2D HTML5 Canvas game, and I am trying to think of the most efficient way to implement a physics loop on the server end, running NodeJS and Socket.IO. The only method I've thought of is using setTimeout/setInterval; is there any better way? Any examples would be appreciated. EDIT: The game is a top-down game, like Zelda and the older Pokemon games. Most of the physics done in the loop will be simple intersection tests.
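
    A common pattern is a fixed-timestep accumulator: a coarse timer (setInterval is fine) wakes the loop up, and the simulation advances in fixed steps regardless of timer jitter. A minimal sketch, written here in C# for clarity; the same structure ports directly to a setInterval callback in Node with process.hrtime() or Date.now() supplying the elapsed time.

    // Fixed-timestep accumulator, sketched in C#; names and the 30 Hz step are assumptions.
    using System.Diagnostics;

    public class PhysicsLoop
    {
        const double Step = 1.0 / 30.0;              // simulate at a fixed 30 Hz
        double accumulator;
        double last;
        readonly Stopwatch clock = Stopwatch.StartNew();

        // Call this from a coarse timer (e.g. every ~15 ms); it runs 0..n fixed steps per tick.
        public void Tick()
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - last;
            last = now;

            while (accumulator >= Step)
            {
                Simulate(Step);                      // movement, intersection tests, etc.
                accumulator -= Step;
            }
        }

        void Simulate(double dt) { /* game physics here */ }
    }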

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem: I have two objects, A and B. A's initial values: Position (3,3,3), Rotation (45,10,0), Scale (1,2,2.5). B's initial values: Position (1,1,1), Rotation (10,34,18), Scale (1.5,2,1). If I now make B a child of A, I need to re-calculate B's Position, Rotation and Scale relative to A such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2,-2,-2), since A is now its center and (-2,-2,-2) keeps B in the same position. I think I have the Position and Scale figured out, but not the Rotation. So I opened Unity and ran the same example, and I noticed that when making a child object, the child object did not move at all, but had its Position, Rotation and Scale values changed relative to the parent. For example: Unity (parent object A): Position (0,0,0), Rotation (45,10,0), Scale (1,1,1). Unity (child object B): Position (0,0,0), Rotation (0,0,0), Scale (1,1,1). When B becomes a child of A, its rotation values become X: -44.13605, Y: -14.00195, Z: 9.851074. If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can put down all the equations for the Position, Rotation and Scale then I can double-check I am doing it correctly, but the Rotation is what I really need.
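
    For reference, the standard parent/child relations are sketched below in Unity C#; this ignores the shear a rotated, non-uniformly scaled parent can introduce, so treat it as a sketch to verify against Unity's numbers rather than Unity's exact internal code.

    // World-space transforms in, parent-relative (local) transform out.
    using UnityEngine;

    public static class Reparent
    {
        public static void WorldToLocal(
            Vector3 parentPos, Quaternion parentRot, Vector3 parentScale,
            Vector3 childPos,  Quaternion childRot,  Vector3 childScale,
            out Vector3 localPos, out Quaternion localRot, out Vector3 localScale)
        {
            Quaternion invParentRot = Quaternion.Inverse(parentRot);

            // Undo the parent's rotation, then its scale, to express the offset in parent space.
            Vector3 offset = invParentRot * (childPos - parentPos);
            localPos = new Vector3(offset.x / parentScale.x,
                                   offset.y / parentScale.y,
                                   offset.z / parentScale.z);

            // Rotations compose by quaternion multiplication, so the local part is inverse(parent) * child.
            localRot = invParentRot * childRot;

            // Componentwise ratio; only exact when the parent's scale is uniform or the axes line up.
            localScale = new Vector3(childScale.x / parentScale.x,
                                     childScale.y / parentScale.y,
                                     childScale.z / parentScale.z);
        }
    }

    The Euler angles an editor displays for the child are just localRot converted back to Euler angles, which is why they look unrelated to a simple subtraction of the two rotations.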

    Read the article

  • Detect if square in grid is within a diamond shape

    - by myrkos
    So I have a game in which basically everything is a square inside a big grid. It's easy to check if a square is inside a box whose center is another square:

    ***   x
    *o*        --> x is not in o's square
    ***

    **x
    *o*        --> x IS in o's square
    ***

    This can be done by simply subtracting the coordinates of o and x, then taking the largest coordinate of that and comparing it with the half side length. Now I want to do the same thing but check if x is in o's diamond, like so:

      *
     **x
    **o**      --> x IS in o's diamond
     ***
      *

    What would be the best way to check if a square is in another square's surrounding diamond-shaped area, given the diamond's half width/height?
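
    For what it's worth, a diamond is just the box rotated 45 degrees, so the box test's "largest coordinate" (Chebyshev distance) becomes the sum of the coordinates (Manhattan distance). A minimal sketch:

    // A square (x, y) lies inside the diamond centred on (cx, cy) exactly when
    // |dx| + |dy| does not exceed the diamond's half width.
    static class DiamondTest
    {
        public static bool InDiamond(int x, int y, int cx, int cy, int halfWidth)
        {
            return System.Math.Abs(x - cx) + System.Math.Abs(y - cy) <= halfWidth;
        }
    }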

    Read the article

  • How can I plot a radius of all reachable points with pathfinding for a Mob?

    - by PugWrath
    I am designing a tactical turn-based game. The maps are 2D but do have varying level layers and blocking objects/terrain. I'm looking for a pathfinding algorithm which will allow me to show an opaque shape representing all of the possible max-distance pixels that a mob can move to, knowing the mob's max pixel distance. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points in any direction in which an obstacle exists?
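
    One common approach, sketched here under the assumption that movement can be discretised into grid cells with per-cell costs: a uniform-cost flood fill ("Dijkstra without a goal") that stops expanding once the movement budget is spent. Every cell it reaches is reachable, and the outline of that set is the range shape to draw. The sketch uses .NET 6's PriorityQueue.

    using System.Collections.Generic;

    public static class MoveRange
    {
        // stepCost should return float.PositiveInfinity for blocked cells.
        public static HashSet<(int x, int y)> Reachable(
            (int x, int y) start, float budget,
            System.Func<int, int, float> stepCost)
        {
            var best = new Dictionary<(int, int), float> { [start] = 0f };
            var open = new PriorityQueue<(int x, int y), float>();
            open.Enqueue(start, 0f);

            while (open.TryDequeue(out var cell, out float cost))
            {
                if (cost > best[cell]) continue;                 // stale entry
                foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                {
                    var next = (cell.x + dx, cell.y + dy);
                    float c = cost + stepCost(next.Item1, next.Item2);
                    if (c <= budget && (!best.TryGetValue(next, out float old) || c < old))
                    {
                        best[next] = c;
                        open.Enqueue(next, c);
                    }
                }
            }
            return new HashSet<(int, int)>(best.Keys);
        }
    }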

    Read the article

  • Avoid double compression of resources

    - by user1095108
    I am using .pngs for my textures and a virtual file system in a .zip file for my game project. This means my textures are compressed and decompressed twice. What are the solutions to this double compression problem? One solution I've heard about is to use .tgas for textures, but it seems ages have passed since I heard that. Another solution is to implement decompression on the GPU and, since that is fast, forget about the overhead.
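
    A third option worth considering: keep the .png compression but add the file to the archive with the zip "store" method (no deflate), so it is only compressed once and reads back with a plain copy. A sketch using .NET's System.IO.Compression; a custom virtual file system would expose the same choice under whatever name it uses.

    using System.IO.Compression;

    class PackBuilder
    {
        static void AddTexture(ZipArchive archive, string pathOnDisk, string nameInPack)
        {
            // CompressionLevel.NoCompression = the zip "stored" method: no second deflate pass.
            archive.CreateEntryFromFile(pathOnDisk, nameInPack, CompressionLevel.NoCompression);
        }
    }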

    Read the article

  • Change player's state and controls in-game

    - by Samurai Fox
    I'm using Unity 3D. Let's say the player is an ice cube; you control it like a normal player. On press of a button, the ice transforms (with an animation) into water, which you control completely differently from the ice cube. Another good example: the player is a human being with normal FPS controls; on press of a button the human transforms into a bird and now has completely different controls. Now, my question is, what would be easier and better: make one object with an animation transition and stay in that animation state until the button is pressed again, or make two objects, ice and water, where the ice has an animation of turning into water, so the ice object (with its animation) is replaced by the water object? And if anyone knows this one too: how do I switch between two different types of player controls?
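
    A sketch of the single-object route in Unity C#, with all names being illustrative assumptions: one enum for the current form, per-form control code selected in Update, and the visuals swapped on transition. The same switch answers the "two control schemes" part.

    using UnityEngine;

    public class PlayerForm : MonoBehaviour
    {
        enum Form { Ice, Water }
        Form current = Form.Ice;

        [SerializeField] GameObject iceVisual;    // assumed child holding the ice mesh/animator
        [SerializeField] GameObject waterVisual;  // assumed child holding the water mesh/animator

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.T))      // transform key: swap form and visuals
            {
                current = current == Form.Ice ? Form.Water : Form.Ice;
                iceVisual.SetActive(current == Form.Ice);
                waterVisual.SetActive(current == Form.Water);
            }

            switch (current)                      // completely separate control schemes per form
            {
                case Form.Ice:   IceControls();   break;
                case Form.Water: WaterControls(); break;
            }
        }

        void IceControls()   { /* sliding, momentum-style movement */ }
        void WaterControls() { /* flowing movement, different input mapping */ }
    }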

    Read the article

  • Why does clip space in OpenGL have 4 dimensions?

    - by user827992
    I will use this as a generic reference, but the more I browse online docs and books, the less I understand about this.

    const float vertexPositions[] = {
        0.75f, 0.75f, 0.0f, 1.0f,
        0.75f, -0.75f, 0.0f, 1.0f,
        -0.75f, -0.75f, 0.0f, 1.0f,
    };

    In this online book there is an example of how to draw the first and classic hello world for OpenGL: a triangle. The vertex structure for the triangle is declared as stated in the code above. The book, like all the other sources on this, stresses the point that clip space is a 4D structure that is used to decide what will be rasterized and rendered to the screen. Here are my questions: I can't imagine something in 4D; I don't think a human can do that. What is the 4th dimension of this clip space? The most human-readable doc I have read speaks about a camera, which is just an abstraction over the clipping concept, and I get that. The problem is: why not use the concept of a camera in the first place, which is a more familiar 3D structure? The only problem with the concept of a camera is that you need to define the perspective another way, so you basically have to add another statement about what kind of camera you wish to have. How am I supposed to read 0.75f, 0.75f, 0.0f, 1.0f? All I get is that they are all float values, and I get the meaning of the first three values; what does the last one mean?
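
    The fourth value is the homogeneous w coordinate. A perspective projection writes the view-space depth into w, and after the vertex shader the GPU divides x, y and z by w (the "perspective divide"), which is what makes distant things smaller; w = 1.0f just means "an ordinary point, nothing to divide yet". A tiny sketch of that divide, in C# only to keep one language in these notes:

    static class ClipSpaceDemo
    {
        struct Vec4 { public float x, y, z, w; }

        // clip space -> normalized device coordinates: the GPU performs this divide for you
        static (float x, float y, float z) ToNdc(Vec4 clip)
            => (clip.x / clip.w, clip.y / clip.w, clip.z / clip.w);
    }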

    Read the article

  • How should I sort images in an isometric game so that they appear in the correct order?

    - by Andrew
    Hi! This seems like a rather simple problem, but I am having a lot of difficulty with it. What should I do to properly sort images in an isometric game? In a normal 2D top-down game one could use the screen y axis to sort the images. In this example the trees are properly sorted but the isometric walls are not. Example image: sorted by screen y. Wall2 is one pixel below wall1, therefore it is drawn after wall1. If I sort by the isometric y axis, the walls appear in the correct order but the trees do not. Example image: sorted by isometric y.
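
    A common fix is to sort by map-space depth rather than either screen y or isometric y alone: for a standard isometric projection, an object anchored at map cell (mapX, mapY) draws in order of mapX + mapY, with a tie-breaker for things sharing a cell. A sketch, with illustrative names:

    using System.Collections.Generic;

    class Drawable
    {
        public int MapX, MapY;   // the cell the object's base/anchor sits on
        public int Layer;        // tie-breaker for things on the same cell (floor < wall < tree)
    }

    static class IsoSort
    {
        public static void SortForDrawing(List<Drawable> items)
        {
            items.Sort((a, b) =>
            {
                int depthA = a.MapX + a.MapY;   // smaller sum = further "into" the screen = drawn first
                int depthB = b.MapX + b.MapY;
                return depthA != depthB ? depthA.CompareTo(depthB) : a.Layer.CompareTo(b.Layer);
            });
        }
    }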

    Read the article

  • Build a view frustum from angles

    - by MulletDevil
    I have four angles: left, right, top and bottom. These angles are in degrees, and they define the angle between the forward vector and the corresponding frustum side. I am trying to use these to calculate the required values for the PerspectiveOffCenter function found here: http://docs.unity3d.com/Documentation/ScriptReference/Camera-projectionMatrix.html. I tried doing (near plane - far plane) * Tan(angle), but that didn't give the correct results.
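
    The likely fix: the left/right/bottom/top values that PerspectiveOffCenter-style functions expect are measured on the near plane, so each one is near * tan(angle) (with the left and bottom sides negated), not (near - far) * tan(angle). A sketch:

    using System;

    static class FrustumFromAngles
    {
        // Angles are the angles between the forward vector and each frustum side, in degrees.
        public static (float left, float right, float bottom, float top) NearPlaneExtents(
            float near, float leftDeg, float rightDeg, float bottomDeg, float topDeg)
        {
            const double DegToRad = Math.PI / 180.0;
            float left   = -near * (float)Math.Tan(leftDeg   * DegToRad);
            float right  =  near * (float)Math.Tan(rightDeg  * DegToRad);
            float bottom = -near * (float)Math.Tan(bottomDeg * DegToRad);
            float top    =  near * (float)Math.Tan(topDeg    * DegToRad);
            return (left, right, bottom, top);
        }
    }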

    Read the article

  • What game engines are there for cars on roads?

    - by David Thielen
    What game engines are there that support laying out a map of roads and handle vehicle movement on those roads? Something similar to the basic functionality in Transport Tycoon/Locomotion. I don't care about looks (although prettier is better), and top-down or isometric is fine. I just need a simple way to create maps and move cars on them, and preferably the cars take time to speed up and slow down as they go from stopped to full speed. I prefer Windows (any API on Windows). I also prefer a free engine, as this is just for internal use. I have found CarDriving 2D - does anyone know if it works well?

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
    I am creating a Minecraft-like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other people's collision code and I still have the same problem. It always seems to be off by about a block. For instance, if I walk across a bridge which is one block high, I fall through it. Also, if you walk towards a "row" of blocks like this: you are able to stand "inside" the leftmost one, and you collide with nothing on the rightmost side (where there is no block and it is not visible in this image). Here is all my collision code:

    private void Move(GameTime gameTime, Vector3 direction)
    {
        float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
        Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation);
        Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix);
        rotatedVector.Normalize();
        Vector3 testVector = rotatedVector;
        testVector.Normalize();
        Vector3 movePosition = player.position + testVector * speed;
        Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0);
        Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0);
        if (!world.GetBlock(movePosition).IsSolid &&
            !world.GetBlock(midBodyPoint).IsSolid &&
            !world.GetBlock(headPosition).IsSolid)
        {
            player.position += rotatedVector * speed;
        }
        //player.position += rotatedVector * speed;
    }

    ...

    public void UpdatePosition(GameTime gameTime)
    {
        player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f);
        Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f);
        // If the block below the player is solid the Y velocity should be zero
        if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid)
        {
            player.velocity.Y = 0;
        }
        UpdateJump(gameTime);
        UpdateCounter(gameTime);
        ProcessInput(gameTime);
        player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        velocity = Vector3.Zero;
    }

    and the one and only function in the camera class:

    protected void CalculateView()
    {
        Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation);
        lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix);
        cameraFinalTarget = Position + lookVector;
        Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix);
        viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector);
    }

    which is called when the rotation variables are changed:

    public float LeftRightRotation
    {
        get { return leftRightRotation; }
        set { leftRightRotation = value; CalculateView(); }
    }

    public float UpDownRotation
    {
        get { return upDownRotation; }
        set { upDownRotation = value; CalculateView(); }
    }

    World class:

    public Block GetBlock(int x, int y, int z)
    {
        if (InBounds(x, y, z))
        {
            Vector3i regionalPosition = GetRegionalPosition(x, y, z);
            Vector3i region = GetRegionPosition(x, y, z);
            return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z];
        }
        return new Block(BlockType.none);
    }

    public Vector3i GetRegionPosition(int x, int y, int z)
    {
        int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
        int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
        int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
        return new Vector3i(regionx, regiony, regionz);
    }

    public Vector3i GetRegionalPosition(int x, int y, int z)
    {
        int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
        int X = x % Variables.REGION_SIZE_X;
        int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
        int Y = y % Variables.REGION_SIZE_Y;
        int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
        int Z = z % Variables.REGION_SIZE_Z;
        return new Vector3i(X, Y, Z);
    }

    Any ideas how to fix this problem? EDIT 1: Graphic of the problem: EDIT 2: GetBlock, Vector3 version:

    public Block GetBlock(Vector3 position)
    {
        int x = (int)Math.Floor(position.X);
        int y = (int)Math.Floor(position.Y);
        int z = (int)Math.Ceiling(position.Z);
        Block block = GetBlock(x, y, z);
        return block;
    }

    Now, the thing is, I tested the theory that the Z is always "off by one" and by ceiling the value it actually works as intended. Although it could still be much more accurate (when you go down holes you can see through the sides, and I doubt it will work with negative positions). It also does not feel clean to floor the X and Y values and just ceil the Z. I am surely still not doing something correctly.

    Read the article

  • Implementing `fling` logic without pan gesture recognizers

    - by KDiTraglia
    So I am trying to port a simple game that I originally wrote for iPhone over to cocos2d-x. I've hit a minor bump, however, in implementing the simple 'fling' logic I had in the iPhone version, which is difficult to port over to C++. In iOS I could get the velocity of a pan gesture very easily: CGPoint velocity = [recognizer velocityInView:recognizer.view]; However, now I basically only know where the touch began, where the touch ended, and all the touches that are logged in between. For now I log all the points onto a stack, then pull the last point and the 6th-to-last point (that seemed to work the best), find the difference between those points, multiply by a constant and use that as the velocity. It works relatively well, but I'm wondering if anyone else has a better algorithm which, given a bunch of touch points, figures out a new speed upon releasing an object that feels natural. (Note: speed in my game is just a constant x and y; there's no drag or spin or anything tricky like that.) Bonus points if anyone has figured out how to get pan gestures into the newest version (3.0 alpha) of cocos2d-x without losing the ability to build cross-platform.
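
    One refinement over "last point minus 6th-to-last times a constant" is to keep timestamps with the samples and compute displacement divided by elapsed time over only the last ~100 ms of motion, which is roughly what velocityInView reports. The idea is language-agnostic; a sketch in C# with illustrative names:

    using System.Collections.Generic;

    class FlingTracker
    {
        readonly List<(float x, float y, double t)> samples = new List<(float, float, double)>();

        public void AddTouch(float x, float y, double timeSeconds) => samples.Add((x, y, timeSeconds));

        public (float vx, float vy) ReleaseVelocity(double window = 0.1)
        {
            if (samples.Count < 2) return (0f, 0f);

            var last = samples[samples.Count - 1];
            // Walk back to the oldest sample still inside the time window.
            int i = samples.Count - 2;
            while (i > 0 && last.t - samples[i - 1].t <= window) i--;
            var first = samples[i];

            double dt = last.t - first.t;
            if (dt <= 0) return (0f, 0f);
            return ((float)((last.x - first.x) / dt), (float)((last.y - first.y) / dt));
        }
    }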

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot:

    int maxBullets = 40;
    static int bullet = 0;
    Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way; I would appreciate your advice:

    public void shoot() throws SlickException {
        if (bullet < maxBullets) {
            if (missile[bullet] != null) {
                missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
            } else {
                missile[bullet] = new Missile("resources/missile.png", plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
            }
        } else {
            bullet = 0;
            missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
        }
        bullet++;
    }

    I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I have this to move all the missiles:

    if (bullet > 0 && bullet < maxBullets) {
        float hyp = 0.4f * delta;
        if (bullet == 1) {
            missile[0].move(hyp);
        } else {
            for (int x = 0; x < bullet; x++) {
                missile[x].move(hyp);
            }
        }
    }

    Read the article

  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw to the screen. Its class is this:

    public class OpenGLSquare extends Actor {
        private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
        private static Matrix4 matrix = null;
        private static Vector2 temp = new Vector2();

        public static void setMatrix4(Matrix4 mat) {
            matrix = mat;
        }

        @Override
        public void draw(SpriteBatch batch, float arg1) {
            renderer.begin(matrix, GL10.GL_TRIANGLES);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y0, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y1, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x1, y0, 0f);
            renderer.color(color.r, color.g, color.b, color.a);
            renderer.vertex(x0, y0, 0f);
            renderer.end();
        }
    }

    In my screen class I have this; I call it in the constructor:

    MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
    OpenGLSquare square = new OpenGLSquare();
    square.setX0(100);
    square.setY0(200);
    square.setX1(400);
    square.setY1(280);
    square.color.set(Color.BLUE);
    square.setSize();
    //spriteLab.addActorAt(0, clock);
    spriteLab.addActor(square);
    stage.addActor(spriteLab);

    And in the screen's render method I have:

    @Override
    public void render(float arg0) {
        this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        stage.draw();
        stage.act(Gdx.graphics.getDeltaTime());
    }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at its exact position on the screen rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(), which fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong? :/

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the anti-aliasing; no matter what I do, my graphics appear jaggedy. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whitish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

    spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small > in the image) or zero (the orb itself).)
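
    A guess at the usual cause of white fringes on partially transparent pixels: the texture's alpha convention and the SpriteBatch blend state don't match. The XNA/MonoGame content pipeline premultiplies PNG alpha and expects the default BlendState.AlphaBlend, while textures loaded raw (e.g. Texture2D.FromStream) are straight alpha and want BlendState.NonPremultiplied. Trying the opposite blend state is a quick way to confirm which case applies; a sketch extending the call from the question, where zoomMatrix stands in for the 2X matrix already in use:

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied,
                      SamplerState.PointClamp,           // PointClamp also avoids filtering bleed at 2X zoom
                      null, null, null, zoomMatrix);

    spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle,
                     this.originVector, 1f, SpriteEffects.None, 0f);

    spriteBatch.End();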

    Read the article

  • GLSL Normals not transforming properly

    - by instancedName
    I've been stuck on this problem for two days. I've read many articles about transforming normals, but I'm just totally stuck. I understand chopping off the W component to "turn off" translation, and doing the inverse/transpose transformation for the non-uniform scaling problem, but my bug seems to come from a different source. So, I've imported a simple ball into OpenGL. The only transformation I'm applying is rotation over time. But when my ball rotates, the illuminated part of the ball moves around just as it would if the light direction were changing. I just can't figure out what the problem is. Can anyone help me with this? Here's the GLSL code. Vertex shader:

    #version 440 core

    uniform mat4 World, View, Projection;

    layout(location = 0) in vec3 VertexPosition;
    layout(location = 1) in vec3 VertexColor;
    layout(location = 2) in vec3 VertexNormal;

    out vec4 Color;
    out vec3 Normal;

    void main()
    {
        Color = vec4(VertexColor, 1.0);
        vec4 n = World * vec4(VertexNormal, 0.0f);
        Normal = n.xyz;
        gl_Position = Projection * View * World * vec4(VertexPosition, 1.0);
    }

    Fragment shader:

    #version 440 core

    uniform vec3 LightDirection = vec3(0.0, 0.0, -1.0);
    uniform vec3 LightColor = vec3(1f);

    in vec4 Color;
    in vec3 Normal;

    out vec4 FragColor;

    void main()
    {
        float diffuse = max(0.0, dot(normalize(-LightDirection), normalize(Normal)));
        vec4 scatteredLight = vec4(LightColor * diffuse, 1.0f);
        FragColor = min(Color * scatteredLight, vec4(1.0));
    }

    Read the article

  • Using two joysticks in Cocos2D

    - by Blade
    Here is what I am trying to do: with the left joystick the player can steer the figure, and with the right joystick it can attack. The problem is that the left joystick seems to get all the input; the right one does not even register anything. I enabled multiple touch on the EAGLView and have gone thoroughly over the code, but I seem to be missing something. I initialize both sticks and it shows me both of them in game, but like I said, only the left one works. From the .h file:

    SneakyJoystick *joystick;
    SneakyJoystick *joystickRight;

    In the .m file I synthesize, deallocate and initialize them. In order to use one for controlling and the other for attacking I have this:

    -(void)updateStateWithDeltaTime:(ccTime)deltaTime andListOfGameObjects:(CCArray*)listOfGameObjects {
        if ((self.characterState == kStateIdle) || (self.characterState == kStateWalkingBack) ||
            (self.characterState == kStateWalkingLeft) || (self.characterState == kStateWalkingRight) ||
            (self.characterState == kStateWalkingFront) || (self.characterState == kStateAttackingFront) ||
            (self.characterState == kStateAttackingBack) || (self.characterState == kStateAttackingRight) ||
            (self.characterState == kStateAttackingLeft)) {

            if (joystick.degrees > 60 && joystick.degrees < 120) {
                if (self.characterState != kStateWalkingBack) [self changeState:kStateWalkingBack];
            } else if (joystick.degrees > 1 && joystick.degrees < 59) {
                if (self.characterState != kStateWalkingRight) [self changeState:kStateWalkingRight];
            } else if (joystick.degrees > 211 && joystick.degrees < 300) {
                if (self.characterState != kStateWalkingFront) [self changeState:kStateWalkingFront];
            } else if (joystick.degrees > 301 && joystick.degrees < 360) {
                if (self.characterState != kStateWalkingRight) [self changeState:kStateWalkingRight];
            } else if (joystick.degrees > 121 && joystick.degrees < 210) {
                if (self.characterState != kStateWalkingLeft) [self changeState:kStateWalkingLeft];
            }

            if (joystickRight.degrees > 60 && joystickRight.degrees < 120) {
                if (self.characterState != kStateAttackingBack) [self changeState:kStateAttackingBack];
            } else if (joystickRight.degrees > 1 && joystickRight.degrees < 59) {
                if (self.characterState != kStateAttackingRight) [self changeState:kStateAttackingRight];
            } else if (joystickRight.degrees > 211 && joystickRight.degrees < 300) {
                if (self.characterState != kStateAttackingFront) [self changeState:kStateAttackingFront];
            } else if (joystickRight.degrees > 301 && joystickRight.degrees < 360) {
                if (self.characterState != kStateAttackingRight) [self changeState:kStateAttackingRight];
            } else if (joystickRight.degrees > 121 && joystickRight.degrees < 210) {
                if (self.characterState != kStateAttackingLeft) [self changeState:kStateAttackingLeft];
            }

            [self applyJoystick:joystick forTimeDelta:deltaTime];
            [self applyJoystick:joystickRight forTimeDelta:deltaTime];
        }
    }

    Maybe it has something to do with applying them both to the time delta? I tried working around it, but it did not work. So I am thankful for any input you guys can give me :)

    Read the article

  • Quaternion based rotation and pivot position

    - by Michael IV
    I can't figure out how to perform a matrix rotation using a quaternion while taking the pivot position into account in OpenGL. What I am currently getting is rotation of the object around some point in space rather than the local pivot, which is what I want. Here is the code [using Java]. Quaternion rotation method:

    public void rotateTo3(float xr, float yr, float zr) {
        _rotation.x = xr;
        _rotation.y = yr;
        _rotation.z = zr;
        Quaternion xrotQ = Glm.angleAxis((xr), Vec3.X_AXIS);
        Quaternion yrotQ = Glm.angleAxis((yr), Vec3.Y_AXIS);
        Quaternion zrotQ = Glm.angleAxis((zr), Vec3.Z_AXIS);
        xrotQ = Glm.normalize(xrotQ);
        yrotQ = Glm.normalize(yrotQ);
        zrotQ = Glm.normalize(zrotQ);
        Quaternion acumQuat;
        acumQuat = Quaternion.mul(xrotQ, yrotQ);
        acumQuat = Quaternion.mul(acumQuat, zrotQ);
        Mat4 rotMat = Glm.matCast(acumQuat);
        _model = new Mat4(1);
        scaleTo(_scaleX, _scaleY, _scaleZ);
        _model = Glm.translate(_model, new Vec3(_pivot.x, _pivot.y, 0));
        _model = rotMat.mul(_model); //_model.mul(rotMat); //rotMat.mul(_model);
        _model = Glm.translate(_model, new Vec3(-_pivot.x, -_pivot.y, 0));
        translateTo(_x, _y, _z);
        notifyTranformChange();
    }

    Model matrix scale method:

    public void scaleTo(float x, float y, float z) {
        _model.set(0, x);
        _model.set(5, y);
        _model.set(10, z);
        _scaleX = x;
        _scaleY = y;
        _scaleZ = z;
        notifyTranformChange();
    }

    Translate method:

    public void translateTo(float x, float y, float z) {
        _x = x - _pivot.x;
        _y = y - _pivot.y;
        _z = z;
        _position.x = _x;
        _position.y = _y;
        _position.z = _z;
        _model.set(12, _x);
        _model.set(13, _y);
        _model.set(14, _z);
        notifyTranformChange();
    }

    But this method, in which I don't use a quaternion, works fine:

    public void rotate(Vec3 axis, float angleDegr) {
        _rotation.add(axis.scale(angleDegr));
        // change to GLM:
        Mat4 backTr = new Mat4(1.0f);
        backTr = Glm.translate(backTr, new Vec3(_pivot.x, _pivot.y, 0));
        backTr = Glm.rotate(backTr, angleDegr, axis);
        backTr = Glm.translate(backTr, new Vec3(-_pivot.x, -_pivot.y, 0));
        _model = _model.mul(backTr); ///backTr.mul(_model);
        notifyTranformChange();
    }

    Read the article

  • Has a multiplayer graphic adventure* ever been made?

    - by Petruza
    By graphic adventure, I mean point & click LucasArts-type games. Those games are mostly linear in structure and usually don't offer as many variations as other game types like action, RPG or strategy, which makes it difficult to implement a multiplayer feature in this genre. I'd like to know if there have been any attempts at doing such a thing, and whether it would be viable, as players going offline or leaving a game in the middle would significantly affect the other players' game.

    Read the article

  • Render angles of a 3D model into 2D images?

    - by Ricket
    Is there a tool out there that you can give a 3D model file, and it will output 2D renders of it from various angles? For example if you were making a 2D RPG but you want to make your character look nice, you might make the character in 3D and then just render the character from 8 or more angles into images which then are used by the 2D engine to give a pseudo-3D look. Does such a tool exist or will it need to be custom-written or done manually?

    Read the article

  • Structuring game world entities and their rendering objects

    - by keithjgrant
    I'm putting together a simple 2D tile-based game. I find myself spinning in circles on some design decisions, and I think I'm in danger of over-engineering. After all, the game is simple enough that I had a working prototype inside of four hours with fewer than ten classes; it just wasn't scalable or flexible enough for a polished game. My question is about how to structure the flow of control between game entity objects and their rendering objects. Should each renderer have a reference to its entity, or vice versa? Or both? Should the entity be in control of calling the render() method, or be completely oblivious? I know there are several valid approaches here, but I'm feeling a kind of decision paralysis. What are the pros and cons of each approach?
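
    One common arrangement, sketched below with illustrative names: the renderer holds a read-only reference to its entity and the main loop drives drawing, so entities stay completely oblivious to rendering. The main trade-off is that the loop (or a render system) must own the renderer list, but entities remain pure game state and are easy to test and to serialize.

    using System.Collections.Generic;

    class Entity { public float X, Y; /* game state only, no draw code */ }

    interface IEntityRenderer { void Render(); }

    class TileRenderer : IEntityRenderer
    {
        readonly Entity entity;                 // renderer -> entity, never the other way round
        public TileRenderer(Entity e) { entity = e; }
        public void Render() { /* draw a sprite at entity.X, entity.Y */ }
    }

    class GameLoop
    {
        readonly List<IEntityRenderer> renderers = new List<IEntityRenderer>();
        public void Draw() { foreach (var r in renderers) r.Render(); }
    }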

    Read the article

  • OpenGL camera setup for zoom in/out centered at the point under the cursor

    - by user3228921
    I am trying to implement a zoom in/out navigation mode in an OpenGL 3D viewer. I was able to implement zoom centered on the screen center just by moving the eye towards the center in perspective mode. Now I am trying to do the zoom centered at an arbitrary position under the cursor. I am unable to figure out how I should move my camera forward and backward such that the point under the cursor remains at the same screen coordinates after zooming in/out. Any help would be appreciated. Below are the images which show the desired effect. Just to mention, I am working in perspective mode with eye, target and up vectors to control the camera. I found the same effect in Google SketchUp and the 'zoom to mouse position' setting in Blender.
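
    The usual perspective trick: unproject the cursor to get a ray from the eye, intersect it with the scene (or a focus plane) to get a world point P, then move both eye and target a fraction of the way along the eye-to-P line. Because the eye slides along that line, P keeps its screen position. A sketch, assuming the unproject/pick step already exists in the viewer:

    using System.Numerics;

    static class ZoomToCursor
    {
        public static void Zoom(ref Vector3 eye, ref Vector3 target, Vector3 pointUnderCursor, float wheelSteps)
        {
            // Positive wheelSteps zooms in ~10% of the remaining distance per step; negative zooms out.
            float factor = 1f - (float)System.Math.Pow(0.9, wheelSteps);
            Vector3 move = (pointUnderCursor - eye) * factor;
            eye += move;        // slide along the eye->P line so P keeps its screen position
            target += move;     // keep the view direction unchanged
        }
    }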

    Read the article
