Search Results

Search found 38203 results on 1529 pages for 'library development'.

Page 545/1529

  • How can I attach a model to the bone of another model?

    - by kaykayman
    I am trying to attach one animated model to one of the bones of another animated model in an XNA game. I've found a few questions/forum posts/articles online which explain how to attach a weapon model to the bone of another model (which is analogous to what I'm trying to achieve), but they don't seem to work for me. So as an example: I want to attach Model A to a specific bone in Model B. Question 1. As I understand it, I need to calculate the transforms which are applied to the bone on Model B and apply these same transforms to every bone in Model A. Is this right? Question 2. This is my code for calculating the Transforms on a specific bone. private Matrix GetTransformPaths(ModelBone bone) { Matrix result = Matrix.Identity; while (bone != null) { result = result * bone.Transform; bone = bone.Parent; } return result; } The maths of Matrices is almost entirely lost on me, but my understanding is that the above will work its way up the bone structure to the root bone and my end result will be the transform of the original bone relative to the model. Is this right? Question 3. Assuming that this is correct I then expect that I should either apply this to each bone in Model A, or in my Draw() method: private void DrawModel(SceneModel model, GameTime gametime) { foreach (var component in model.Components) { Matrix[] transforms = new Matrix[component.Model.Bones.Count]; component.Model.CopyAbsoluteBoneTransformsTo(transforms); Matrix parenttransform = Matrix.Identity; if (!string.IsNullOrEmpty(component.ParentBone)) parenttransform = GetTransformPaths(model.GetBone(component.ParentBone)); component.Player.Update(gametime.ElapsedGameTime, true, Matrix.Identity); Matrix[] bones = component.Player.GetSkinTransforms(); foreach (SkinnedEffect effect in mesh.Effects) { effect.SetBoneTransforms(bones); effect.EnableDefaultLighting(); effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(MathHelper.ToRadians(model.Angle)) * Matrix.CreateTranslation(model.Position) * parenttransform; effect.View = getView(); effect.Projection = getProjection(); effect.Alpha = model.Opacity; } } mesh.Draw(); } I feel as though I have tried every conceivable way of incorporating the parenttransform value into the draw method. The above is my most recent attempt. Is what I'm trying to do correct? And if so, is there a reason it doesn't work? The above Draw method seems to transpose the models x/z position - but even at these wrong positions, they do not account for the animation of Model B at all. Note: As will be evident from the code my "model" is comprised of a list of "components". It is these "components" that correspond to a single "Microsoft.Xna.Framework.Graphics.Model"
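
    A minimal sketch of the usual approach, assuming component.Player is the Skinned Model sample's AnimationPlayer and attachBoneIndex is the index of the attachment bone in Model B's skeleton (both are placeholder names, as is transformsA, which stands for Model A's CopyAbsoluteBoneTransformsTo array). The key point is that the animated pose of the bone comes from the animation player's world transforms rather than from the static ModelBone.Transform chain, and that this pose is combined with Model B's own world matrix:

      // Animated pose of the attachment bone, in Model B's model space, for this frame.
      Matrix[] boneWorld = component.Player.GetWorldTransforms();
      Matrix attachBone = boneWorld[attachBoneIndex];

      // Model B's world matrix, built the same way it is used when drawing Model B.
      Matrix modelBWorld = Matrix.CreateRotationY(MathHelper.ToRadians(model.Angle))
                         * Matrix.CreateTranslation(model.Position);

      // World matrix for each mesh of Model A: its own relative transform,
      // then the animated bone pose, then Model B's world matrix.
      effect.World = transformsA[mesh.ParentBone.Index] * attachBone * modelBWorld;

    With this ordering Model A follows the bone through the animation. A GetTransformPaths walk over ModelBone.Transform only reproduces the bind pose unless the animation writes into those bones, which would explain why the attachment ignores Model B's animation.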

    Read the article

  • What is wrong with my specular Phong shading?

    - by Thijser
    I'm sorry if this should be placed on Stack Overflow instead, but seeing as this is graphics related I was hoping you could help me. I'm attempting to write a Phong shader and am currently working on the specular term. I came across the following formula: base*pow(dot(V,R),shininess) and attempted to implement it (V is the position of the viewer and R the reflection vector). This gave the following result and code: Vec3Df phongSpecular(const Vec3Df & vertexPos, Vec3Df & normal, const Vec3Df & lightPos, const Vec3Df & cameraPos, unsigned int index) { Vec3Df relativeLightPos=(lightPos-vertexPos); relativeLightPos.normalize(); Vec3Df relativeCameraPos= (cameraPos-vertexPos); relativeCameraPos.normalize(); int DotOfNormalAndLight = Vec3Df::dotProduct(normal,relativeLightPos); Vec3Df reflective =(relativeLightPos-(2*DotOfNormalAndLight*normal))*-1; reflective.normalize(); float phongyness= Vec3Df::dotProduct(reflective,relativeCameraPos); if (phongyness<0){ phongyness=0; } float shininess= Shininess[index]; float speculair = powf(phongyness,shininess); return Ks[index]*speculair; } I'm looking for something more like this:
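
    For reference, the textbook Phong specular term, with all vectors normalized (this is the standard formulation and may differ from the poster's exact conventions):

      \hat{R} = 2(\hat{N}\cdot\hat{L})\,\hat{N} - \hat{L},
      \qquad
      I_{spec} = k_s \,\bigl(\max(\hat{R}\cdot\hat{V},\,0)\bigr)^{\alpha}

    One detail worth noting in the snippet above: N·L is stored in an int (DotOfNormalAndLight), which truncates a value that lies in [-1, 1] to -1, 0 or 1 before the reflection vector is built.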

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles Get me the set of players at position X,Y Get me the position of player with key K My current implementation is this: class CoordinateMap<V> { Map<Long,Set<V>> coords2value; Map<V,Long> value2coords; // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil // My map will NOT require more than 1 bil tiles from the origin :) private Long keyFor(int x, int y) { int kx = x + 1000000000; int ky = y + 1000000000; return (long)kx | (long)ky << 32; } // extract the x and y from the keys private int[] coordsFor(long k) { int x = (int)(k & 0xFFFFFFFF) - 1000000000; int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000; return new int[] { x,y }; } } From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!

    Read the article

  • Is there an open source sports manager project?

    - by massive
    For a long time I've tried to find an open source manager game, but without any luck. I'm looking for a suitable project to use as a reference for my own pet project. Features like a well-designed data model, tournament and fixture generation, and an understandable match simulation algorithm would be great bonuses. I'm especially interested in game projects like Hattrick and SI Games' Football Manager, although it is irrelevant what the particular sport is. The project should preferably be web-based, as Hattrick is. I've crawled through GitHub and SourceForge, but I found only a few sports simulation projects, and the projects I did find were either dead or didn't fulfil my wishes. Do you know of any open source manager game / fantasy sports game project, or at least any material that would be useful when building such a project?

    Read the article

  • Incorporating XNA into an existing project

    - by Boreal
    My game as-is is using IrrlichtLime, which I'm beginning to dislike because it hides a lot of implementation and makes adding your own implementation incredibly complex. I don't really need the scene manager for anything and the only animation I need is manual (i.e. transforming the bones programmatically). However, I've only ever used XNA in the past as a starting point with the templates. How would I take my current project and add XNA to it?

    Read the article

  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity in a similar fashion to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods to make all my XNA code work in Unity. This is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not sure that's the cheapest option performance-wise. I could have done that in XNA too, but I assume SpriteBatch exists for a reason.
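
    One low-friction option from that era of Unity is GUI.DrawTexture inside OnGUI, which is roughly analogous to a single SpriteBatch.Draw call. A rough sketch (convenient, but each call has per-frame CPU overhead, so textured quads or meshes are usually faster for many sprites):

      using UnityEngine;

      public class SpriteBlit : MonoBehaviour
      {
          public Texture2D texture;   // assigned in the inspector

          void OnGUI()
          {
              // Screen-space rectangle in pixels, similar in spirit to
              // SpriteBatch.Draw(texture, new Rectangle(x, y, w, h), Color.White).
              GUI.DrawTexture(new Rect(10, 10, texture.width, texture.height), texture);
          }
      }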

    Read the article

  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to make a normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1 in this image is a plane with two triangles and each of it is different illuminated (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors same for each vertice: normal vector = 0, 1, 0 (red lines on image) tangent vector = 0, 0,-1 (green lines on image) bitangent vector = -1, 0, 0 (blue lines on image) here I have one question: The two identical vertices does need to have the same tangent and bitangent? I have tried to make other values to the tangents but the effect was still similar. Here are my shaders Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Ouput data out vec3 color; // Values that stay constant for the whole mesh. 
uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value by EyeDirection_tangentspace vector and then I got other strange effect but I can not link the image (not eunogh reputation) Is it possible that with this shaders is something wrong, or maybe in other place in my code e.g with my matrices?

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this: Scene (e.g. MainMenu): Contains multiple entities and delegates update()/draw() to each Entity: Base class for all things in a scene (e.g. MenuItem or Alien) Sprite: Base class for all entities that just draw a texture, i.e. don't have their own drawing logic Does it make sense to split up entities and sprites up like that? I think in a 2D game, the terms entity and sprite are somewhat synonymous, right? But I do believe that I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication. Most entities are like that. One weird case is my Text class: It derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.
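
    One possible arrangement, sketched in C# with placeholder Texture and Renderer types (a sketch, not a prescription): keep Entity as the behavioural base class and treat Sprite as nothing more than "an entity whose Draw blits one texture at its position", so classes like Text only override how the texture is produced, not how it is drawn.

      using System.Collections.Generic;

      // Placeholder types so the sketch compiles; a real engine supplies these.
      class Texture { }
      class Renderer { public void Draw(Texture t, float x, float y) { /* blit */ } }

      abstract class Entity
      {
          public float X, Y;
          public virtual void Update(float dt) { }
          public abstract void Draw(Renderer r);
      }

      // A Sprite is just an Entity whose Draw is: blit one texture at my position.
      class Sprite : Entity
      {
          protected readonly Texture Texture;
          public Sprite(Texture texture) { Texture = texture; }
          public override void Draw(Renderer r) { r.Draw(Texture, X, Y); }
      }

      class Scene
      {
          readonly List<Entity> entities = new List<Entity>();
          public void Add(Entity e) { entities.Add(e); }
          public void Update(float dt) { foreach (var e in entities) e.Update(dt); }
          public void Draw(Renderer r) { foreach (var e in entities) e.Draw(r); }
      }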

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right-click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model: a cylinder with the top cap removed. I exported the UVs, then imported them into Photoshop and painted the inner area in yellow and the outer area in red, making sure to cover the UV lines well. I then saved the image and loaded it as a texture on the model in Blender (actually, I just reloaded it as the image the UVs were exported to and changed the viewport view mode to textured). When I look at the mesh up close, there's yellow everywhere and everything seems fine. However, if I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are, and the more I zoom out, the more I see it. The same thing happens in Unity, though the effect seems less pronounced: up close it is fine and yellow; zoom out and you see red at the seams. Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and then it's fine from all distances. However, this is an issue when trying to make a complex texture that should tile seamlessly at the edges. In that situation I either make a few lines of pixels overlap (in which case it looks bad from up close and OK from far away), or I leave them seamless and then I have those seams when viewed from far away. So my question is: is there something I'm missing, or some extra step I must take to have my texture look seamless from all distances?

    Read the article

  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system and I'm having massive trouble adjusting any of those solutions to my needs. What I learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
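
    For one common staggered layout (odd rows shifted right by half a tile, y increasing downwards, XNA-style Vector2/Point used for concreteness), the forward transform and a brute-force inverse look roughly like the sketch below. The poster's axes, defined by the missing image, may differ, so treat this as a template rather than a drop-in answer.

      // Assumes: using System; using Microsoft.Xna.Framework;
      // Map cell -> top-left corner of its bounding box on screen.
      static Vector2 MapToScreen(int mapX, int mapY, int tileW, int tileH)
      {
          float sx = mapX * tileW + (mapY & 1) * (tileW / 2f);
          float sy = mapY * (tileH / 2f);
          return new Vector2(sx, sy);
      }

      // Screen point -> map cell: rough guess, then test nearby tiles with the
      // "diamond distance" |dx|/(w/2) + |dy|/(h/2), which is <= 1 exactly inside a tile.
      static Point ScreenToMap(float px, float py, int tileW, int tileH)
      {
          int guessX = (int)Math.Floor(px / tileW);
          int guessY = (int)Math.Floor(py / (tileH / 2f));
          Point best = new Point(guessX, guessY);
          float bestD = float.MaxValue;
          for (int y = guessY - 2; y <= guessY + 2; y++)
              for (int x = guessX - 2; x <= guessX + 2; x++)
              {
                  Vector2 c = MapToScreen(x, y, tileW, tileH);
                  float dx = Math.Abs(px - (c.X + tileW / 2f)) / (tileW / 2f);
                  float dy = Math.Abs(py - (c.Y + tileH / 2f)) / (tileH / 2f);
                  if (dx + dy < bestD) { bestD = dx + dy; best = new Point(x, y); }
              }
          return best;
      }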

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side, and a freely movable "stick" (or bar, tube). This stick can be inserted/moved through the hole into the box. This hole is exactly as wide as the diameter of the stick. In reality, when you would now hold the end of the stick in your hand and move the hand left/right or up/down, the other end of the stick, which is inside the box, would move into the opposite direction of your hand movement (because the stick is affixed at the pivot point where it is entering the box through the hole). (I hope you understand what I mean so far.) Now I need to simulate such a setup in a 3d program. I have already successfully developed an Ogre3d framework for this application, including bullet. But what I don't know is how I can implement in my program what I have described above. This application must include two more features: The scene camera is attached to the end of the stick that is inserted into the box. So when the user would move the mouse (to control "his" end of the stick outside the box), then the camera attached to the stick would move in the opposite direction, as described above. The stick has some length, and the user can push it further into the box, or pull it closer to him again. That means of course that the max. radius on which the end of the stick inside the box can move depends on how far the stick is pushed into the box. Thus, the more the stick is pushed into the box, the larger the max. radius of this end of the stick with the camera will be. I understand this is maybe quite a complex thing, so I don't expect any real source code here. I already have the Ogre and bullet part as said up and running, as well as a camera attached to the stick. This works fine. What I don't know though is how I can simulate the setup described above. Especially the requirement that the stick is affixed at the position of the hole on the box, where it is inserted into the box. Any ideas how I could approach to implement the described setup?

    Read the article

  • Cocos2d copied actions not responding?

    - by Stephen
    I am running an animation on two sprites like so: -(void) startFootballAnimation { CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football" frameCount:60 delay:0.005f]; spiral = [CCAnimate actionWithAnimation:footballAnim]; CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral]; [self runAction:repeat]; [secondFootball runAction:[[repeat copy] autorelease]]; } The problem is that when I call this method: - (void) slowAnimation { [spiral setDuration:[spiral duration] + 0.01]; } it only slows down the first sprite's animation and not the second one. Do I need to do something different with copied actions to get them to react to the slowing of the animation?

    Read the article

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.) are there any good examples of web-based games built on completely open standards (meaning Javascript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only in Flash (like stuff here: http://www.html5rocks.com/) but not many games, a domain which still seem dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread, most efficient AABB vs Ray collision algorithms, I have seen very fast algorithms for computing the ray/AABB intersection point. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I am interested in getting both the distance from the ray source to the entry point, t0, and to the exit point of the bounding box, t1. I have seen, for instance, that Eisemann designed a very fast version compared against Plücker, Smits, etc., but it does not cover the case where both the entry and exit distances must be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does anyone know where I can find more information on algorithm performance for this specific entry/exit problem? Thank you in advance.
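
    For what it's worth, the classic slab test already yields both distances: t0 for the entry point and t1 for the exit point. A sketch (XNA-style Vector3 assumed; not tuned for speed the way the papers above are):

      // Returns false when the ray misses the box or the box lies entirely behind the origin.
      static bool RayAabb(Vector3 o, Vector3 d, Vector3 min, Vector3 max,
                          out float t0, out float t1)
      {
          t0 = float.NegativeInfinity;
          t1 = float.PositiveInfinity;
          for (int i = 0; i < 3; i++)
          {
              float oi = Component(o, i), di = Component(d, i);
              // Division by zero yields +/- infinity, which still clips correctly unless
              // the origin lies exactly on a slab plane (then a NaN check is needed).
              float tNear = (Component(min, i) - oi) / di;
              float tFar  = (Component(max, i) - oi) / di;
              if (tNear > tFar) { float tmp = tNear; tNear = tFar; tFar = tmp; }
              if (tNear > t0) t0 = tNear;
              if (tFar  < t1) t1 = tFar;
              if (t0 > t1) return false;   // slab intervals no longer overlap
          }
          return t1 >= 0f;                 // some part of the box is in front of the ray
      }

      static float Component(Vector3 v, int i)
      {
          return i == 0 ? v.X : (i == 1 ? v.Y : v.Z);
      }

    If the origin is inside the box, t0 comes out negative and t1 positive, so the caller can clamp t0 to zero.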

    Read the article

  • What's the difference between Canvas and WebGL?

    - by gadr90
    I'm thinking about using CAAT as part of an HTML5 game engine. One of its features is the ability to render to Canvas and WebGL without changing anything in the client code. That is a good thing, but I haven't found a precise answer: what are the differences between those two technologies? I would especially like to know how Canvas and WebGL compare in the following regards: framerate, desktop browser support, mobile browser support, and future-proofability (TM).

    Read the article

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
    I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start cloning some well known games like galaga, tetris, arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my arkanoid clone. But it is far from over. I can still work on it for months (I program as a hobby in my free time) implementing a screen resolution switcher, remap of the control keys, power-ups falling from broken bricks, and a huge etc. But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone in order to apply some design ideas I have come upon while developing this arkanoid clone (at the same time I am reading the GoF book and much source code from Ludum Dare 21 game contest). So the question is: Should I keep improving the arkanoid clone until it has all the features the original game had? or should I move to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrain the answers to the most effective way to learn how to make my own games (not cloning someone ideas). Thank you! CLARIFICATION In order to clarify what I have implemented I make this list: Features implemented: Bouncing capabilities (the ball bounces on walls, on bricks, and on the bar). Sounds when bouncing on bricks and the bar, and when the player wins or loses. Basic title menu (new game and exit only). Also in-game menu and win/lose menus. Only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?). Features not-implemented: Power-ups when breaking the bricks. Complex bricks (with more than one "hit point" and invincible). Better graphics (I am not really good at it). Programming polishing (use more intensively the design patterns). Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-) My question was related to the not-implemented features. I wondered what was the fastest (optimal) path to learn. 1) implement the not-implemented features in this project which is getting big, or 2) make a new game which probably will teach me those lessons and new ones. ANSWER I choose @ashes999 answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here having the same question, before taking a rush decision read all the discussion. Thank you all!

    Read the article

  • From simple physics with a ball, to a more complicated shape

    - by Maximus
    Hello fellow game devs and Stack Overflowers... I recently made the transition from OpenGL ES 1.1 to 2.0 (on Android via the NDK) and things are going well so far. I'm working on a dice-rolling application (gaming dice up to 20-sided, not just a regular 6-sided die) as a way to learn more about how physics is implemented in a gaming environment. I've explored existing engine options (such as Bullet) and I don't think I need something quite so sophisticated. I've found several tutorials that handle a lot of the general physics involved with initial trajectory, velocity, angle of contact and reflection angle, etc., and I'm confident that I'd be able to implement ball-like behavior without much trouble. My question arises when I attempt to make the interaction of the die shape with another surface more "realistic": for example, the die strikes the floor surface at such an angle that only one corner makes contact with the floor. In my mind, the center of gravity of the object would play a part in determining how the die bounces away, possibly even spinning it faster, etc., but I am not sure what the actual math involved is. Are there any recommended resources for getting into this level of detail? Initial searches haven't turned up much... Thanks to everyone in the community, -Jeremiah
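
    The part that distinguishes a die from a ball is the angular response. In the standard impulse-based rigid-body formulation (stated here without derivation; e is the restitution, m the mass, I the inertia tensor, r the vector from the centre of mass to the contact point, n the contact normal, and v_rel = v + ω × r the contact-point velocity):

      j = \frac{-(1 + e)\,(\mathbf{v}_{rel}\cdot\mathbf{n})}
               {\tfrac{1}{m} + \mathbf{n}\cdot\bigl[\bigl(I^{-1}(\mathbf{r}\times\mathbf{n})\bigr)\times\mathbf{r}\bigr]},
      \qquad
      \Delta\mathbf{v} = \frac{j}{m}\,\mathbf{n},
      \qquad
      \Delta\boldsymbol{\omega} = I^{-1}\,(\mathbf{r}\times j\,\mathbf{n})

    When only a corner touches, r is long and far from the normal direction, so a large share of the impulse becomes angular velocity, which is exactly the "spins away faster" behaviour described. Chris Hecker's rigid-body dynamics columns for Game Developer Magazine walk through this derivation step by step.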

    Read the article

  • Text extraction from video game dialogue files [on hold]

    - by wdwvt1
    As part of an academic project, I am trying to access the dialogue files (whether audio or text) from a variety of sports video games (Madden or NBA 2kX would be fantastic). I have searched extensively on other sites (scholarly text-mining publications, r/gaming, r/madden, modding sites, etc.) for guidance in how to extract dialogue files, but have been unsuccessful. Given that I don't have even the domain specific language to ask the right question (i.e. the resources I am seeking are out there, I just can't find them) I am asking the SE game dev community for help with the 2 following questions: Is there a canonical resource that I should study that would get me started with how to extract text or audio files from games? I am very fluent in python, which usually excels at mining information from sources, but I struggle with knowing where to start with a video game (as opposed to a more familiar database with a defined API). Is this even feasible, or are protections included with newer games (e.g. NBA 2k13) going to make extraction of these resources in a programmatic way impossible? Thank you for your help!

    Read the article

  • Stop map from scrolling but let player still move?

    - by ChocoMan
    I have a basic method of scrolling around on a map (moving the map instead of the player), but when the player gets within a certain proximity of the edge, how do you stop the map from scrolling while still allowing the player to move around until it leaves that proximity? I'm not looking for any code, just a suggestion so that I can implement it myself. I can picture it visually (creating four intersecting boundary boxes for the player to enter), but I'm not sure how to go about stopping and resuming the scrolling of the map.
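
    One common way to picture it: clamp the scroll offset to the map bounds and keep moving the player in world coordinates; near the edges the clamp freezes the map while the player's on-screen position starts to change. A small sketch with XNA-style types and placeholder names:

      // worldW/worldH: map size in pixels; viewW/viewH: viewport size in pixels.
      static Vector2 ScrollOffset(Vector2 playerPos, int viewW, int viewH, int worldW, int worldH)
      {
          float camX = playerPos.X - viewW / 2f;
          float camY = playerPos.Y - viewH / 2f;
          camX = MathHelper.Clamp(camX, 0f, worldW - viewW);
          camY = MathHelper.Clamp(camY, 0f, worldH - viewH);
          return new Vector2(camX, camY);
      }

      // Draw everything (map and player) at worldPosition - ScrollOffset(...):
      // mid-map the player stays centred and the map scrolls; at the edges the
      // offset stops changing and the player walks toward the border instead.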

    Read the article

  • Java 2D Rectangle Collision? [on hold]

    - by Andreas Elia
    I just want to know another (longer or shorter) way of getting 100% reliable collisions in a 2D platformer. The current collision system works from coordinates on the level and is not always reliable. Thank you in advance for any help/support. The current system draws a rectangle and checks whether any two points collide. From testing, the system can sometimes "glitch" and allow the player to pass into walls, etc. Player Class http://pastebin.com/2zE8vz8R Main Class http://pastebin.com/A6Utb3ti

    Read the article

  • X-axis collision detection issues with the platformer starter kit

    - by dbomb101
    I've come across a problem with the collision detection code in the platformer starter kit for xna.It will send up the impassible flag on the x axis despite being nowhere near a wall in either direction on the x axis, could someone could tell me why this happens ? Here is the collision method. /// <summary> /// Detects and resolves all collisions between the player and his neighboring /// tiles. When a collision is detected, the player is pushed away along one /// axis to prevent overlapping. There is some special logic for the Y axis to /// handle platforms which behave differently depending on direction of movement. /// </summary> private void HandleCollisions() { // Get the player's bounding rectangle and find neighboring tiles. Rectangle bounds = BoundingRectangle; int leftTile = (int)Math.Floor((float)bounds.Left / Tile.Width); int rightTile = (int)Math.Ceiling(((float)bounds.Right / Tile.Width)) - 1; int topTile = (int)Math.Floor((float)bounds.Top / Tile.Height); int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / Tile.Height)) - 1; // Reset flag to search for ground collision. isOnGround = false; // For each potentially colliding tile, for (int y = topTile; y <= bottomTile; ++y) { for (int x = leftTile; x <= rightTile; ++x) { // If this tile is collidable, TileCollision collision = Level.GetCollision(x, y); if (collision != TileCollision.Passable) { // Determine collision depth (with direction) and magnitude. Rectangle tileBounds = Level.GetBounds(x, y); Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds); if (depth != Vector2.Zero) { float absDepthX = Math.Abs(depth.X); float absDepthY = Math.Abs(depth.Y); // Resolve the collision along the shallow axis. if (absDepthY < absDepthX || collision == TileCollision.Platform) { // If we crossed the top of a tile, we are on the ground. if (previousBottom <= tileBounds.Top) isOnGround = true; // Ignore platforms, unless we are on the ground. if (collision == TileCollision.Impassable || IsOnGround) { // Resolve the collision along the Y axis. Position = new Vector2(Position.X, Position.Y + depth.Y); // Perform further collisions with the new bounds. bounds = BoundingRectangle; } } //This is the section which deals with collision on the x-axis else if (collision == TileCollision.Impassable) // Ignore platforms. { // Resolve the collision along the X axis. Position = new Vector2(Position.X + depth.X, Position.Y); // Perform further collisions with the new bounds. bounds = BoundingRectangle; } } } } } // Save the new bounds bottom. previousBottom = bounds.Bottom; }

    Read the article

  • How can I apply different actions to different parts of a 2D character?

    - by Praveen Sharath
    I am developing a 2D platform game in Java. The player always has a gun in his hand. He needs to walk and shoot with the gun (arrow keys to walk and the X key to shoot). The walk cycle takes 6 frames, and I am able to import the sprite sheet and animate the sequence when I press an arrow key. But I need to add the gun motion: the player holds the gun upwards, and when the X key is pressed he brings it level and shoots. How do I implement the combined walk + shoot action?
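
    A common way to handle this is to keep the body and the gun arm as two independently animated layers drawn at the same position, rather than baking every walk/shoot combination into one sheet. A rough sketch of the frame bookkeeping only (written in C# for consistency with the other sketches on this page; the structure translates directly to Java, and the shoot-sequence length and frame time are assumed values):

      class PlayerAnimation
      {
          const int WalkFrames = 6;      // 6-frame walk cycle from the sprite sheet
          const int ShootFrames = 4;     // assumed length of the lower-gun-and-fire sequence
          const float FrameTime = 0.1f;  // assumed seconds per frame

          public int BodyFrame, ArmFrame;
          float bodyTimer, armTimer;
          bool shooting;

          public void Update(float dt, bool walking, bool shootPressed)
          {
              // Body layer: advance the walk cycle only while moving.
              if (walking)
              {
                  bodyTimer += dt;
                  if (bodyTimer >= FrameTime) { bodyTimer -= FrameTime; BodyFrame = (BodyFrame + 1) % WalkFrames; }
              }

              // Arm layer: normally "gun raised" (frame 0); X starts the shoot sequence once.
              if (shootPressed && !shooting) { shooting = true; ArmFrame = 0; armTimer = 0f; }
              if (shooting)
              {
                  armTimer += dt;
                  if (armTimer >= FrameTime)
                  {
                      armTimer -= FrameTime;
                      ArmFrame++;
                      if (ArmFrame >= ShootFrames) { ArmFrame = 0; shooting = false; }  // back to raised
                  }
              }
          }
      }

    Drawing is then just "blit the body frame, then blit the arm frame at the same position (plus a small offset)", whatever the rendering API is.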

    Read the article

  • Help with this optimization

    - by Milo
    Here is what I do: I have bitmaps which I draw into another bitmap. The coordinates are from the center of the bitmap, thus on a 256 by 256 bitmap, an object at 0.0,0.0 would be drawn at 128,128 on the bitmap. I also found the furthest extent and made the bitmap size 2 times the extent. So if the furthest extent is 200,200 pixels, then the bitmap's size is 400,400. Unfortunately this is a bit inefficient. If a bitmap needs to be drawn at 500,500 and the other one at 300,300, then the target bitmap only needs to be 200,200 in size. I cannot seem to find a correct way to draw in the components correctly with a reduced size. I figure out the target bitmap size like this: float AvatarComposite::getFloatWidth(float& remainder) const { float widest = 0.0f; float widestNeg = 0.0f; for(size_t i = 0; i < m_components.size(); ++i) { if(m_components[i].getSprite() == NULL) { continue; } float w = m_components[i].getX() + ( ((m_components[i].getSprite()->getWidth() / 2.0f) * m_components[i].getScale()) / getWidthToFloat()); float wn = m_components[i].getX() - ( ((m_components[i].getSprite()->getWidth() / 2.0f) * m_components[i].getScale()) / getWidthToFloat()); if(w > widest) { widest = w; } if(wn > widest) { widest = wn; } if(w < widestNeg) { widestNeg = w; } if(wn < widestNeg) { widestNeg = wn; } } remainder = (2 * widest) - (widest - widestNeg); return widest - widestNeg; } And here is how I position and draw the bitmaps: int dw = m_components[i].getSprite()->getWidth() * m_components[i].getScale(); int dh = m_components[i].getSprite()->getHeight() * m_components[i].getScale(); int cx = (getWidth() + (m_remainderX * getWidthToFloat())) / 2; int cy = (getHeight() + (m_remainderY * getHeightToFloat())) / 2; cx -= m_remainderX * getWidthToFloat(); cy -= m_remainderY * getHeightToFloat(); int dx = cx + (m_components[i].getX() * getWidthToFloat()) - (dw / 2); int dy = cy + (m_components[i].getY() * getHeightToFloat()) - (dh / 2); g->drawScaledSprite(m_components[i].getSprite(),0.0f,0.0f, m_components[i].getSprite()->getWidth(),m_components[i].getSprite()->getHeight(),dx,dy, dw,dh,0); I basically store the difference between the original 2 * longest extent bitmap and the new optimized one, then I translate by that much which I would think would cause me to draw correctly but then some of the components look cut off. Any insight would help. Thanks

    Read the article

  • Good resources for 2.5D and rendering walls, floors, and sprites

    - by Aidan Mueller
    I'm curious as to how games like Prelude of the Chambered handle their graphics. If you play it for a bit you will see what I mean; it made me wonder how it works. (It is open source, so you can get the source on this page.) I did find a few tutorials, and although they helped with some things, I couldn't understand parts of them, and I don't like doing things I don't understand. Does anyone know of any good sites for this kind of 2.5D? Any help is appreciated. After all, I've been googling all day. Thanks :)
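
    Games in that style are usually built on Wolfenstein 3D-style raycasting: cast one ray per screen column, find the nearest wall, and draw a vertical strip whose height is inversely proportional to the perpendicular distance. A deliberately naive sketch of the core loop (placeholder names; real engines use a DDA grid walk instead of small fixed steps, and texture the strips):

      // map: 1 = wall, 0 = empty. Fills columnHeights with the wall-strip height per column.
      static void CastColumns(int[,] map, double px, double py, double angle,
                              int screenW, int screenH, double fov, int[] columnHeights)
      {
          for (int x = 0; x < screenW; x++)
          {
              double rayAngle = angle - fov / 2 + fov * x / screenW;
              double dx = Math.Cos(rayAngle), dy = Math.Sin(rayAngle);

              double dist = 0;
              while (dist < 32)   // march until a wall or the maximum view distance
              {
                  dist += 0.01;
                  int mx = (int)(px + dx * dist), my = (int)(py + dy * dist);
                  if (mx < 0 || my < 0 || mx >= map.GetLength(0) || my >= map.GetLength(1)
                      || map[mx, my] == 1)
                      break;
              }

              // Multiplying by cos(rayAngle - angle) removes the fisheye distortion.
              double perp = dist * Math.Cos(rayAngle - angle);
              columnHeights[x] = (int)(screenH / Math.Max(perp, 0.0001));
          }
      }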

    Read the article

  • 2D Tile Based Collision Detection

    - by MrPlosion1243
    There are a lot of topics about this and it seems each one addresses a different problem, this topic does the same. I was looking into tile collision detection and found this where David Gouveia explains a great way to get around the person's problem by separating the two axis. So I implemented the solution and it all worked perfectly from all the testes I through at it. Then I implemented more advanced platforming physics and the collision detection broke down. Unfortunately I have not been able to get it to work again which is where you guys come in :)! I will present the code first: public void Update(GameTime gameTime) { if(Input.GetKeyDown(Keys.A)) { velocity.X -= moveAcceleration; } else if(Input.GetKeyDown(Keys.D)) { velocity.X += moveAcceleration; } if(Input.GetKeyDown(Keys.Space)) { if((onGround && isPressable) || (!onGround && airTime <= maxAirTime && isPressable)) { onGround = false; airTime += (float)gameTime.ElapsedGameTime.TotalSeconds; velocity.Y = initialJumpVelocity * (1.0f - (float)Math.Pow(airTime / maxAirTime, Math.PI)); } } else if(Input.GetKeyReleased(Keys.Space)) { isPressable = false; } if(onGround) { velocity.X *= groundDrag; velocity.Y = 0.0f; } else { velocity.X *= airDrag; velocity.Y += gravityAcceleration; } velocity.Y = MathHelper.Clamp(velocity.Y, -maxFallSpeed, maxFallSpeed); velocity.X = MathHelper.Clamp(velocity.X, -maxMoveSpeed, maxMoveSpeed); position += velocity * (float)gameTime.ElapsedGameTime.TotalSeconds; position = new Vector2((float)Math.Round(position.X), (float)Math.Round(position.Y)); if(Math.Round(velocity.X) != 0.0f) { HandleCollisions2(Direction.Horizontal); } if(Math.Round(velocity.Y) != 0.0f) { HandleCollisions2(Direction.Vertical); } } private void HandleCollisions2(Direction direction) { int topTile = (int)Math.Floor((float)Bounds.Top / Tile.PixelTileSize); int bottomTile = (int)Math.Ceiling((float)Bounds.Bottom / Tile.PixelTileSize) - 1; int leftTile = (int)Math.Floor((float)Bounds.Left / Tile.PixelTileSize); int rightTile = (int)Math.Ceiling((float)Bounds.Right / Tile.PixelTileSize) - 1; for(int x = leftTile; x <= rightTile; x++) { for(int y = topTile; y <= bottomTile; y++) { Rectangle tileBounds = new Rectangle(x * Tile.PixelTileSize, y * Tile.PixelTileSize, Tile.PixelTileSize, Tile.PixelTileSize); Vector2 depth; if(Tile.IsSolid(x, y) && Intersects(tileBounds, direction, out depth)) { if(direction == Direction.Horizontal) { position.X += depth.X; } else { onGround = true; isPressable = true; airTime = 0.0f; position.Y += depth.Y; } } } } } From the code you can see when velocity.X is not equal to zero the HandleCollisions() Method is called along the horizontal axis and likewise for the vertical axis. When velocity.X is not equal to zero and velocity.Y is equal to zero it works fine. When velocity.Y is not equal to zero and velocity.X is equal to zero everything also works fine. However when both axis are not equal to zero that's when it doesn't work and I don't know why. I basically teleport to the left side of a tile when both axis are not equal to zero and there is a air block next to me. Hopefully someone can see the problem with this because I sure don't as far as I'm aware nothing has even changed from what I'm doing to what the linked post's solution is doing. Thanks.
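
    For comparison, the axis-separated scheme described in the linked answer is usually applied as "move one axis, resolve, then move the other axis", rather than integrating the full velocity first and resolving both axes afterwards. A rough sketch of that ordering, in the spirit of the code above (not a drop-in fix):

      // Integrating both axes and then resolving lets a diagonal step land inside a
      // corner, which is the classic cause of being snapped to a tile edge.
      Vector2 delta = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;

      position.X += delta.X;
      HandleCollisions2(Direction.Horizontal);   // pushes back along X only

      position.Y += delta.Y;
      HandleCollisions2(Direction.Vertical);     // pushes back along Y only, sets onGround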

    Read the article
