Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Android performance/issues with Corona SDK?

    - by B5Fan74
    I know this is a fairly broad question. We are looking to develop a mobile game and want to use a multi-platform engine/SDK. We like what we see with Corona, but in doing some reading we are seeing a lot of references to poor performance on the 'droid platforms. I am unsure how much of this is still relevant; the articles/posts/references/discussions vary in date from 18 months ago to earlier this year. Is there a reason we should not pursue Corona if Android support is important to us? The game is going to be 2D isometric view. Thanks!


  • Given a start and end point, how can I constrain the end point so the resulting line segment is horizontal, vertical, or 45 degrees?

    - by GloryFish
    I have a grid of letters. The player clicks on a letter and drags out a selection. Using Bresenham's Algorithm I can create a line of highlighted letters representing the player's selection. However, what I really want is to have the line segment be constrained to 45 degree angles (as is common for crossword-style games). So, given a start point and an end point, how can I find the line that passes through the start point and is closest to the end point? Bonus: To make things super sweet I'd like to get a list of points in the grid that the line passes through, and for super MEGA bonus points, I'd like to get them in order of selection (i.e. from start point to end point).
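
    One way to approach this (a minimal C# sketch, not tied to any engine; ConstrainedLine and the use of System.Drawing.Point are my own invention for illustration): snap the angle of the drag vector to the nearest multiple of 45 degrees, then walk the grid cell by cell along the snapped direction. As a side effect this produces the selected cells in order, from start point to end point.

        using System;
        using System.Collections.Generic;
        using System.Drawing; // Point

        static class Selection
        {
            // Returns the grid cells from start toward end, constrained to
            // horizontal, vertical, or 45-degree directions, in selection order.
            public static List<Point> ConstrainedLine(Point start, Point end)
            {
                int dx = end.X - start.X, dy = end.Y - start.Y;

                // Snap the drag angle to the nearest multiple of 45 degrees.
                double step = Math.PI / 4;
                double angle = Math.Round(Math.Atan2(dy, dx) / step) * step;

                // Unit grid step along the snapped direction; each component
                // rounds to -1, 0, or 1.
                int sx = (int)Math.Round(Math.Cos(angle));
                int sy = (int)Math.Round(Math.Sin(angle));

                // A diagonal step covers sqrt(2) in distance, so divide the
                // drag length by the step length to get the cell count.
                double stepLength = Math.Sqrt(sx * sx + sy * sy);
                int cells = (int)Math.Round(Math.Sqrt((double)dx * dx + dy * dy) / stepLength);

                var selected = new List<Point>();
                for (int i = 0; i <= cells; i++)
                    selected.Add(new Point(start.X + i * sx, start.Y + i * sy));
                return selected;
            }
        }

    Clamping the result to the grid bounds (or to the nearest in-bounds cell along the same direction) is left out for brevity.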


  • How can I make Maya export a mesh as double-sided?

    - by bobobobo
    I'm exporting from Maya 2009 to OBJ. The mesh I'm exporting has "Double Sided" checked in its Render Stats, but when the polygon is exported, only a single side is actually exported. What really needs to happen is for each polygon that is double sided, two polygons need to be exported, facing in opposite directions. I can do this manually, but is there a way to make the OBJ exporter do it for me?


  • Why do I have an error when adding states in Slick?

    - by SystemNetworks
    When I was going to create another state I got an error. This is my code:

        public static final int play2 = 3;

    and

        public Game(String gamename){
            this.addState(new mission(play2));
        }

    and

        public void initStatesList(GameContainer gc) throws SlickException{
            this.getState(play2).init(gc, this);
        }

    The error is on the addState call shown above. I don't know where the problem is, but if you want the whole code, here it is:

        package javagame;

        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;

        public class Game extends StateBasedGame{
            public static final String gamename = "NET FRONT";
            public static final int menu = 0;
            public static final int play = 1;
            public static final int train = 2;
            public static final int play2 = 3;

            public Game(String gamename){
                super(gamename);
                this.addState(new Menu(menu));
                this.addState(new Play(play));
                this.addState(new train(train));
                this.addState(new mission(play2));
            }

            public void initStatesList(GameContainer gc) throws SlickException{
                this.getState(menu).init(gc, this);
                this.getState(play).init(gc, this);
                this.getState(train).init(gc, this);
                this.enterState(menu);
                this.getState(play2).init(gc, this);
            }

            public static void main(String[] args) {
                try{
                    AppGameContainer app = new AppGameContainer(new Game(gamename));
                    app.setDisplayMode(1500, 1000, false);
                    app.start();
                }catch(SlickException e){
                    e.printStackTrace();
                }
            }
        }
        //SYSTEM NETWORKS(C) 2012 NET FRONT


  • PhysX Capsule Character Controller floating above ground

    - by Jannie
    I am using PhysX Version 3.0.2 in the simulation package I'm working on, and I've encountered some bizarre behavior with the capsule character controller. When I set the controller's height and radius to the appropriate values (r = 0.25, h = 1.86) it behaves correctly (moving along the ground, colliding with other objects, and so on), except that the capsule itself is floating above the ground. The actor will then bump his head when trying to get through a door, since the capsule is the correct height but also floating above the ground. This image should illustrate what I'm going on about: One can clearly see that the rest of the scene has its collision bodies wrapped correctly; it's just the capsule that's going wrong! The stop-gap I've implemented is creating a smaller capsule and giving it an offset, but I need to implement ray-picking for the controller next, so the capsule has to surround the character model properly. Here's my character creation code (with height = 1.86f and radius = 0.25f):

        NxController* D3DPhysXManager::CreateCharacterController(
            std::string l_stdsControllerName,
            float l_fHeight,
            float l_fRadius,
            D3DXVECTOR3 l_v3Position )
        {
            NxCapsuleControllerDesc l_CapsuleControllerDescription;
            l_CapsuleControllerDescription.height = l_fHeight;
            l_CapsuleControllerDescription.radius = l_fRadius;
            l_CapsuleControllerDescription.position.set( l_v3Position.x, l_v3Position.y, l_v3Position.z );
            l_CapsuleControllerDescription.callback = &this->m_ControllerHitReport;

            NxController* l_pController = this->m_pControllerManager->createController( this->m_pScene, l_CapsuleControllerDescription );
            this->m_pControllerMap.insert( l_ControllerValuePair( l_stdsControllerName, l_pController ) );
            return l_pController;
        }

    Any help at all would be appreciated, I just can't figure this one out! P.S. I've found a couple of (rather old) threads describing the same issue, but it seems they couldn't find a solution either. Here are the links:

        http://forum-archive.developer.nvidia.com/index.php?showtopic=6409
        http://forum-archive.developer.nvidia.com/index.php?showtopic=3272
        http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=23003


  • Alternatives to the GPL

    - by Bane
    I made a game, and I am currently making a game engine. I want them both to be completely free and open source. What license should I choose? I was reading a bit on the GPL, but that seems to be more suited for system code and libraries, AFAIK, as it doesn't permit the use of the code in proprietary software - which, in turn, implies that the code can be used in the first place. I can see that, obviously, game engines can be considered libraries, and can therefore be used, but what about game code? Is there an alternative to the GPL?


  • Character with several colliders and rigidbodies

    - by Lautaro
    I am doing a PvP fighting game. This is the GameObject hierarchy of the player character. Player contains:

    - Legs
    - Sword
    - Torso
    - Head

    I want to be able to:

    - register impacts of the sword on a specific body part
    - use AddForce on the whole player entity when a body part is struck
    - change the animation of the player that owns the sword that hit

    Questions:

    - Is it correct that the only rigidbody should be on the root Player GameObject?
    - Is it correct that the body parts should have colliders and be triggers?
    - Is it correct that the swords should have colliders but not be triggers?
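
    For what it's worth, here is a minimal sketch of that setup, assuming Unity (the AddForce/trigger vocabulary suggests it): one Rigidbody on the Player root, trigger colliders on the body parts, and a non-trigger collider on the sword. The "Sword" tag and "Hit" animator trigger are invented names, not anything from the question.

        using UnityEngine;

        public class BodyPart : MonoBehaviour
        {
            void OnTriggerEnter(Collider other)
            {
                if (!other.CompareTag("Sword")) return; // hypothetical tag

                // Knock back the whole character via the single root rigidbody.
                Rigidbody rb = GetComponentInParent<Rigidbody>();
                Vector3 away = (transform.position - other.transform.position).normalized;
                rb.AddForce(away * 250f, ForceMode.Impulse);

                // Tell the attacker (the sword's owner) to play its hit animation.
                Animator attacker = other.GetComponentInParent<Animator>();
                if (attacker != null) attacker.SetTrigger("Hit"); // hypothetical trigger
            }
        }

    Note that trigger events require a Rigidbody on at least one side of the pair; the sword collider belongs to the attacking player's root rigidbody, which satisfies that.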


  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification, but I am not sure about it, so I am deleting the comment. Buildings, in contrast, should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes to do progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx


  • Better way to go up/down slope based on yaw?

    - by CyanPrime
    Alright, so I've got a bit of movement code, and I'm thinking I'm going to need to manually input when to go up/down a slope. All I've got to work with is the slope's normal and vector, my current and previous position, and my yaw. Is there a better way to determine whether I go up or down the slope based on my yaw?

        Vector3f move = new Vector3f(0, 0, 0);
        move.x = (float) -Math.toDegrees(Math.cos(Math.toRadians(yaw)));
        move.z = (float) -Math.toDegrees(Math.sin(Math.toRadians(yaw)));
        move.normalise();

        if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
            if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
                move.y += slopeVec.y;
            }
        }
        if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
            if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
                move.y -= slopeVec.y;
            }
        }

        move.scale(movementSpeed * delta);
        Vector3f.add(pos, move, pos);
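
    A common alternative that avoids the sign checks entirely (a hedged sketch of the math, written in C# with an XNA-style Vector3 rather than LWJGL's Vector3f): project the horizontal move direction onto the slope plane using the normal. Solving the plane equation for the Y component gives the correct up/down amount for any yaw, with no case analysis.

        // Given a unit surface normal and a horizontal move direction
        // (move.Y == 0), solve normal . result == 0 for Y so the adjusted
        // direction lies in the slope plane. Assumes normal.Y != 0
        // (i.e. the surface is not a vertical wall).
        Vector3 ProjectOntoSlope(Vector3 move, Vector3 normal)
        {
            float y = -(normal.X * move.X + normal.Z * move.Z) / normal.Y;
            return Vector3.Normalize(new Vector3(move.X, y, move.Z));
        }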


  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the differently named components of the Pixel and Vertex shaders were swapping the data being input:

        void QuadVertex(
            inout float4 position : SV_Position,
            inout float4 color : COLOR0,
            inout float2 tex : TEXCOORD0 )
        {
            // ViewProjection is a 4x4 matrix, just included here
            // to show the simple passthrough of the data
            position = mul(position, ViewProjection);
        }

    And a Pixel Shader:

        float4 QuadPixel(
            float4 color : COLOR0,
            float2 tex : TEXCOORD0 ) : SV_Target0
        {
            // color is filled with position data and tex is
            // filled with color values from the Vertex Shader
            return color;
        }

    The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data:

        data[0].Position.x = 0.0f * 210;
        data[0].Position.y = 1.0f * 160;
        data[0].Position.z = 0.0f;
        data[1].Position.x = 0.0f * 210;
        data[1].Position.y = 0.0f * 160;
        data[1].Position.z = 0.0f;
        data[2].Position.x = 1.0f * 210;
        data[2].Position.y = 1.0f * 160;
        data[2].Position.z = 0.0f;

        data[0].Colour = Colors::Red;
        data[1].Colour = Colors::Red;
        data[2].Colour = Colors::Red;

        data[0].Texture = Vector2::Zero;
        data[1].Texture = Vector2::Zero;
        data[2].Texture = Vector2::Zero;

    When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format, and be laid out in the exact order of the output from the Vertex Shader, regardless of the semantics:

        float4 QuadPixel(
            float4 pos : SV_Position,
            float4 color : COLOR0,
            float2 tex : TEXCOORD0 ) : SV_Target0
        {
            return color;
        }

    After finding this out, my question is: Why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid Shader Signature (in this case, Position, Color, and Texture)? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on a DX9 (and maybe a little DX10) implementation, and maybe this kind of flexibility no longer exists in DX11?


  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do Ray Picking that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code:

        public static Ray getPick(int mouseX, int mouseY){
            glPushMatrix();

            // Setting up the Mouse Clip
            Vector4f mouseClip = new Vector4f((float)mouseX * 2 / 960f - 1, 1 - (float)mouseY * 2 / 640f, 0, 1);

            // Loading Matrices
            FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16);
            FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16);
            glGetFloat(GL_MODELVIEW_MATRIX, modMatrix);
            glGetFloat(GL_PROJECTION_MATRIX, projMatrix);

            // Assigning Matrices
            Matrix4f proj = new Matrix4f();
            Matrix4f model = new Matrix4f();
            model.load(modMatrix);
            proj.load(projMatrix);

            // Multiplying the Projection Matrix by the Model View Matrix
            Matrix4f tempView = new Matrix4f();
            Matrix4f.mul(proj, model, tempView);
            tempView.invert();

            // Getting the Camera Position in World Space. The 4th Column of the Model View Matrix.
            model.invert();
            Point cameraPos = new Point(model.m30, model.m31, model.m32);

            // Theoretically getting the vector the Picking Ray goes
            Vector4f rayVector = new Vector4f();
            Matrix4f.transform(tempView, mouseClip, rayVector);
            rayVector.translate((float)-cameraPos.getX(), (float)-cameraPos.getY(), (float)-cameraPos.getZ(), 0f);
            rayVector.normalise();

            glPopMatrix();

            // This basically spits out a value that changes as the Camera moves.
            // When the Mouse moves, the values change around 0.001 points from screen edge to edge.
            System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z);

            //return new Ray(cameraPos, rayVector);
            return null;
        }

    I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ; it's the vector the ray's going in that I can't seem to get right. Thanks.
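
    For comparison, here is the same idea in XNA/C#, where the framework's Viewport.Unproject does all of the matrix work (the legacy LWJGL GLU utility class offers gluUnProject for the equivalent job): unproject the click at the near and far planes and normalize the difference. A sketch, assuming projection and view hold the current camera matrices.

        // Build a picking ray by unprojecting the click at depth 0 (near
        // plane) and depth 1 (far plane), then normalizing the difference.
        Ray GetPickRay(int mouseX, int mouseY, Viewport viewport,
                       Matrix projection, Matrix view)
        {
            Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0f),
                                              projection, view, Matrix.Identity);
            Vector3 far = viewport.Unproject(new Vector3(mouseX, mouseY, 1f),
                                             projection, view, Matrix.Identity);
            return new Ray(near, Vector3.Normalize(far - near));
        }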


  • Points around a circumference C#

    - by Lautaro
    I'm trying to get a list of vectors that go around a circle, but I keep getting the circle to go around several times. I want one circle, with the dots placed along its circumference. I want the first dot to start at 0 and the last dot to end just before 360. Also, I need to be able to calculate the spacing from the amount of points.

        List<Vector2> pointsInPath = new List<Vector2>();
        private int ammountOfPoints = 5;
        private int blobbSize = 200;
        private Vector2 topLeft = new Vector2(100, 100);
        private Vector2 blobbCenter;
        private int endAngle = 50;
        private int angleIncrementation;

        public Blobb()
        {
            blobbCenter = new Vector2(blobbSize / 2, blobbSize / 2) + topLeft;
            angleIncrementation = endAngle / ammountOfPoints;
            for (int i = 0; i < ammountOfPoints; i++)
            {
                pointsInPath.Add(getPointByAngle(i * angleIncrementation, 100, blobbCenter));
                // pointsInPath.Add(getPointByAngle(i * angleIncrementation, blobbSize / 2, blobbCenter));
            }
        }

        private Vector2 getPointByAngle(float angle, float distance, Vector2 centre)
        {
            return new Vector2((float)(distance * Math.Cos(angle)), (float)(distance * Math.Sin(angle))) + centre;
        }
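
    Two things stand out (a hedged reading of the code above): Math.Cos and Math.Sin take radians, not degrees, so feeding them growing degree values wraps around the circle many times, and spacing the dots evenly means dividing the full circle by the point count rather than using an integer angleIncrementation. A minimal corrected loop, reusing the question's own fields:

        // Space the points evenly: the last one lands just short of a full
        // turn because i never reaches ammountOfPoints.
        for (int i = 0; i < ammountOfPoints; i++)
        {
            float angle = MathHelper.TwoPi * i / ammountOfPoints; // radians
            pointsInPath.Add(blobbCenter + new Vector2(
                (float)Math.Cos(angle),
                (float)Math.Sin(angle)) * (blobbSize / 2f));
        }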


  • How can I replicate the look and limitations of the Super NES?

    - by Mikalichov
    I am looking to produce graphics with the same limitations / look as in the Super NES era. I am specifically looking for graphics similar to Chrono Trigger / FF6. It would be a lot easier to do if I had an idea of the resolution / dpi I am supposed to use. I found that the technical specs for the SNES are:

    - Progressive: 256 × 224, 512 × 224, 256 × 239, 512 × 239
    - Interlaced: 512 × 448, 512 × 478

    But even using these resolutions, it is pointless if I set it at 72dpi, as I will still have possibly very detailed graphics (that is the main thing: I don't want detailed graphics, I want to go pixelated). I figured it might be related to the sprite size limit, i.e.: sprites can be 8 × 8, 16 × 16, 32 × 32, or 64 × 64 pixels, each using one of eight 16-color palettes and tiles from one of two blocks of 256 in VRAM. Up to 32 sprites and 34 8 × 8 sprite tiles may appear on any one line. This would work for sprites (characters, objects), but what about maps? Are they built entirely from 8 × 8 tiles? And then, at what resolution is the end result displayed? It might seem like I am giving the question and answers at the same time, but all of these are suppositions I am making, so could someone confirm or correct them?
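
    On the display side, dpi is irrelevant on a television; what matters is rendering at the console's native resolution and scaling up without smoothing. A hedged sketch, assuming an XNA-style toolkit purely for illustration: draw the scene to a 256×224 render target, then blow it up with point (nearest-neighbor) sampling so the pixels stay chunky.

        // Draw at native SNES resolution, then scale to the window with
        // point sampling to keep hard pixel edges.
        RenderTarget2D lowRes = new RenderTarget2D(GraphicsDevice, 256, 224);

        GraphicsDevice.SetRenderTarget(lowRes);
        // ... draw the scene here, in 256x224 coordinates ...
        GraphicsDevice.SetRenderTarget(null);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                          SamplerState.PointClamp, null, null);
        spriteBatch.Draw(lowRes, new Rectangle(0, 0, 1024, 896), Color.White); // 4x scale
        spriteBatch.End();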


  • Selection of a mesh with arbitrary region

    - by Tigran
    Consider this example: I have a mesh (or meshes) on the OpenGL screen and would like to select a part of it (say, for deletion). There is a clear way to do the selection via ray tracing, or via the selection mechanism provided by OpenGL itself. But for my users, considering that meshes can get weird surfaces, I need to implement selection via an arbitrary closed region, so all triangles that appear inside that region have to be selected. To be more clear, here is a screenshot: I want all triangles inside the black polygon to be selected, identified, whatever, in some way. How can I achieve that?
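
    One practical route (an engine-agnostic C# sketch; Vector2 could be XNA's or System.Numerics'): project each triangle's vertices to screen space with your modelview-projection transform, then test the projected points against the lasso polygon with the classic even-odd (ray casting) rule. A triangle whose vertices all pass the test is inside the region; testing any-vertex instead gives a looser selection.

        using System.Collections.Generic;

        // Even-odd rule: cast a horizontal ray from p and count how many
        // polygon edges it crosses; an odd count means p is inside.
        static bool PointInPolygon(IList<Vector2> polygon, Vector2 p)
        {
            bool inside = false;
            for (int i = 0, j = polygon.Count - 1; i < polygon.Count; j = i++)
            {
                if ((polygon[i].Y > p.Y) != (polygon[j].Y > p.Y) &&
                    p.X < (polygon[j].X - polygon[i].X) * (p.Y - polygon[i].Y) /
                          (polygon[j].Y - polygon[i].Y) + polygon[i].X)
                {
                    inside = !inside;
                }
            }
            return inside;
        }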


  • How to highlight non-rectangular hotspots?

    - by HuseyinUslu
    So my question is highly related to Creating non-rectangular hotspots and detecting clicks. Yet again, I've got irregular hot-spots (think the game Risk). We can detect clicks on these hot-spots easily using the color-key mapping discussed in the above question, which I don't have any problems implementing (it's also covered here in detail). The problem is highlighting these irregular hot-spots. Let me explain the question a bit more: the color-key mapping guide above uses a world map; the author color-maps the imaginary countries, so we can detect which country the pointer is over. In the same article the author mentions outlining countries on mouse-over, though to get the effect he creates unique border assets for each country. For the game I'm working on I'm using the same color-key mapping idea to detect hot-spots, but I don't like that way of highlighting them. Coloring all the hot-spots is already a lot of work, as I have 25+ hot-spots for each map; furthermore, needing 25 unique border/highlight assets per map doesn't sound right. Does anyone have a better idea/suggestion for highlighting these hot-spots?
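
    One asset-free option (a hedged sketch, assuming an XNA-style Texture2D API since the question doesn't name a framework; BuildOverlay is an invented helper): derive the highlight from the color-key map itself. For each country's key color, copy the matching pixels into a translucent overlay texture, cache it, and draw it on hover; the highlight then matches the hot-spot shape exactly with no per-country border art.

        // Build a translucent overlay for one country from the color-key
        // map. Cache the result per key color so this runs once per country.
        static Texture2D BuildOverlay(GraphicsDevice device, Texture2D mask, Color key)
        {
            Color[] src = new Color[mask.Width * mask.Height];
            mask.GetData(src);

            Color[] dst = new Color[src.Length];
            for (int i = 0; i < src.Length; i++)
                dst[i] = (src[i] == key) ? Color.White * 0.4f : Color.Transparent;

            Texture2D overlay = new Texture2D(device, mask.Width, mask.Height);
            overlay.SetData(dst);
            return overlay;
        }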


  • 3D Collision help

    - by Taylor
    I'm having difficulties with my project. I'm quite new to XNA. Anyway, I'm trying to make a 3D game and I'm already stuck on one basic thing. I have terrain made from a heightmap, and an avatar model. I want to set up some collisions for the game so the player won't go through the ground. But I just don't know how to detect collisions for so complex an object. I could just make a simple box collision for my avatar, but what about the ground? I already implemented the JigLibX physics engine in my project and I know that I can make a collision map with the heightmap, but I can't find any tutorials or help with this. So how can I set up proper collision for complex objects? How can I detect heightmap collisions in JigLibX? Just some links to tutorials would be enough. Thanks in advance!


  • Blender 2.64, what are the actual hot-keys for certain actions

    - by Shivan Dragon
    I know this sounds mega lame, but I've looked for hotkeys for certain actions, first in the application's User Settings (where I didn't find them), then in the official documentation (where I did find some of them, but they're not the right ones): http://wiki.blender.org/index.php/Doc:2.4/Manual/3D_interaction/Transform_Control/Manipulators (Ctrl - Alt - S is recommended for Scale, but instead it opens the Save As... window; I think these changed in the latest versions, but they forgot to update the docs). So then, what are the hotkeys for:

    - selecting the translate manipulator
    - selecting the rotate manipulator
    - selecting the scale manipulator

    In Edit mode:

    - select vertex (editing)
    - select edges (editing)
    - select faces (editing)

    Thanks.


  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
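
    A common answer (a minimal C# sketch; the resource names and ResourceCatalog are invented for illustration): treat the pairing as data, not behavior, and keep it in a single static catalog, which could just as well be loaded from a config file. TradingPost then filters its offerings against the owning Base's tech level, so the requirement lives in exactly one place.

        using System.Collections.Generic;
        using System.Linq;

        static class ResourceCatalog
        {
            // The single source of truth: resource -> minimum tech level.
            static readonly Dictionary<string, int> MinTechLevel =
                new Dictionary<string, int>
                {
                    { "Water", 0 },   // invented example entries
                    { "Steel", 2 },
                    { "Plasma", 5 },
                };

            public static IEnumerable<string> OfferedAt(int baseTechLevel)
            {
                return MinTechLevel.Where(kv => kv.Value <= baseTechLevel)
                                   .Select(kv => kv.Key);
            }
        }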


  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong :/ What is wrong in my rendering? How I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D,
            0,
            gl::DEPTH32F_STENCIL8,
            width,
            height,
            0,
            gl::DEPTH_STENCIL,
            gl::FLOAT_32_UNSIGNED_INT_24_8_REV,
            nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER,
            gl::DEPTH_STENCIL_ATTACHMENT,
            gl::TEXTURE_2D,
            this->textures[0],
            0
        );

    Linear depth readings in my shader. Vertex:

        #version 150

        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;

        out vec2 uv_f;

        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150

        uniform sampler2D depth_texture;

        in vec2 uv_f;
        out vec4 Screen;

        void main(){
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z)/(f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    So, gamedev.stackexchange, what's wrong with my rendering/GLSL?


  • Game Editor - When screen is clicked, how do you identify which object that is clicked?

    - by Deukalion
    I'm trying to create a Game Editor, currently just placing different types of Shapes and such. I'm doing this in Windows Forms while drawing the 3D with XNA. So, if I have a couple of Shapes on the screen and I click the screen, I want to be able to identify which of these objects was clicked. What is the best method for this? With two objects one behind the other, it should recognize the one in front and not the one behind it; and if I rotate the camera and click on the one now in front, it should identify that one and not the first one. Are there any smart ways to go about this?
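
    Since this is XNA, one straightforward sketch: cast a pick ray from the click (for instance via Viewport.Unproject at the near and far planes) and keep the shape whose bounding volume returns the smallest intersection distance, so whatever is in front from the current camera always wins. EditorShape, GetPickRay, and Bounds are hypothetical names standing in for your own types.

        // Test every shape against the pick ray and keep the nearest hit.
        Ray ray = GetPickRay(mouseX, mouseY); // unproject near/far points
        EditorShape picked = null;
        float nearest = float.MaxValue;

        foreach (EditorShape shape in shapes)
        {
            float? hit = ray.Intersects(shape.Bounds); // BoundingBox or BoundingSphere
            if (hit.HasValue && hit.Value < nearest)
            {
                nearest = hit.Value;
                picked = shape;
            }
        }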


  • How can I get into the educational market?

    - by mmyers
    I believe that my current game project is very well-suited for educational gaming; so well-suited, in fact, that I know of several different schools (one community college and at least one or two high schools) that have used versions of it at some time or another. And that's without any such marketing on my part. I'd like to expand on this part of the potential user base. But I have absolutely no experience in dealing with school administrations. How can I break into this market enough to be noticed? And on a side note, could marketing the game as educational kill the gamers market?


  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the moment. I have the following class (just a part of it):

        public class GUIWindow
        {
            #region Static Fields
            // The standard image for windows.
            public static IngameImage StandardBackgroundImage;
            #endregion
        }

    IngameImage is just one of my own classes; it contains a Texture2D (and some other things). In another class I load a list of GUIWindows by deserializing an XML file:

        public static GUI Initializazion(string pXMLPath, ContentManager pConMan)
        {
            GUI myGUI = pConMan.Load<GUI>(pXMLPath);
            GUIWindow.StandardBackgroundImage = new IngameImage(
                pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath),
                Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false);
            System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath);
            System.Console.WriteLine("Windows loaded");
            return myGUI;
        }

    Here, this line prints "true":

        System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    To load the GUIWindows I need an "empty" constructor, which looks like this:

        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            //Image = new IngameImage(StandardBackgroundImage);
            //System.Console.WriteLine(
            //Image.IsActive = false;
            SelectedButton = null;
            IsActive = false;
        }

    As you can see, I commented lines out in the constructor, because otherwise this would crash. Here the line

        System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    doesn't print anything; it just crashes with the following error message: "Building content threw NullReferenceException: Object reference not set to an object instance." Why does this happen? Before the program wants to load the list, it prints "true". But in the constructor, i.e. during the loading of the list, it prints "false". Can anybody please tell me why this happens and how to fix it?


  • What is the easiest way to make a hitbox that rotates with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a sprite that doesn't rotate, it's very easy to get the four corners of the sprite to make a hitbox, but when you apply a rotation the points get moved, and I assume there is some kind of math I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitboxes. I have seen some per-pixel collision examples, but I can foresee they would be hard to draw a box/'convex hull' around. I have also seen physics engines like Farseer, but I'm not sure there is a quick tutorial to do what I want. What do you guys think is the best approach, because I am looking to complete this work by the end of the week?
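
    The standard trick (a hedged XNA sketch, along the lines of the transformed-rectangle approach used in the per-pixel collision samples; RotatedCorners is an invented helper): build the sprite's world transform from its origin, rotation, and position, then push the four untransformed corners through it. The results are the oriented hitbox corners you can draw your box through.

        // Transform the four corners of the unrotated rectangle by the
        // sprite's origin/rotation/position to get the oriented hitbox.
        static Vector2[] RotatedCorners(Rectangle bounds, Vector2 origin,
                                        float rotation, Vector2 position)
        {
            Matrix transform =
                Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
                Matrix.CreateRotationZ(rotation) *
                Matrix.CreateTranslation(position.X, position.Y, 0f);

            return new[]
            {
                Vector2.Transform(new Vector2(0, 0), transform),
                Vector2.Transform(new Vector2(bounds.Width, 0), transform),
                Vector2.Transform(new Vector2(bounds.Width, bounds.Height), transform),
                Vector2.Transform(new Vector2(0, bounds.Height), transform),
            };
        }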


  • Calculating the rotational force of a 2D sprite

    - by Jon
    I am wondering if someone has an elegant way of calculating the following scenario. I have an object made up of (n) squares - random shapes, really, but we will pretend they are all rectangles. We are dealing with no gravity, so consider the object in space, from a top-down perspective. I am applying a force to the object at a specific square (as illustrated below). How do I calculate the rotational angle, based on the force being applied, at the location where it is applied? If applied at the center square, it would go straight. How should it behave the further I move from the center? How do I calculate the rotational velocity?
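
    The physics here is torque (a sketch of the math in C#; the variable names are generic, not from any engine): the applied force contributes linear acceleration as usual, plus a torque equal to the 2D scalar cross product of the offset from the center of mass with the force. The cross product grows with distance from the center, which is exactly why a center hit goes straight and an edge hit spins.

        // 2D rigid-body response to a force applied at a point.
        Vector2 r = applicationPoint - centerOfMass;   // lever arm
        float torque = r.X * force.Y - r.Y * force.X;  // scalar 2D cross product

        Vector2 linearAcceleration = force / mass;
        float angularAcceleration = torque / momentOfInertia;
        // For a w x h rectangle: momentOfInertia = mass * (w*w + h*h) / 12f

        velocity += linearAcceleration * dt;
        angularVelocity += angularAcceleration * dt;   // radians per second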


  • Sensor based vs. AABB based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
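
    In lieu of a tutorial link, here is the gist as a hedged sketch (C#; the axis convention and the GroundHeightAt query are assumptions, not anyone's API): a sensor is just a sampled point attached to the entity. For ground contact you probe the terrain height under one or more foot sensors each frame and snap the entity to the surface, which is why sensors follow slopes naturally, while a pure AABB test only reports overlap against tile boxes and tends to make the character "step" up slopes.

        // Y grows downward here; GroundHeightAt is a hypothetical query
        // returning the terrain surface height under a given x coordinate.
        float leftFoot = GroundHeightAt(position.X - halfWidth);
        float rightFoot = GroundHeightAt(position.X + halfWidth);
        float ground = Math.Min(leftFoot, rightFoot); // the higher surface wins

        if (position.Y + height >= ground)
        {
            position.Y = ground - height; // snap the feet onto the surface
            velocityY = 0f;               // grounded: cancel falling
        }

    The usual trade-off cited for sensors is that they only know about the points you sample, whereas an AABB catches any overlap; many platformers combine the two, using boxes for walls and ceilings and foot sensors for the ground.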

