Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 553/1039 | < Previous Page | 549 550 551 552 553 554 555 556 557 558 559 560  | Next Page >

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:
    - SceneHandler: contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.
    - InputHandler: polls the input once per tick, and sends a simple onKeyPressed message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.
    - AgentHandler: calls default actions on any Agents (AI) once per tick, and checks a stack for any new events that are registered, dispatching them to specific Agents as needed.
    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many small event handlers? Going the way I am, I imagine I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers? I realize that in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene, so I feel my isolation of those systems is justified. However, I'm asking this question specifically about game-logic-type events.
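    For reference, a minimal sketch (in Java, with hypothetical names -- not the asker's actual code) of the listener-style dispatch described above: a handler owns a list of listeners and notifies them once per tick.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical listener interface: anything that cares about scene/camera changes.
        interface SceneListener {
            void onCameraUpdate(float camX, float camY);
        }

        // Hypothetical handler: owns the listener list and dispatches once per tick.
        class SceneHandler {
            private final List<SceneListener> listeners = new ArrayList<>();

            void addListener(SceneListener l) {
                listeners.add(l);
            }

            // Called once per tick; every registered listener gets the camera update.
            void update(float camX, float camY) {
                for (SceneListener l : listeners) {
                    l.onCameraUpdate(camX, camY);
                }
            }
        }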

    Read the article

  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OSX and Windows platforms, using OGRE3D, but I've run into some problems along the way. I'm using CMake to create my platform-specific project files (VS solution & Xcode project). Some problems I found are:
    - OGRE3D source is distributed in 2 flavors, Windows sources and UNIX/OSX sources.
    - In OSX, compiling dependencies (freetype, FreeImage and especially OIS) is such a pain.
    - I don't know how to handle precompiled dependencies (they exist for both Win & Mac).
    These may sound like noob questions, but I would appreciate some tips on this. Resources, forum posts, anything. Does any "cross-platform base project for OGRE3D" exist on the net? It would be really helpful if someone who has already managed to do this could bring some light. Btw, I'm not basing the project on OGRE3D, it's just that it is the biggest library I'm probably using, so I depend a lot on it. Thanks in advance!

    Read the article

  • How do I properly implement zooming in my game?

    - by Rudy_TM
    I'm trying to implement a zoom feature but I have a problem. I am zooming in and out a camera with a pinch gesture, I update the camera each time in the render, but my sprites keep their original position and don't change with the zoom in or zoom out. The Libraries are from libgdx. What am I missing? private void zoomIn() { ((OrthographicCamera)this.stage.getCamera()).zoom += .01; } public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) { // TODO Auto-generated method stub zoomIn(); return false; } public void render(float arg0) { this.gl.glClear(GL10.GL_DEPTH_BUFFER_BIT | GL10.GL_COLOR_BUFFER_BIT); ((OrthographicCamera)this.stage.getCamera()).update(); this.stage.draw(); } public boolean touchDown(int arg0, int arg1, int arg2) { this.stage.toStageCoordinates(arg0, arg1, point); Actor actor = this.stage.hit(point.x, point.y); if(actor instanceof Group) { ((LevelSelect)((Group) actor).getActors().get(0)).touched(); } return true; } Zoom In Zoom Out

    Read the article

  • Keeping the meshes' "thickness" the same when scaling an object

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find a way to accomplish what at first looks like a very easy task. I have an object currently made out of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back); this is just an example, later on there will be a whole range of different set-ups. Now, when the user chooses to scale the whole object this is what should happen:
    - X scale: the top and bottom cuboids should get scaled by the scale factor, the sides should get moved so they are positioned just like they were before (in this case at both ends of the top and bottom cuboids), and the back should get scaled so it fits like before (if I simply scale it by the scale factor it will leave gaps on each side).
    - Y scale: the sides should get scaled by the scale factor, the top and bottom cuboids should get moved, and the back should also get scaled.
    - Z scale: the sides, top and bottom cuboids should get scaled, and the back should get moved.
    EDIT: I've decided to explain the situation once more, this time in more detail (hopefully). I've also made some pictures of how the scaling should look, where the problem is, and the wrong way of scaling. In this example I will be using a thick-walled box with one face missing, where each wall is made of a cuboid (but later on there will be different shapes of objects, where one of the faces might be roundish, or a triangle, or even at some angle); the scaling will be 2x on the X axis.
    1. This is how the default object without any scaling applied looks: http://img856.imageshack.us/img856/4293/defaulttz.png
    2. If I scale the whole object (all of the meshes) by some scale factor, the problem becomes that the "thickness" of the object walls also changes (which I do not want): http://img822.imageshack.us/img822/9073/wrongwaytoscale.png
    3. This is how the correct scaling should look. The appropriate faces get scaled, in this case where the scale is on the X axis (top, bottom, back): http://imageshack.us/photo/my-images/163/rightwayxscale1.png/
    4. But the scale factor might not be the same for all objects all of the time. In this case the back has to get scaled a bit more or it leaves gaps: http://imageshack.us/photo/my-images/9/problemwhenscaling.png/
    5. If everything goes well, this is how the final object should look: http://imageshack.us/photo/my-images/856/rightwayxscale2.png/
    So, as you may have noticed, there are quite a few things to look out for when scaling. I am asking if any of you have any idea how to accomplish this scaling. I have tried a whole bunch of things, from scaling all of the meshes by the same scale factor to subtracting and adding sizes to get the right size, but nothing I tried worked; if one mesh got scaled correctly then the others didn't. Download the example object. English is not my first language, so I am really sorry if it's hard to understand what I am saying.
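    As an illustration only (hypothetical Java, assuming the object is centered at the origin and each wall stores its own center and size), a sketch of the per-axis rule described above for X: walls that span X are stretched, while the end caps keep their thickness and are merely pushed outward so they stay flush.

        // Hypothetical wall representation: axis-aligned cuboid with a center and a size.
        class Wall {
            float centerX, sizeX; // only the X components matter for this sketch

            Wall(float centerX, float sizeX) {
                this.centerX = centerX;
                this.sizeX = sizeX;
            }
        }

        class ScaleExample {
            // Scale the object along X by 'factor' while keeping the side walls' thickness.
            static void scaleX(Wall top, Wall bottom, Wall leftSide, Wall rightSide, float factor) {
                // Walls that span the X axis are stretched.
                top.sizeX *= factor;
                bottom.sizeX *= factor;

                // End caps keep their own sizeX (thickness); they are only pushed outward
                // so they still sit flush with the ends of the stretched walls.
                float halfSpan = top.sizeX / 2f;
                leftSide.centerX  = -(halfSpan - leftSide.sizeX / 2f);
                rightSide.centerX =  (halfSpan - rightSide.sizeX / 2f);
            }
        }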

    Read the article

  • Good tutorial resources for creating 2D character sprites?

    - by Rexroth
    I am planning on learning how to create 2D character sprites by myself and making a game using RPG Maker VX Ace. I've been searching for a tutorial on making an approx. 32x64 human character sprite but haven't been able to find one close enough. Most tutorials I've found are either really general or create sprites that are way too complicated. FYI, I wish to learn how to make this type of character by myself: not too complicated, fit for a small fan-made game made with RPG Maker. Ideally, I wish to learn everything from the character sketch stage up to realizing the character using Photoshop or other tools (I have some foundation in visual art; it's just that I am not sure how to sketch a character this small). If you know of such a tutorial resource please let me know -- thank you very much!

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard will result in a performance drain. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both objects first to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth-test and depth-write; I'm drawing all objects sorted by their depth and that's all, no need for the GPU to do anything fancy. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • How do you ensure consistent experience across multiple graphics cards (or even driver versions)?

    - by Grigory Javadyan
    So I was writing a simple 2D game with OpenGL and SDL and had this problem where there was awful tearing when running in windowed mode (even though I explicitly asked SDL_SetVideoMode to use double buffering). I didn't worry about it all too much because most of the time the game grabs the entire screen; windowed mode is just for debugging. Anyway, yesterday I updated my nVidia drivers and the tearing disappeared; the game runs smoothly and looks nice in windowed mode too. I can see how the problem may be in the graphics driver, but this leads to a question. Obviously, professional game developers have to deal with a lot of different hardware/software configurations. What are the techniques they use to make sure the game looks roughly the same on different graphics cards, or even on the same model of graphics card but with different driver versions?

    Read the article

  • Parsing glGetShaderInfoLog() to get error info. Is this reliable, or is there a better way?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string, and also show the offending line. It looks easy enough to just parse the result of glGetShaderInfoLog(): look for "ERROR:", then read the next number up to the ":", then the next, and then the error description up to the next newline. But the OpenGL docs say "Application developers should not expect different OpenGL implementations to produce identical information logs," which makes me worry that my code may behave incorrectly on different systems. I don't need the logs to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line numbers separated out, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this? Thanks!
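    For illustration, a minimal Java sketch of the parsing approach described above. It only handles log lines shaped like "ERROR: 0:123: message", which, as the quoted documentation warns, is not guaranteed to be the format every driver produces.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        class ShaderLogParser {
            // Matches lines shaped like "ERROR: 0:123: some message".
            private static final Pattern ERROR_LINE =
                    Pattern.compile("ERROR:\\s*(\\d+):(\\d+):\\s*(.*)");

            static List<String> parse(String infoLog) {
                List<String> errors = new ArrayList<>();
                for (String line : infoLog.split("\\r?\\n")) {
                    Matcher m = ERROR_LINE.matcher(line);
                    if (m.find()) {
                        String file = m.group(1);    // source string index
                        String lineNo = m.group(2);  // line number within that string
                        String message = m.group(3); // human-readable description
                        errors.add("line " + lineNo + " (string " + file + "): " + message);
                    }
                }
                return errors;
            }
        }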

    Read the article

  • Multi-threaded JOGL Problem

    - by moeabdol
    I'm writing a simple OpenGL application in Java that implements the Monte Carlo method for estimating the value of PI. The method is pretty easy. Simply, you draw a circle inside a unit square and then plot random points over the scene. Now, for each point that is inside the circle you increment the counter for "in" points. After determining for all the random points whether they are inside the circle or not, you divide the number of "in" points by the total number of points plotted, all multiplied by 4, to get an estimation of PI. It goes something like this: PI = (inPoints / totalPoints) * 4. This is because mathematically the ratio of the circle's area to the square's area is PI/4, so when we multiply it by 4 we get PI. My problem doesn't lie in the algorithm itself; however, I'm having problems trying to plot the points as they are being generated instead of plotting everything at once when the program finishes executing. I want to give the application a sense of real-time display where the user would see the points as they are being plotted. I'm a beginner at OpenGL and I'm pretty sure there is a multi-threading feature built into it. Nonetheless, I tried to manually create my own threads, where each worker thread plots one point at a time. Following is the pseudo-code:

        /* this part of the code exists in the display() method in MyCanvas.java,
           which extends GLCanvas and implements GLEventListener */
        // main loop
        for (int i = 0; i < number_of_points; i++) {
            RandomGenerator random = new RandomGenerator();
            float x = random.nextFloat();
            float y = random.nextFloat();
            Thread pointThread = new Thread(new PointThread(x, y));
        }
        gl.glFlush();

        /* this part of the code exists in the run() method in PointThread.java,
           which implements Runnable */
        void run() {
            try {
                gl.glPushMatrix();
                gl.glBegin(GL2.GL_POINTS);
                if (pointIsIn)
                    gl.glColor3f(1.0f, 0.0f, 0.0f); // red point
                else
                    gl.glColor3f(0.0f, 0.0f, 1.0f); // blue point
                gl.glVertex3f(x, y, 0.0f); // coordinates
                gl.glEnd();
                gl.glPopMatrix();
            } catch (Exception e) {
            }
        }

    I'm not sure if my approach to solving this issue is correct. I hope you guys can help me out. Thanks.
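    For reference, a small self-contained Java sketch of just the Monte Carlo estimate described above (no OpenGL, no threads): it samples points in the unit square and counts those falling inside the inscribed circle of radius 0.5 centered at (0.5, 0.5).

        import java.util.Random;

        class MonteCarloPi {
            public static void main(String[] args) {
                Random random = new Random();
                int totalPoints = 1_000_000;
                int inPoints = 0;

                for (int i = 0; i < totalPoints; i++) {
                    float x = random.nextFloat();
                    float y = random.nextFloat();
                    // Distance from the center (0.5, 0.5); inside the circle if <= 0.5.
                    float dx = x - 0.5f;
                    float dy = y - 0.5f;
                    if (dx * dx + dy * dy <= 0.25f) {
                        inPoints++;
                    }
                }

                double pi = (inPoints / (double) totalPoints) * 4.0;
                System.out.println("Estimated PI = " + pi);
            }
        }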

    Read the article

  • Dynamic Components

    - by Alex
    I am attempting to design a component-based architecture that allows Components to be dynamically enabled and disabled, much like the system employed by Unity3D. For example, all Components are implicitly enabled by default; however, if one desires to halt execution of code for a particular Component, one can disable it. Naively, I want to have a boolean flag in Component (which is an abstract class), and somehow serialize all method calls into strings, so that some sort of ComponentManager can check whether a given Component is enabled or disabled before processing a method call on it. However, this is a pretty bad solution. I feel like I should employ some variation of the state paradigm, but I have yet to make progress. Any help would be greatly appreciated.
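    As an illustration only (hypothetical names, not Unity3D's actual API), a minimal Java sketch of the usual alternative to serializing method calls: give Component an enabled flag and let the manager skip disabled components when it drives the per-frame update.

        import java.util.ArrayList;
        import java.util.List;

        // Abstract component with an enabled flag; disabled components are skipped, not called.
        abstract class Component {
            private boolean enabled = true;

            void setEnabled(boolean enabled) { this.enabled = enabled; }
            boolean isEnabled() { return enabled; }

            // Per-frame work goes here in subclasses.
            abstract void update(float deltaTime);
        }

        class ComponentManager {
            private final List<Component> components = new ArrayList<>();

            void add(Component c) { components.add(c); }

            // The manager is the only caller of update(), so the check lives in one place.
            void updateAll(float deltaTime) {
                for (Component c : components) {
                    if (c.isEnabled()) {
                        c.update(deltaTime);
                    }
                }
            }
        }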

    Read the article

  • How to manage enemy movement and shooting in a shmup?

    - by whatever
    I'm wondering what is the best (or at least a good) way of managing enemies in a shoot-em-up. Basically, what I'd do is write a class that manages displaying and updating the positions of all the enemies. But how do I create good movement patterns for enemies? A list of where-to-go points? Gravitating around some fixed points (with weighting, distance evaluation, etc.)? The same question applies to the shot patterns. Can you please put me on the right track?
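    Purely to illustrate the "list of where-to-go points" idea (hypothetical Java, not a specific engine's API), a sketch of an enemy that steps toward its current waypoint each frame and advances to the next one when close enough.

        import java.util.List;

        // Hypothetical enemy that follows a fixed list of waypoints.
        class WaypointEnemy {
            float x, y;
            float speed = 80f;          // units per second
            int target = 0;             // index of the current waypoint
            final List<float[]> path;   // each waypoint is {x, y}

            WaypointEnemy(float x, float y, List<float[]> path) {
                this.x = x;
                this.y = y;
                this.path = path;
            }

            void update(float deltaTime) {
                if (target >= path.size()) return;        // path finished
                float[] wp = path.get(target);
                float dx = wp[0] - x, dy = wp[1] - y;
                float dist = (float) Math.sqrt(dx * dx + dy * dy);
                if (dist < 2f) {                          // close enough: go to next point
                    target++;
                    return;
                }
                // Step toward the waypoint at constant speed.
                x += dx / dist * speed * deltaTime;
                y += dy / dist * speed * deltaTime;
            }
        }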

    Read the article

  • Draw "vision cone" / targetting element onto game world

    - by gkimsey
    I want to indicate various things using a "pie slice" sort of shape, similar to vision cones in stealth game minimaps, or targeting indicators in RTS-type games for frontal area attacks. Something generic enough to be used for both would be ideal. I need to be able to procedurally (and efficiently) change things like the slice width and length, color, transparency, position in the world, etc. For my particular situation, there's no concern with elevation, funky terrain, or really any third axis at all as far as this element is concerned. My first two inclinations on how to accomplish this are:
    1. Manually generate the vertices for a main triangle (possibly two, superimposed to get the border effect), plus a handful more to approximate the arc at the end, and roll it into a mesh.
    2. Use some sort of 2D drawing library to create a circle and mask it off at the right angles, render to texture, and use that.
    For reference, I have some experience with Ogre3D, but I'm not attached to it as this is a mostly academic pursuit at the moment. Other technologies that might be better at accomplishing this are more than welcome. Finally, I'm kind of curious about how to do a "flashlight" or similar 3D effect that could produce the same result, but on all surfaces in the lit area.
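    As a sketch of option 1 only (illustrative, engine-agnostic Java with hypothetical names): generate a triangle-fan approximation of the pie slice from one center vertex plus points along the arc between the two edge angles.

        import java.util.ArrayList;
        import java.util.List;

        class VisionConeMesh {
            // Builds 2D fan vertices for a slice centered on 'facing' (radians),
            // spanning 'halfAngle' to each side, with 'radius' length and 'segments' arc steps.
            static List<float[]> buildFan(float originX, float originY,
                                          float facing, float halfAngle,
                                          float radius, int segments) {
                List<float[]> vertices = new ArrayList<>();
                vertices.add(new float[] { originX, originY }); // fan center
                for (int i = 0; i <= segments; i++) {
                    float t = i / (float) segments;                        // 0..1 along the arc
                    float angle = facing - halfAngle + t * 2f * halfAngle; // sweep the slice
                    vertices.add(new float[] {
                        originX + (float) Math.cos(angle) * radius,
                        originY + (float) Math.sin(angle) * radius
                    });
                }
                // Rendered as a triangle fan: (center, v[i], v[i+1]) for consecutive arc points.
                return vertices;
            }
        }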

    Read the article

  • Grid collision - finding the location of an entity in each box

    - by Gregg1989
    I am trying to implement grid-based collision in a 2D game with moving circles. The canvas is 400x400 pixels. Below you can see the code for my Grid class. What I want it to do is check inside which box each entity is located and then run a collision check if there are 2 or more entities in the same box. Right now I do not know how to find the position of an entity in a specific box. I know there are many tutorials online, but I haven't been able to find an answer to my question, because they are either written in C/C++ or use the 2D-array approach. Code snippets and other help are greatly appreciated. Thanks.

        public class Grid {
            ArrayList<ArrayList<Entity>> boxes = new ArrayList<>();
            double boxSize = 40;
            double boxesAmount = 10;
            ...

            public void checkBoxLocation(ArrayList<Entity> entities) {
                for (int i = 0; i < entities.size(); i++) {
                    // Get top left coordinates of each entity
                    double entityLeft = entities.get(i).getLayoutX() - entities.get(i).getRadius();
                    double entityTop = entities.get(i).getLayoutY() + entities.get(i).getRadius();

                    // Divide coordinate by box size to find the approximate location of the entity
                    for (int j = 0; j < boxesAmount; j++) { // select each box
                        if ((entityLeft / boxSize <= j + 0.7) && (entityLeft / boxSize >= j)) {
                            if ((entityTop / boxSize <= j + 0.7) && (entityTop / boxSize >= j)) {
                                boxes.get(j).add(entities.get(i));
                                System.out.println("Entity " + entities.get(i) + " added to box " + j);
                            }
                        }
                    }
                }
            }
        }
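    For illustration (hypothetical names, not the asker's classes), a minimal Java sketch of the usual way to find an entity's cell: integer-divide the position by the cell size to get a column and row, then flatten them into a single index if the cells are kept in one flat list.

        class GridIndexExample {
            static final double BOX_SIZE = 40;   // cell size in pixels
            static final int COLUMNS = 10;       // 400 px canvas / 40 px cells

            // Returns the flat cell index for a point, clamped to the grid.
            static int cellIndex(double x, double y) {
                int col = (int) (x / BOX_SIZE);
                int row = (int) (y / BOX_SIZE);
                col = Math.max(0, Math.min(COLUMNS - 1, col));
                row = Math.max(0, Math.min(COLUMNS - 1, row));
                return row * COLUMNS + col;      // one index into a single list of cells
            }

            public static void main(String[] args) {
                // A circle centered at (95, 210) lands in column 2, row 5 -> index 52.
                System.out.println(cellIndex(95, 210));
            }
        }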

    Read the article

  • How to build list of items available in World of Warcraft?

    - by Cyclops
    There are a number of non-Blizzard sites that show a complete list of available items in World of Warcraft (such as wowhead, etc). I would like to know the best (easiest) way to compile a similar list. I believe some sites are based on user-entered input, which I would like to avoid. Looking at the lua API, it seems that there are functions to get a list of items, but it's not clear if I can just download everything (I remember a reference to throttling somewhere, can't find it now). Does anyone have code samples that would do this, or links to source? Also, Eve Online has made a database of items available (and I do mean SQL database file for download, not the Armory) - is there anything similar for Wow? I'm just looking for the names and stats, not the graphic icons.

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for getting the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video author uses the negated position vector AO as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a vector normal to this edge, lying in the plane that the edge and AO form. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the negated position vector of the newest simplex point would always make a reasonable direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (yet unfinished) implementation, I wanted to ask whether this method of forming the direction vector is usable or not.

    Read the article

  • How to avoid movement speed stacking when multiple keys are pressed?

    - by eren_tetik
    I've started a new game which requires no mouse, thus leaving movement up to the keyboard. I have tried to incorporate 8 directions: up, left, right, up-right and so on. However, when I press more than one arrow key, the movement speed stacks (http://gfycat.com/CircularBewitchedBarebirdbat). How could I counteract this? Here is the relevant part of my code:

        var speed : int = 5;

        function Update () {
            if (Input.GetKey(KeyCode.UpArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.RightArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.LeftArrow)) {
                transform.rotation = Quaternion.AngleAxis(315, Vector3.up);
            }
            if (Input.GetKey(KeyCode.DownArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
        }
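    The usual remedy is to combine all pressed keys into a single direction vector and normalize it before applying speed, so diagonal movement is no faster than straight movement. A minimal, engine-agnostic sketch of that idea (in Java; isPressed() is a hypothetical stand-in for the engine's input query).

        class MovementExample {
            // Placeholder for the engine's input query; assumed, not a real API.
            static boolean isPressed(String key) { return false; }

            // Returns the per-frame displacement {dx, dy} for a given speed and delta time.
            static float[] movementStep(float speed, float deltaTime) {
                float dx = 0, dy = 0;
                if (isPressed("up"))    dy += 1;
                if (isPressed("down"))  dy -= 1;
                if (isPressed("left"))  dx -= 1;
                if (isPressed("right")) dx += 1;

                float length = (float) Math.sqrt(dx * dx + dy * dy);
                if (length > 0) {           // normalize so diagonal input isn't ~1.41x faster
                    dx /= length;
                    dy /= length;
                }
                return new float[] { dx * speed * deltaTime, dy * speed * deltaTime };
            }
        }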

    Read the article

  • Doing ImageMagick-like stuff in Unity (using a mask to edit a texture)

    - by Codejoy
    There is this tutorial in ImageMagick: http://www.imagemagick.org/Usage/masking/#masks. I was wondering if there is some way to mimic that behavior (like cutting an image up based on a black image mask that turns parts of the image transparent) and then trim that image in game. I'm trying to hack around with the webcam feature and reproduce some of the ImageMagick/OpenCV stuff in Unity, but I am sadly unequipped with mask and shader skills/knowledge in Unity. I'm not even sure where to start.

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:
    1. When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank.
    2. Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-px-wide bars with 24-px-wide spaces between them. Instead, it produces a single white 1-px-wide bar.
    3. 0x55555555 is 01010101010101010101010101010101 in binary, therefore writing this value ought to create 1-px-wide vertical stripes with 1-px spacing. However, it creates a solid gray color.
    4. Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.
    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, supports this conclusion. Questions:
    1. Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2. Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)
    3. Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.

    Read the article

  • Sprite animation problem

    - by hustlerinc
    I have this sprite I have to animate. The sprite is 7 images total, but the animation is 10 frames (2 positions are repeated). The order I want to go through the frames is: 3 - 4 - 5 - 6 - 4 - 3 - 2 - 1 - 0 - 2. My problem is: how can I skip a frame once I reach the end of each direction? The reason I want to skip is to save me from creating duplicate positions in the spritesheet. I'm doing this in JavaScript.
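    One common approach is to store the desired frame order in an array and step an index through it, wrapping at the end; the array itself encodes the repeats and skips. A small illustrative sketch (shown in Java, though the idea carries over to JavaScript directly).

        class SpriteAnimation {
            // The playback order from the question: repeats and skips are baked into the array.
            static final int[] SEQUENCE = { 3, 4, 5, 6, 4, 3, 2, 1, 0, 2 };

            int step = 0; // position within SEQUENCE, not a sheet frame number

            // Call once per animation tick; returns which image of the 7-image sheet to draw.
            int nextFrame() {
                int frame = SEQUENCE[step];
                step = (step + 1) % SEQUENCE.length; // wrap around at the end
                return frame;
            }
        }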

    Read the article

  • How can I perform a masked erase in SDL2?

    - by Kvisle
    I'm trying to implement some shadow/lighting effects in my 2D-project, and I've concluded that if there is an easy way to perform a masked erase on an SDL_Texture, it would make the drawing operations quite cheap. Let's say I have a texture of the part of the level where light is not meant to be rendered. I also have a texture with my "light map"; I want to use this to just draw omni lights from my light sources. Then I want to use the first image to 'subtract' the portions of the light map that are not to be rendered on the final scene. Then I draw my "light map" texture on top of my scene, with additive blending enabled. This sounds like a good theory in my head, but I can't see any functions in the SDL2 API that let me do masked erase from a texture. Am I overlooking something? Does anything like this exist?

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures, they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore the input data as well. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the default bilinear filtering on these input data. I can't move to GL_NEAREST either, since it would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at their "natural" resolution (so that means 4 samples), and the relative position between the sampled points. Is that possible, and if yes, how? [EDIT] Since I started this question, I found this page, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me though: the dimensions of the texture must be provided in an argument. It seems there is no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome though, since it's not under my responsibility, and it translates into added complexity for others.

    Read the article

  • Game planning and software design? I feel that UML is not convenient

    - by user1542
    In my university, they always emphasize and hype UML design and related tools, which I feel is not going to work well with game structure design. Now, I just want some professional advice on how I should begin designing my game. The story is that I have some skill in programming and have done many minor games, such as getting a 2D platformer working to some extent. The problem I find with my programs is the poor quality of the design. After coding for a while, things start to break down due to poor planning (when I add a new feature, it tends to make me recode the whole program). However, planning everything out without a single design flaw is a bit too ideal. Therefore, any advice on how I should plan my game? How should I put it into visible pictures, so that my friends and I are able to get an overview of the design? I plan to start coding a game with my friend. This is going to be my first teamwork, so any professional advice would be a pleasure. Are there any alternatives to UML? Another question: what does "prototyping" normally look like?

    Read the article

  • Best way to create neon glow line effect in AS3?

    - by Phil
    What's the best way to create a neon glow line effect in Flash AS3? Say, similar to what's going on in the game Gravitron360 on the Xbox 360. Would it be a good idea to create movieclips with plain lines drawn in them and then apply a glow filter to them? Or perhaps just apply the glow filter to the entire movieclip layer the movieclips are on? Or just draw them manually and create a glow effect by converting the lines to fills and then softening the edges? (That wouldn't blend as well but would be the fastest CPU-wise.) Thanks for any help.

    Read the article

  • What are some ways of making manageable complex AI?

    - by Tetrad
    In the past I've used simple systems like finite state machines (FSMs) or hierarchical FSMs to control AI behavior. For any complex system, this pattern falls apart very quickly. I've heard about behavior trees and it seems like that's the next obvious step, but I haven't seen a working implementation or really tried going down that route yet. Are there any other patterns for making manageable yet complex AI behaviors?
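    For context, a very small Java sketch of the core behavior-tree idea mentioned above (hypothetical names, nowhere near a full implementation): nodes return a status, and composite nodes such as a sequence decide which child runs next based on those statuses.

        enum Status { SUCCESS, FAILURE, RUNNING }

        // Every behavior-tree node exposes a single tick() call.
        interface BehaviorNode {
            Status tick();
        }

        // A sequence runs its children in order and fails as soon as one child fails.
        class Sequence implements BehaviorNode {
            private final BehaviorNode[] children;

            Sequence(BehaviorNode... children) {
                this.children = children;
            }

            @Override
            public Status tick() {
                for (BehaviorNode child : children) {
                    Status s = child.tick();
                    if (s != Status.SUCCESS) {
                        return s; // FAILURE aborts the sequence, RUNNING suspends it
                    }
                }
                return Status.SUCCESS;
            }
        }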

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each tile contains an Item along with other properties like how it is rotated, whether it has a foreground/background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it does to the player, etc. For example, this would be an Item:

        public class Item
        {
            public static List<Item> Items;
            public Collision blockCollisionType;
            public string nameOfItem;
            public bool someOtherVariable, etc, etc;

            public static Item Air;
            public static Item Stone;
            public static Item Dirt;

            static Item()
            {
                Items = new List<Item>()
                {
                    (Stone = new Item()
                    {
                        nameOfItem = "Stone",
                        blockCollisionType = Collision.Solid,
                    }),
                    (Air = new Item()
                    {
                        nameOfItem = "Air",
                        blockCollisionType = Collision.Passable,
                    }),
                };
            }
        }

    The array of Tiles would contain a Tile for each point:

        public class Tile
        {
            public Item item; // what type it is
            public bool onBackground;
            public int someOtherVariables, etc, etc;
        }

    Now, most people would probably use an enum, or some form of ID, to identify blocks. Well, my system is really nice just for finding out about an item: I can simply do tiles[x,y].item.Name to get the name, for example. Then I realized my Item property of the tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (int or byte depending on how many items there are) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
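    As a sketch of the ID idea only (illustrative Java with hypothetical names, not the asker's XNA/C# code): each tile stores a small numeric ID, and a static lookup table maps that ID back to the shared item data, so the tile array stays tiny while item properties remain one lookup away.

        class ItemType {
            static final ItemType[] REGISTRY = new ItemType[256]; // index = item ID

            final String name;
            final boolean solid;

            ItemType(int id, String name, boolean solid) {
                this.name = name;
                this.solid = solid;
                REGISTRY[id] = this;
            }

            static final ItemType AIR   = new ItemType(0, "Air", false);
            static final ItemType STONE = new ItemType(1, "Stone", true);
        }

        class TileMapExample {
            // The map stores only one byte per tile instead of a full Item reference.
            static byte[][] tiles = new byte[100][100];

            static String nameAt(int x, int y) {
                return ItemType.REGISTRY[tiles[x][y] & 0xFF].name; // ID -> shared item data
            }

            public static void main(String[] args) {
                tiles[3][5] = 1; // stone
                System.out.println(nameAt(3, 5)); // prints "Stone"
            }
        }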

    Read the article
