Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 437/1016

  • What can make a peaceful game successful?

    - by Miro
    Today, the most successful games are action games: FPS, RPG, MMORPG and so on. I'd like to make a peaceful game, but I don't know how to attract people. I can make good graphics, but that's not the main thing that keeps people interested in a game for more than a couple of minutes; the content is what matters. In the genres mentioned above, the main content is fighting, killing other players, and making yourself the predator - the most powerful creature or player in the game. But what content can attract people to a peaceful game?

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk to ensure that any modifications made to the level are preserved. I store "chunks" of 2048x2048 pixels in a file. Whenever the player moves over a section that doesn't have a file associated with its position, a new file is created. This works great and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would store/seek/update this data efficiently in a single file instead of multiple files.
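    One way to keep a single region file is to treat it as a collection of fixed-size chunk slots plus an index that maps chunk coordinates to file offsets. The sketch below is a hedged C# illustration of that idea, not a drop-in solution: the names are made up, the index lives only in memory (a real version would persist it in a file header or a sidecar file), and chunks are stored uncompressed so every slot has the same size and can be overwritten in place.

        using System;
        using System.Collections.Generic;
        using System.IO;

        public class RegionFile : IDisposable
        {
            // Raw bytes per chunk (assumption: fixed-size, uncompressed 2048x2048 RGBA chunks).
            private const int ChunkSize = 2048 * 2048 * 4;

            private readonly FileStream stream;
            private readonly Dictionary<(int x, int y), long> index =
                new Dictionary<(int x, int y), long>();

            public RegionFile(string path)
            {
                stream = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
            }

            public bool TryReadChunk(int cx, int cy, byte[] buffer)
            {
                if (!index.TryGetValue((cx, cy), out long offset))
                    return false;                       // this chunk has not been generated yet
                stream.Seek(offset, SeekOrigin.Begin);
                stream.Read(buffer, 0, ChunkSize);
                return true;
            }

            public void WriteChunk(int cx, int cy, byte[] buffer)
            {
                // Overwrite the existing slot if the chunk was saved before,
                // otherwise append a new slot at the end of the file.
                if (!index.TryGetValue((cx, cy), out long offset))
                {
                    offset = stream.Length;
                    index[(cx, cy)] = offset;
                }
                stream.Seek(offset, SeekOrigin.Begin);
                stream.Write(buffer, 0, ChunkSize);
            }

            public void Dispose() => stream.Dispose();
        }

    Seeking to an offset inside one large file costs roughly the same as opening a small file, and the operating system's page cache does the rest, so the performance profile should stay close to the current per-file approach.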

    Read the article

  • Loading and drawing materials using Lib3ds

    - by Dfowj
    Hey all, I'm currently using Lib3ds to load models into my C++/OpenGL project. So far, I've been following the model-loading tutorial found here. The tutorial gives a good example of how to draw the vertices and normals using VBOs, but so far I've been at a loss as to how to do the same thing with materials. Could I get an explanation/example of how to both load and draw the materials of my meshes using Lib3ds and OpenGL?

    Read the article

  • What are the reasons for MMOs to have level caps [on hold]

    - by SamStephens
    In many MMOs, character progression is artificially capped, e.g. at level 60, 90 or 100. Why do MMOs have these level caps in the first place? Why not just allow characters to continue to arbitrary levels, with a mathematically designed levelling system that keeps the levelling experience interesting and endless? Answers to this question may help us see the reason behind the feature and decide if and how it should be implemented in our MMOs.

    Read the article

  • XNA, how do I draw two cubes standing side by side in a line?

    - by user3535716
    I've just run into a problem drawing two 3D cubes standing in a line. In my code I made a cube class, and in the Game1 class I build two cubes: A on the right side and B on the left side. I also set up an FPS camera in the 3D world. The problem is that if I draw cube B (blue) first and then move the camera to the left side, towards cube B, cube A (red) still appears in front of B, which is obviously wrong. (Some pictures would make this clearer.) When I move the camera to the other side the situation is the same: from that view, the red cube A should be behind the blue one, B. Could somebody help me, please? This is the Draw method in the Cube class:

        Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
        Matrix scale = Matrix.CreateScale(0.5f);
        Matrix translate = Matrix.CreateTranslation(location);
        effect.World = center * scale * translate;
        effect.View = camera.View;
        effect.Projection = camera.Projection;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(cubeBuffer);

            RasterizerState rs = new RasterizerState();
            rs.CullMode = CullMode.None;
            rs.FillMode = FillMode.Solid;
            device.RasterizerState = rs;

            device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeBuffer.VertexCount / 3);
        }

    This is the Draw method in Game1:

        A.Draw(camera, effect);
        B.Draw(camera, effect);
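    A hedged guess at the cause, since the symptom is the classic one: if the depth buffer is missing or disabled, whichever cube is drawn last always appears in front. In XNA 4.0 this happens easily when SpriteBatch has been used (it silently changes DepthStencilState) or when the back buffer was created without a depth format (graphics.PreferredDepthStencilFormat). A minimal sketch of the game's Draw method with the relevant states restored, using A, B, camera and effect from the question:

        protected override void Draw(GameTime gameTime)
        {
            // Clear colour and depth together, and restore 3D-friendly states
            // in case something (e.g. SpriteBatch) disabled depth testing earlier.
            GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer,
                                 Color.CornflowerBlue, 1.0f, 0);
            GraphicsDevice.DepthStencilState = DepthStencilState.Default;
            GraphicsDevice.BlendState = BlendState.Opaque;

            A.Draw(camera, effect);   // draw order should no longer matter
            B.Draw(camera, effect);

            base.Draw(gameTime);
        }

    As a side note, creating a new RasterizerState inside the cube's Draw on every pass allocates garbage each frame; creating it once and reusing it is cheaper.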

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to the sprite's movement (to smooth things out), the collision no longer works. I am assuming this is because the collision routine is acting on the new coordinate produced by the extrapolation routine (which sits in my render call). How can I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled it out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing with it rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with it. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code:

        public void onDrawFrame(GL10 gl) {
            // Reset the loop counter to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(extrapolation);
        }

    Applying the extrapolation:

        render(float extrapolation) {
            // This example shows extrapolation for the X axis only; the Y position (spriteScreenY) is assumed to be valid.
            extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
            spriteScreenPosX = extrapolatedPosX * screenWidth;
            drawSprite(spriteScreenPosX, spriteScreenY);
        }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with; this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth, but when it stops it wobbles slightly left/right, as it is still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, then when it hits a wall and the user keeps holding the right button down, the sprite keeps animating to the right while being stopped by the wall (so it isn't actually moving). But because the right flag is still set, and because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing. Hope this helps.
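    One commonly suggested alternative, shown here only as a hedged, engine-agnostic C# sketch with made-up names (the question's code is Java): instead of extrapolating past the newest logic position, keep the previous and current fixed-step positions and interpolate between them at render time. Rendering lags one tick behind, but collision only ever sees the fixed-step values, so a sprite pressed against a wall stops wobbling, because its previous and current positions become equal.

        // Engine-agnostic sketch (hypothetical names): render by blending between
        // the previous and current fixed-step positions instead of extrapolating
        // past the newest one.
        class SpriteMotion
        {
            float previousX;
            float currentX;
            public float VelocityX;

            // Called from the fixed-step game logic, where collision is resolved.
            public void UpdateLogic(float dt)
            {
                previousX = currentX;
                currentX += VelocityX * dt;
                // ... collision response may clamp currentX here ...
            }

            // alpha in [0, 1]: fraction of the next tick that has elapsed at render time.
            public float RenderX(float alpha)
            {
                return previousX + (currentX - previousX) * alpha;
            }
        }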

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points. They actually represent the separation of land and water on a map. I don't have complete polygons, just the ordered sets of points. I'm looking for a neat but efficient way to visually convey side A and side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a glow-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there or a visual effect I haven't considered.
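    The single-offset "ribbon" idea, sketched in C# purely as an illustration (assumption: the polyline is wound so that side A lies to the left of the direction of travel). Each point is pushed along the averaged perpendicular of its two neighbouring segments; pairing original and offset points then gives a one-sided strip that a fragment shader can fade toward its outer edge - roughly double the points, generated once on the CPU.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class RibbonBuilder
        {
            // For every polyline point, average the directions of the two adjacent
            // segments and offset along the left-hand perpendicular.
            public static List<Vector2> OffsetSide(IList<Vector2> line, float width)
            {
                var offsets = new List<Vector2>(line.Count);
                for (int i = 0; i < line.Count; i++)
                {
                    Vector2 dirIn  = i > 0              ? Vector2.Normalize(line[i] - line[i - 1]) : Vector2.Zero;
                    Vector2 dirOut = i < line.Count - 1 ? Vector2.Normalize(line[i + 1] - line[i]) : Vector2.Zero;
                    Vector2 sum = dirIn + dirOut;
                    Vector2 dir = sum == Vector2.Zero ? dirIn : Vector2.Normalize(sum);
                    Vector2 normal = new Vector2(-dir.Y, dir.X);   // perpendicular pointing toward side A
                    offsets.Add(line[i] + normal * width);
                }
                return offsets;
            }
        }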

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3D model into my scene using the Assimp model loader. The problem is that the child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTransformation matrices are identity matrices - why would that be? I'm using this code to render the model:

        void recursive_render(const struct aiScene *sc, const struct aiNode *nd, float scale)
        {
            unsigned int i;
            unsigned int n = 0, t;
            aiMatrix4x4 m = nd->mTransformation;
            m.Scaling(aiVector3D(scale, scale, scale), m); // update transform
            m.Transpose();
            glPushMatrix();
            glMultMatrixf((float *)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n)
            {
                const struct aiMesh *mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->HasBones()) {
                    printf("model has bones");
                    abort();
                }

                if (mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }

                if (mesh->mColors[0] != NULL) {
                    glEnable(GL_COLOR_MATERIAL);
                } else {
                    glDisable(GL_COLOR_MATERIAL);
                }

                for (t = 0; t < mesh->mNumFaces; ++t)
                {
                    const struct aiFace *face = &mesh->mFaces[t];
                    GLenum face_mode;

                    switch (face->mNumIndices)
                    {
                        case 1:  face_mode = GL_POINTS;    break;
                        case 2:  face_mode = GL_LINES;     break;
                        case 3:  face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON;   break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++) // go through all vertices in the face
                    {
                        int vertexIndex = face->mIndices[i]; // get group index for current index
                        if (mesh->mColors[0] != NULL)
                            Color4f(&mesh->mColors[0][vertexIndex]);
                        if (mesh->mNormals != NULL)
                            if (mesh->HasTextureCoords(0)) // HasTextureCoords(texture_coordinates_set)
                            {
                                glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x,
                                             1 - mesh->mTextureCoords[0][vertexIndex].y); // mTextureCoords[channel][vertex]
                            }
                        glNormal3fv(&mesh->mNormals[vertexIndex].x);
                        glVertex3fv(&mesh->mVertices[vertexIndex].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n)
            {
                recursive_render(sc, nd->mChildren[n], scale);
            }

            glPopMatrix();
        }

    What's the problem in my code? I've also added some code to abort the program if there is any bone in the meshes, but the program doesn't abort, which means there are no bones - is that normal?

        if (mesh->HasBones()) {
            printf("model has bones");
            abort();
        }

    Note: I am using OpenGL, SFML and Assimp.

    Read the article

  • How can I solve the same problems a CB-architecture is trying to solve without using hacks? [on hold]

    - by Jefffrey
    A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found that CBS is often hard to accomplish without using ugly hacks; implementations of this system are often far from clean. But I don't want to discuss that any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples - there is plenty of abstract talk about the "perfect" design already.) Here's an example I was going for before realising I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human-specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie-specific components
        };

    After writing that, I realised I needed an interface, otherwise I would have needed N containers for N different types of objects (or boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world cannot even find out where a Human is positioned (it has no access to its position member). That would be useful for tracking the player position for collision detection, or if, on_update, a Zombie wanted to track down its nearest human to move towards. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realised that this functionality was shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).
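    For contrast, here is a minimal composition sketch - not the asker's design, and written in C# rather than the question's C++ purely for brevity. Position is itself a component, so outside code asks the entity for it instead of the base class accumulating getters, and the entity never grows any domain behaviour of its own.

        using System;
        using System.Collections.Generic;

        class Entity
        {
            private readonly Dictionary<Type, object> components = new Dictionary<Type, object>();

            public T Add<T>(T component) { components[typeof(T)] = component; return component; }

            public T Get<T>() where T : class
            {
                components.TryGetValue(typeof(T), out var c);
                return c as T;     // null if the entity does not carry this component
            }
        }

        class Position { public float X, Y; }
        class Sprite   { public string TextureName; }

        static class Example
        {
            static Entity MakeZombie()
            {
                var zombie = new Entity();
                zombie.Add(new Position { X = 10, Y = 5 });
                zombie.Add(new Sprite   { TextureName = "zombie.png" });
                return zombie;
            }

            static void Main()
            {
                var z = MakeZombie();
                var pos = z.Get<Position>();   // collision/AI code reads this directly
                Console.WriteLine($"{pos.X}, {pos.Y}");
            }
        }

    Systems (movement, rendering, collision) then iterate over the entities that carry the components they care about, which is where the CBS-style update loops come back in without a god base class.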

    Read the article

  • Physics-based dynamic audio generation in games

    - by alexc
    I wonder if it is possible to generate audio dynamically without any (!) audio assets, using pure mathematics/physics and some input values like material properties and the spatial distribution of content in scene space. What I have in mind is something like a scene with a concrete floor, a wooden table and a glass on it. Now let's assume a force pushes the glass towards the edge of the table, and the glass falls onto the floor and shatters. Near-realistic glass destruction itself would be possible using voxels and a good physics engine, but what about the sound the glass makes while shattering? I believe there is a way to generate that sound, because the physics of sound is fairly well understood these days, but how computationally costly would that be - consumer hardware or supercomputers? Do any of you know some good resources/videos of such an experiment?

    Read the article

  • Markup format or script for data files?

    - by Aaron
    The game I'm designing will be written mainly in a high-level scripting language (leaning towards either Lua or Squirrel) with a C++ core. In addition to scripts, I'm also going to need various data files. Many data files will hold static information such as graphical assets and monster types. I'd also want to create and update data files at runtime for user information like option settings and game saves. Can I get away with using plain script files (i.e. .lua or .nut files) for my data files, or is it better to use dedicated markup formats like XML or YAML? If I use script files, loaded separately from my true scripts, then I wouldn't need an extra library to read those files. Scripting languages like Lua also have table syntax that lends itself to data definition. On the other hand, I'd have to write my own schema-checking code, and these languages don't seem to support serialisation "out of the box" the way the markup-format libraries do.

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be fairly simple to do most things. But since this editor is meant for third parties, I have no clue how big the worlds people will want to make, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player's position. I know the solution is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data; that way a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I hoped to get answered for some hints on the topic (see also the sketch after this list):

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all the blocks in a simple array?
    3. A region being a different data structure from its owning "game world": when streaming a region, would you deliver its objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene-graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children and replicate, in each layer node, one node per region? Or the opposite: the parent node owns all possible regions, and each region has multiple layers as children? Or would you keep the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I mean it under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to think beyond that? I think it is OK to stick to this limit, since it will never be reached easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras which the streaming system consults to know what to load/unload. The problem here is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question, any answers are helpful!
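    A hedged C# sketch for Q.1/Q.2 (assuming even, fixed-size square regions; all names are hypothetical): when regions are keyed by their integer grid coordinates in a hash map, there is no explicit neighbour bookkeeping - any watcher position (or teleport destination) maps straight to a key, and loading becomes "ask for the key, stream it in if absent".

        using System;
        using System.Collections.Generic;

        class Region { /* tiles, objects, last-visited time ... */ }

        class World
        {
            const float RegionSize = 512f;   // world units per region side (assumption)

            readonly Dictionary<(int x, int y), Region> loaded =
                new Dictionary<(int x, int y), Region>();

            static (int x, int y) KeyFor(float worldX, float worldY)
                => ((int)Math.Floor(worldX / RegionSize), (int)Math.Floor(worldY / RegionSize));

            // Called for every watcher camera position and for teleport destinations.
            public Region Require(float worldX, float worldY)
            {
                var key = KeyFor(worldX, worldY);
                if (!loaded.TryGetValue(key, out var region))
                {
                    region = LoadOrGenerate(key);   // stream from disk or create fresh
                    loaded[key] = region;
                }
                return region;
            }

            Region LoadOrGenerate((int x, int y) key) => new Region();
        }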

    Read the article

  • Point[] and Tri: "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, and I see that almost everyone uses List<Tri> and Point[]. The code examples show these highlighted as if the IDE knows what they are, for example:

        List<Tri> tris = new List<Tri>();

    but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add or a using I am missing? Currently I have this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Drawing;
        using OpenTK;
        using OpenTK.Graphics;
        using OpenTK.Graphics.OpenGL;
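    Tri is not a .NET or OpenTK type - the samples that use List<Tri> define their own small triangle struct (and Point[] usually refers to System.Drawing.Point or a similar home-made type). A minimal sketch of such a definition; the field names are assumptions and should be matched to whatever the tutorial's loader fills in:

        using System.Collections.Generic;
        using OpenTK;                     // for Vector3

        public struct Tri
        {
            public Vector3 P1, P2, P3;    // the three corners of the triangle

            public Tri(Vector3 p1, Vector3 p2, Vector3 p3)
            {
                P1 = p1; P2 = p2; P3 = p3;
            }
        }

        // With the type defined, the line from the samples compiles:
        // List<Tri> tris = new List<Tri>();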

    Read the article

  • Sprite animation system height recalculation has some issues

    - by Nicolas Martel
    Basically, the way it works is that it updates the frame to show every, let's say, 24 ticks, and every time the frame updates it recalculates the height and width of the new sprite to render so that my gravity logic and so on works correctly. The problem I am having now is a bit hard to explain in words only, so I will use this picture to assist me: [picture]. So what I need, basically, is that if I froze the sprite at the first frame, then unfroze it and froze it at the second frame, the second frame's sprite (let's say it's a prone move) would simply stand on the foothold without gravity kicking in, and when switching back, the first sprite would return to the foothold as normal without ending up under it. I had two ideas for doing this, but I'm not sure they're the most efficient ways, so I want to hear your input.

    Read the article

  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread; big thanks to him too! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector. I'm developing my first tile-based 2D game with JavaScript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he just keeps moving in the x-direction. The same goes for [up] and [right], as well as for downward-diagonal movement; it works as intended (http://i.stack.imgur.com/aiZjI.png - diagonal movement works fine for horizontal walls, for both left and right movement). However, this does not work for vertical walls. Instead of keeping the movement in the y-direction, the character just stops as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls (http://i.stack.imgur.com/XBEKR.png - my character stops because he thinks he is colliding vertically with tiles from the wall on the right). This only occurs when moving in the top-right direction towards the right wall, or in the top-left direction towards the left wall. Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended. Is this inherent in the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle, but it's actually a rectangular polygon for the collision detection. Thank you very much for your help.
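    For readers hitting the same issue, the fix described above can be sketched like this (C#, hypothetical tile-map API): an MTV that points toward another solid tile comes from a shared, interior face, so it is discarded; only MTVs pointing into empty space are applied, which makes a run of tiles behave like one solid wall.

        using System;
        using System.Numerics;

        static class TileCollision
        {
            // tileX/tileY: the tile the character overlaps; mtv: the SAT minimum
            // translation vector pushing the character out; isSolid: map lookup.
            public static bool ShouldApplyMtv(int tileX, int tileY, Vector2 mtv,
                                              Func<int, int, bool> isSolid)
            {
                int nx = tileX + Math.Sign(mtv.X);
                int ny = tileY + Math.Sign(mtv.Y);
                // If the neighbour the MTV points toward is also solid, the contact
                // face is interior and cannot be a real collision face.
                return !isSolid(nx, ny);
            }
        }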

    Read the article

  • OpenGL vs DirectX?

    - by Harold
    I saw the articles a while back arguing that OpenGL is better than DirectX, and that Microsoft is really just trying to get everyone to use DirectX even though it's inferior, so that gaming is almost exclusively for Windows and Xbox. But since that article was written in 2006, is it still relevant today? Also, I know plenty of games are written in DirectX, but does anyone have any examples of popular games written in OpenGL? Thanks

    Read the article

  • Units issue when exporting from 3DS Max to XNA

    - by miguelSantirso
    I am working on an XNA game where we have defined that 1 XNA unit equals 1 metre. I then set metres as the system unit in 3DS Max and set the units to metres in the FBX exporter. However, when I export my models, they are much bigger in the game. Am I missing something? What should I do to avoid problems with my units? Investigating the FBX file, I noticed that it has two values called UnitScaleFactor and OriginalUnitScaleFactor. They are both 100 when I export the files... and if I manually change UnitScaleFactor to 1, it works fine :S

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula

        gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle, and I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a colour and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is a lot of redundancy: all vertices of the same particle share the same colour value, and the four texture coordinates are the same for every particle. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference, since each time I have to update all the particles' data anyway? And about the particle simulation: where should it be done, on the CPU or in a vertex processor? Something tells me the mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for a mobile device, would be appreciated. (The animation of the camera passing through the particles should look realistic.)
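    A hedged, engine-agnostic C# sketch of the billboard fallback (hypothetical types; the quad is indexed here as two triangles rather than a strip): four vertices per particle, expanded along the camera's world-space right and up axes so the quad always faces the viewer, with a static index buffer built once. The per-vertex colour duplication mentioned above is usually accepted, since submitting the whole batch in a single draw call outweighs the extra bytes.

        using System.Numerics;

        static class ParticleBillboards
        {
            struct ParticleVertex
            {
                public Vector3 Position;
                public Vector2 UV;
                public uint    Color;   // packed RGBA, duplicated per vertex
            }

            // Expand one particle into a camera-facing quad. camRight/camUp are the
            // camera's world-space right and up axes.
            static void FillQuad(ParticleVertex[] dst, int particleIndex,
                                 Vector3 center, float halfSize, uint color,
                                 Vector3 camRight, Vector3 camUp)
            {
                int v = particleIndex * 4;
                Vector3 r = camRight * halfSize;
                Vector3 u = camUp * halfSize;
                dst[v + 0] = new ParticleVertex { Position = center - r - u, UV = new Vector2(0, 0), Color = color };
                dst[v + 1] = new ParticleVertex { Position = center + r - u, UV = new Vector2(1, 0), Color = color };
                dst[v + 2] = new ParticleVertex { Position = center - r + u, UV = new Vector2(0, 1), Color = color };
                dst[v + 3] = new ParticleVertex { Position = center + r + u, UV = new Vector2(1, 1), Color = color };
            }

            // The index buffer is static and built once:
            // per particle i the two triangles are (4i, 4i+1, 4i+2) and (4i+2, 4i+1, 4i+3).
        }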

    Read the article

  • Why do we use Pythagoras in game physics?

    - by Starkers
    I've recently learned that we use Pythagoras a lot in our physics calculations, and I'm afraid I don't really get the point. Here's an example from a book, meant to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane:

        MAXIMUM_VELOCITY = <any number>;
        SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY;

        function animate(){
            var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);
            if( squared_horizontal_velocity <= SQUARED_MAXIMUM_VELOCITY ){
                scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
                x_velocity = x_velocity / scalar;
                z_velocity = x_velocity / scalar;
            }
        }

    Let's try this with some numbers. An object is attempting to move 5 units in x and 5 units in z; it should only be able to move 5 units horizontally in total!

        MAXIMUM_VELOCITY = 5;
        SQUARED_MAXIMUM_VELOCITY = 5 * 5;
        SQUARED_MAXIMUM_VELOCITY = 25;

        function animate(){
            var x_velocity = 5;
            var z_velocity = 5;
            var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);
            var squared_horizontal_velocity = 5 * 5 + 5 * 5;
            var squared_horizontal_velocity = 25 + 25;
            var squared_horizontal_velocity = 50;

            // if( squared_horizontal_velocity <= SQUARED_MAXIMUM_VELOCITY ){
            if( 50 <= 25 ){
                scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
                scalar = 50 / 25;
                scalar = 2.0;

                x_velocity = x_velocity / scalar;
                x_velocity = 5 / 2.0;
                x_velocity = 2.5;

                z_velocity = z_velocity / scalar;
                z_velocity = 5 / 2.0;
                z_velocity = 2.5;

                // new_horizontal_velocity = x_velocity + z_velocity
                // new_horizontal_velocity = 2.5 + 2.5
                // new_horizontal_velocity = 5
            }
        }

    Now this works well, but we can do the same thing without Pythagoras:

        MAXIMUM_VELOCITY = 5;

        function animate(){
            var x_velocity = 5;
            var z_velocity = 5;
            var horizontal_velocity = x_velocity + z_velocity;
            var horizontal_velocity = 5 + 5;
            var horizontal_velocity = 10;

            // if( horizontal_velocity >= MAXIMUM_VELOCITY ){
            if( 10 >= 5 ){
                scalar = horizontal_velocity / MAXIMUM_VELOCITY;
                scalar = 10 / 5;
                scalar = 2.0;

                x_velocity = x_velocity / scalar;
                x_velocity = 5 / 2.0;
                x_velocity = 2.5;

                z_velocity = z_velocity / scalar;
                z_velocity = 5 / 2.0;
                z_velocity = 2.5;

                // new_horizontal_velocity = x_velocity + z_velocity
                // new_horizontal_velocity = 2.5 + 2.5
                // new_horizontal_velocity = 5
            }
        }

    Benefits of doing it without Pythagoras: fewer lines; within those lines, it's easier to read what's going on; and it takes less time to compute, since there are fewer multiplications. It seems to me like computers and humans get a better deal without Pythagoras! However, I'm sure I'm wrong, as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain the benefit of using Pythagoras to a maths newbie. Does this have anything to do with unit vectors? To me, a unit vector is when we normalise a vector and turn it into a fraction. We do this by dividing the vector by a larger constant - I'm not sure what constant it is; the total size of the graph? Anyway, because it's a fraction, I take it a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, the z-axis running from -1 to 1 and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this?

        // A mathematical work-around of my own invention. There may be a cleverer way to do this!
        // I've also made up my own terms such as 'divisive_scalar', so don't bother googling.
        var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY);
        var divisive_scalar = ( 50 / 25 );
        var divisive_scalar = 2;

        var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar));
        var multiplicative_scalar = (2 / (2*2));
        var multiplicative_scalar = (2 / 4);
        var multiplicative_scalar = 0.5;

        x_velocity = x_velocity * multiplicative_scalar
        x_velocity = 5 * 0.5
        x_velocity = 2.5

    Again, I can't see why this is better, but it's more "unit-vector-y", because the multiplicative_scalar is a unit vector? As you can see, I use words such as "unit-vector-y", so I'm really not a maths whiz! Also, I'm aware that unit vectors might have nothing to do with Pythagoras, so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful, so as many as humanly possible, please!
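    For what it's worth, a short C# sketch of the point in question: the speed of a velocity (x, z) is its Euclidean length sqrt(x*x + z*z) - that is Pythagoras - not x + z. The sum gives 10 for (5, 5) even though the object only covers about 7.07 units per step, and both worked examples above end with x_velocity = z_velocity = 2.5, whose actual speed is only about 3.54, well below the cap of 5 (as transcribed, the book snippet's <= also looks as if it should be >, otherwise the clamp never runs). Scaling by MaxSpeed / actualSpeed keeps the direction and pins the speed exactly at the cap; comparing squared values first just postpones the sqrt until a clamp is really needed.

        using System;

        static class SpeedClamp
        {
            const float MaxSpeed = 5f;

            // Clamp (x, z) so its Euclidean length never exceeds MaxSpeed,
            // without changing its direction.
            public static (float x, float z) Clamp(float x, float z)
            {
                float squaredSpeed = x * x + z * z;            // (5, 5) -> 50
                if (squaredSpeed > MaxSpeed * MaxSpeed)        // 50 > 25: too fast
                {
                    float speed = (float)Math.Sqrt(squaredSpeed); // ~7.07, the real speed
                    float scale = MaxSpeed / speed;               // ~0.707
                    x *= scale;                                   // ~3.54
                    z *= scale;                                   // ~3.54 -> new speed is exactly 5
                }
                return (x, z);
            }
        }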

    Read the article

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I'm currently struggling with creating the database structure for my RPG. So far I have these tables:

        area (id)
        monster (id, area.id, monster.id, hp, attack, defense, name)
        item (id, some other values)
        loot (id = monster.id, item = item.id, chance)
        spawn (id = area.id, monster = monster.id, count)

    It is a browser-based game like, e.g., Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area.id and using the spawn-table data. If a player kills a monster, the system takes the monster.id, looks up the items via the loot table and adds those items to the player's inventory. First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding the player.id to the monster table, so each monster spawned has its own "player", but I would still need to assign them to an area, and I think it would overload the monster table to put both the player.id and the area.id into it. What are your thoughts?

    Temporary solution:

        monster (id, attackDamage, defense, hp, exp, etc.)
        monster_instance (id, player.id, area_instance.id, hp, attackDamage, defense, monster.id, etc.)
        area (id, name, area.id access, restriction)
        area_instance (id, area.id, last_visited)
        spawn (id, area.id, monster.id)
        loot (id, monster.id, chance, amount, ?area.id?)

    An example system flow would be:

    1. Player enters area 1: the system creates an area_instance of type area.id = 1 and sets player.location to area.id = 1.
    2. If the player wants to battle monsters in the current area: the system fetches all spawn entries matching area.id == player.location and creates a new monster_instance for each spawn by fetching the corresponding monster base data from the monster table. If a monster is fetched more than once, it may be cached.
    3. If the player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after the loot has been created.
    4. If the player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to that area within a certain amount of time, the area_instance, including all its monster_instances, is deleted.

    Read the article

  • Remove gravity from single body

    - by Siddharth
    I have multiple bodies in my game world in AndEngine. All of the bodies are affected by gravity, but I want one specific body not to be affected by it. After some research I found that I should use the body.setGravityScale(0) method for this, but in my AndEngine extension I can't find that method, so please provide guidance on how to get access to it. Any other guidance on the problem above is also welcome. Thank you! I currently apply the following code to reverse gravity:

        final Vector2 vec = new Vector2(0, -SensorManager.GRAVITY_EARTH * bulletBody.getMass());
        bulletBody.applyForce(vec, bulletBody.getWorldCenter());

    Read the article

  • Efficient collision detection - tile-based HTML5/JavaScript game

    - by Tom Burman
    I'm building a basic RPG and am on to collisions/pickups etc. now. It's tile based, and I'm using HTML5 and JavaScript. I use a 2D array to create my tile map. I'm currently using a switch statement on whatever key has been pressed to move the player. Inside the switch statement I have if statements to stop the player going off the edge of the map and viewport, and also so that if the player is about to land on a tile with tile ID 3, the player stops. Here is the statement:

        canvas.addEventListener('keydown', function(e) {
            console.log(e);
            var key = null;
            switch (e.which) {
                case 37: // Left
                    if (playerX > 0) {
                        playerX--;
                    }
                    if (board[playerX][playerY] == 3) {
                        playerX++;
                    }
                    break;
                case 38: // Up
                    if (playerY > 0) playerY--;
                    if (board[playerX][playerY] == 3) {
                        playerY++;
                    }
                    break;
                case 39: // Right
                    if (playerX < worldWidth) {
                        playerX++;
                    }
                    if (board[playerX][playerY] == 3) {
                        playerX--;
                    }
                    break;
                case 40: // Down
                    if (playerY < worldHeight) playerY++;
                    if (board[playerX][playerY] == 3) {
                        playerY--;
                    }
                    break;
            }
            viewX = playerX - Math.floor(0.5 * viewWidth);
            if (viewX < 0) viewX = 0;
            if (viewX + viewWidth > worldWidth) viewX = worldWidth - viewWidth;
            viewY = playerY - Math.floor(0.5 * viewHeight);
            if (viewY < 0) viewY = 0;
            if (viewY + viewHeight > worldHeight) viewY = worldHeight - viewHeight;
        }, false);

    My question is: is there a more efficient way of handling collisions than loads of if statements for each key? The reason I ask is that I plan on having many items that the player will need to be able to pick up or not walk through, like walls and cliffs. Thanks for your time and help, Tom
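    A C# sketch of one common refactor (all names are hypothetical, and the same shape translates directly to the JavaScript above): move the bounds-and-solid-tile test into one helper so each key handler becomes a single call, and a new blocking tile type (wall, cliff, water) is just another ID in a set rather than another pair of if blocks.

        using System.Collections.Generic;

        class TileMovement
        {
            // Hypothetical fields mirroring the question's variables.
            int[,] board;
            int worldWidth, worldHeight;
            int playerX, playerY;

            // Tile IDs the player may not enter; add walls, cliffs, water here.
            static readonly HashSet<int> SolidTiles = new HashSet<int> { 3 };

            bool IsWalkable(int x, int y)
            {
                return x >= 0 && y >= 0 && x < worldWidth && y < worldHeight
                    && !SolidTiles.Contains(board[x, y]);
            }

            // Each key handler collapses to one call:
            // left -> TryMove(-1, 0), right -> TryMove(1, 0), up -> TryMove(0, -1), down -> TryMove(0, 1).
            void TryMove(int dx, int dy)
            {
                if (IsWalkable(playerX + dx, playerY + dy))
                {
                    playerX += dx;
                    playerY += dy;
                }
            }
        }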

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget with D3DFMT_A8R8G8B8) to which I copy pixels (ARGB) from a third-party function. Before the pixel copy, the bits are accessed with LockRect. This surface is then copied to the back buffer (also D3DFMT_A8R8G8B8) with StretchRect. The surface and back buffer have different dimensions, and filtering is set to D3DTEXF_NONE. Just after creating the D3D device, I set the following render states:

        D3DRS_ALPHABLENDENABLE -> TRUE
        D3DRS_BLENDOP -> D3DBLENDOP_ADD
        D3DRS_SRCBLEND -> D3DBLEND_SRCALPHA
        D3DRS_DESTBLEND -> D3DBLEND_INVSRCALPHA

    But I see no alpha blending happening, and I've verified that alpha is specified in the pixels. I've done a simple test by creating a vertex buffer and drawing a triangle (DrawPrimitive), which does display with alpha blending. In this test the surface was StretchRect'd first and then DrawPrimitive was called: the surface content displays without alpha blending and the triangle displays with alpha blending. What am I missing here? Thanks

    Read the article

  • Using a particular font in a commercial game

    - by RCIX
    I'm working on a game I intend to sell, and I want to use this font. The license says: "You may NOT copy or distribute the font outside of the licensed household, company, school or institution. Please ask external contacts who want to use the font to purchase their own license at www.CheapProFonts.com." However, my plan is to use a tool to output a texture from this font to use as a bitmap font in my game. Does this mean I can do so and sell my game with the font in it?

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist on this topic, yet none has gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG onto the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed, as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs. I am now at the point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So, in a new script called playerPhysics.cs, I added the following:

        void OnCollisionEnter2D(Collision2D coll) {
            if (coll.gameObject.name == "bomb")
                GameObject.Destroy("bomb");
            healthScript.health -= 1;
        }

    While I'm fairly sure I don't know the proper way to reference a variable in another script, and that's why the health didn't decrease when I collided, bomb never disappeared from the stage either, so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around, it appeared that player also needed a rigidbody attached to it, so I did that. Still no luck. I tried using a CircleCollider (player is a circle), using a Rigidbody2D, and using all manner of colliders on one and/or both of the objects. If you could please explain which colliders (if any) should be attached to which objects, and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working, that would be an added bonus, but it's not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!
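    A hedged sketch of a setup matching what is described above (class names adjusted to be legal C# identifiers; treat them as stand-ins for playerPhysics.cs and healthScript.cs): both sprites need a 2D collider (e.g. CircleCollider2D on the player, BoxCollider2D or CircleCollider2D on the bomb), and the player additionally needs a Rigidbody2D - keep it Dynamic but set its Gravity Scale to 0, since no gravity is wanted. Also, Destroy wants the collided GameObject reference, not the string "bomb".

        using UnityEngine;

        // Attached to the player object.
        public class PlayerPhysics : MonoBehaviour
        {
            void OnCollisionEnter2D(Collision2D coll)
            {
                if (coll.gameObject.name == "bomb")
                {
                    Destroy(coll.gameObject);                    // remove this particular bomb
                    GetComponent<HealthScript>().health -= 1;    // health component sits on the same player object
                }
            }
        }

        // In practice this lives in its own file (HealthScript.cs), also attached to the player.
        public class HealthScript : MonoBehaviour
        {
            public int health = 3;
        }

    If the player is moved by writing transform.position directly rather than through the Rigidbody2D, physics contacts can be missed; in that case, marking the bomb's collider as a trigger and using OnTriggerEnter2D instead is a common alternative.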

    Read the article
