Search Results

Search found 26093 results on 1044 pages for 'career development'.

Page 553/1044

  • How can I keep track of a battle log in a web game?

    - by Jay W
    Recently I started working on a web turn-based PvP RPG game. Now I'm working on the battle system, but I've run into some issues: how can I keep track of everything that happens in the battle? It should keep track of the characters on the field, inventory, the damage done, etc. I first thought I would simply put it in the (MySQL) database, but I think that would be too much, especially if several people are in a battle. I also thought of putting this in sessions or cookies, but I don't think that's reliable. Does anyone have an idea of how I can do this?
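
    One possible approach, sketched below with hypothetical names: keep the live battle state in server-side memory (or a shared cache such as Redis or memcached) keyed by a battle id, append every action to an in-memory log, and write only the finished log back to MySQL when the battle ends.

      # Sketch only: in-memory battle state keyed by battle id (all names are hypothetical).
      # With several web workers, this dict would live in Redis/memcached instead.
      battles = {}

      def start_battle(battle_id, players):
          battles[battle_id] = {"players": players, "log": [], "turn": 0}

      def apply_action(battle_id, actor, action, damage=0):
          battle = battles[battle_id]
          battle["turn"] += 1
          battle["log"].append({"turn": battle["turn"], "actor": actor,
                                "action": action, "damage": damage})

      def end_battle(battle_id, db_save):
          # Persist the finished log to the database, then drop the hot state.
          db_save(battle_id, battles.pop(battle_id)["log"])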

    Read the article

  • How to load data for specific level at runtime?

    - by Siddharth
    I'm trying to create a game with many levels loaded from XML files. In my game I have many objects in each level. At present my game contains 20 levels, and I load all the textures at once on startup. But I think the correct way to do it is to only load textures used in the current level. I don't know how to do that. So please explain this by providing some example code. At present I create a class for each type of entity by extending my Sprite class. This subclass loads the appropriate image. I know this is not the best way to do things. Basically I want to know how to load large levels efficiently in Andengine. What is the proper method for loading textures, level data and background images from files when the level is run?
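
    For illustration only (the XML layout and the load_texture/unload_texture callbacks below are hypothetical, and the sketch is in Python rather than AndEngine's Java API): list the textures each level needs inside that level's XML file, load only those when the level starts, and unload them when it ends.

      import xml.etree.ElementTree as ET

      # Hypothetical level file: <level><texture file="goblin.png"/> ... <object .../></level>
      def load_level(path, load_texture):
          root = ET.parse(path).getroot()
          textures = {}
          for tex in root.findall("texture"):        # only the textures this level lists
              name = tex.get("file")
              textures[name] = load_texture(name)    # engine-specific loader passed in
          objects = [obj.attrib for obj in root.findall("object")]
          return textures, objects

      def unload_level(textures, unload_texture):
          for tex in textures.values():
              unload_texture(tex)                    # free memory before the next level loads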

    Read the article

  • Collision representation in game overworlds

    - by Akroy
    I'm implementing a 2D overworld where one can walk through an area that is not tile based. I was wondering about the best way to implement collisions. In the past when I've done similar things, I've used one image (or set of images) to show an elaborately drawn world and then a second binary image that does nothing but differentiate "wall" and "not wall". Then I'd use the first for all drawing to the screen, but the second for collision detection. Having another image of the same size to represent collisions seems like a lot of overhead. Is there a better way to handle this? (I'm currently using C++ with SDL, although I'm more interested in general concepts.)
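
    One common alternative, sketched here in Python rather than C++/SDL (the 8-pixel cell size is an arbitrary choice): store walkability in a coarse 2D array of booleans, one entry per small cell, which costs far less memory than a second full-resolution image and is just as easy to query.

      CELL = 8  # pixels per collision cell; a coarser grid means far less data than a full image

      class CollisionGrid:
          def __init__(self, width_px, height_px):
              self.cols = (width_px + CELL - 1) // CELL
              self.rows = (height_px + CELL - 1) // CELL
              self.solid = [[False] * self.cols for _ in range(self.rows)]

          def set_wall(self, x_px, y_px, solid=True):
              self.solid[y_px // CELL][x_px // CELL] = solid

          def is_wall(self, x_px, y_px):
              return self.solid[y_px // CELL][x_px // CELL]

      # Build the grid once from the map data, then query it each frame
      # instead of sampling a second "wall / not wall" image.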

    Read the article

  • Why are circles not created if they are small?

    - by Suzan Cioc
    I have changed the scale to my own, and now I can't create any object, including a circle, if it is of a size that is normal for my scale. I have to create a big object first and then modify it to a smaller size. It looks like a minimum-size protection is set somewhere. Where? UPDATE: While creating a circle, if I drag for 0.04 m, the circle disappears after the drag ends. If I drag for 0.08 m, the circle also disappears. If I drag for more than 0.1 m, the circle persists after the drag ends. How can I set it so that it persists for 0.01 m too?

    Read the article

  • Server costs and back loading for mobile devices

    - by user23844
    A company approached me to design an MMO for the mobile platform, and I have the perfect idea for them. My question is: how much would a server cost for an FTP game that has both a PvE element and PvP? Also, do you think it would be better, or is it even possible, to back-load the data onto the phones (I'm trying to come up with some interesting way to back up the data in case of emergency)? I don't want the game to be totally online reliant (I want to appeal not only to phone users but also to iPod touch users), and I want there to be an offline mode. If you can't tell, this is my first game besides simple projects I've done on the side. Any help would be greatly appreciated.

    Read the article

  • Check if a vector is behind another, or maybe pointing in the opposite direction?

    - by Gilson
    I'm making a networked game, and on the client side I interpolate the client position towards the extrapolated position sent by the server. The client has its own physics simulation, which is corrected by the server in steps. The problem is that when it lags and I 'kick' the ball, the server gets a delayed message and sends me a position behind the client's position, which makes the ball go back and forth. I want to ignore those corrections and maybe compensate for them on the server, though I'm not sure. The problem is that the clock difference in those cases is 0.07 ms or 0.10 ms, which isn't high enough to justify ignoring the message, I guess. When I get the server position, I extrapolate with the clock interval * serverBallVelocity. Can I check whether my new server ball position is behind my actual ball position vector? I tried using the dot product after normalizing the two vectors to check whether they are opposite, but it isn't working properly. Any suggestions on checking that?
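
    For reference, a sketch of the usual test in plain Python (vectors are just tuples here): compare the offset from the ball's current position to the server position against the ball's velocity; a negative dot product means the correction points backwards relative to the direction of travel. Normalizing the two position vectors themselves compares their directions from the origin rather than from each other, which is probably why the earlier attempt misbehaved.

      def dot(a, b):
          return a[0] * b[0] + a[1] * b[1]

      def is_behind(ball_pos, server_pos, ball_velocity):
          # True if the server's corrected position lies behind the ball
          # relative to the direction the ball is currently moving.
          offset = (server_pos[0] - ball_pos[0], server_pos[1] - ball_pos[1])
          return dot(offset, ball_velocity) < 0.0

      # Ball moving along +x, server correction pulls it back along -x:
      print(is_behind((10.0, 0.0), (9.5, 0.0), (1.0, 0.0)))  # True -> candidate to ignore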

    Read the article

  • Entity communication: Message queue vs Publish/Subscribe vs Signal/Slots

    - by deft_code
    How do game engine entities communicate? Two use cases: How would entity_A send a take-damage message to entity_B? How would entity_A query entity_B's HP? Here's what I've encountered so far: Message queue: entity_A creates a take-damage message and posts it to entity_B's message queue. entity_A creates a query-hp message and posts it to entity_B. entity_B in return creates a response-hp message and posts it to entity_A. Publish/Subscribe: entity_B subscribes to take-damage messages (possibly with some preemptive filtering so only relevant messages are delivered). entity_A produces a take-damage message that references entity_B. entity_A subscribes to update-hp messages (possibly filtered). Every frame entity_B broadcasts update-hp messages. Signal/Slots: ??? entity_A connects an update-hp slot to entity_B's update-hp signal. Something better? Do I have a correct understanding of how these communication schemes would tie into a game engine's entity system? How do entities in commercial game engines communicate?
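
    As a point of reference, a minimal publish/subscribe bus might look like the sketch below (hypothetical names, Python rather than any particular engine): entities register handlers for a message type, and publishers never need a direct reference to the receiver.

      from collections import defaultdict

      class MessageBus:
          def __init__(self):
              self.subscribers = defaultdict(list)    # message type -> list of handlers

          def subscribe(self, msg_type, handler):
              self.subscribers[msg_type].append(handler)

          def publish(self, msg_type, **payload):
              for handler in self.subscribers[msg_type]:
                  handler(**payload)

      bus = MessageBus()

      class Entity:
          def __init__(self, name, hp):
              self.name, self.hp = name, hp
              bus.subscribe("take-damage", self.on_damage)

          def on_damage(self, target, amount):
              if target is self:                      # receiver-side filtering
                  self.hp -= amount

      entity_b = Entity("B", 100)
      bus.publish("take-damage", target=entity_b, amount=25)
      print(entity_b.hp)  # 75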

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I've finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it, but I had two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper C++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now, the first time I press a key, the rectangle's upper-left vertex is moved to (-0.5, -0.5). This is because of GLfloat blockX = vVerts[0]; // Upper left X and GLfloat blockY = vVerts[7]; // Upper left Y. But I also think this is the reason why my rectangle is shifted in the beginning. After the first key press everything works just fine. Here is my complete code; I hope you can help me on those two points.

      GLBatch squareBatch;
      GLShaderManager shaderManager;

      // Load up a quad
      GLfloat vVerts[] = {-0.5f, 0.5f, 0.0f,
                           0.5f, 0.5f, 0.0f,
                           0.5f,-0.5f, 0.0f,
                          -0.5f,-0.5f, 0.0f};

      // Window has changed size, or has just been created.
      // We need to use the window dimensions to set the viewport and the projection matrix.
      void ChangeSize(int w, int h)
      {
          glViewport(0, 0, w, h);
      }

      // Called to draw the scene.
      void RenderScene(void)
      {
          // Clear the window with the current clearing color
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

          GLfloat vRed[] = {1.0f, 0.0f, 0.0f, 1.0f};
          shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
          squareBatch.Draw();

          // Perform the buffer swap to display the back buffer
          glutSwapBuffers();
      }

      // This function does any needed initialization on the rendering context.
      // This is the first opportunity to do any OpenGL-related tasks.
      void SetupRC()
      {
          // Blue background
          glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
          shaderManager.InitializeStockShaders();
          squareBatch.Begin(GL_QUADS, 4);
          squareBatch.CopyVertexData3f(vVerts);
          squareBatch.End();
      }

      // Respond to arrow keys by moving the camera frame of reference
      void SpecialKeys(int key, int x, int y)
      {
          GLfloat stepSize = 0.025f;
          GLfloat blockSize = 0.5f;
          GLfloat blockX = vVerts[0]; // Upper left X
          GLfloat blockY = vVerts[7]; // Upper left Y

          if(key == GLUT_KEY_UP)    { blockY += stepSize; }
          if(key == GLUT_KEY_DOWN)  { blockY -= stepSize; }
          if(key == GLUT_KEY_LEFT)  { blockX -= stepSize; }
          if(key == GLUT_KEY_RIGHT) { blockX += stepSize; }

          // Recalculate vertex positions
          vVerts[0] = blockX;
          vVerts[1] = blockY - blockSize * 2;
          vVerts[3] = blockX + blockSize * 2;
          vVerts[4] = blockY - blockSize * 2;
          vVerts[6] = blockX + blockSize * 2;
          vVerts[7] = blockY;
          vVerts[9] = blockX;
          vVerts[10] = blockY;

          squareBatch.CopyVertexData3f(vVerts);
          glutPostRedisplay();
      }

      // Main entry point for GLUT-based programs
      int main(int argc, char** argv)
      {
          // Sets the working directory. Not really needed
          gltSetWorkingDirectory(argv[0]);

          // Passes along the command-line parameters and initializes the GLUT library.
          glutInit(&argc, argv);

          // Tells the GLUT library what type of display mode to use when creating the window:
          // double-buffered window, RGBA color mode, depth buffer and stencil buffer available.
          glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);

          // Window size
          glutInitWindowSize(800, 600);
          glutCreateWindow("MoveRect");
          glutReshapeFunc(ChangeSize);
          glutDisplayFunc(RenderScene);
          glutSpecialFunc(SpecialKeys);

          // Initialize the GLEW library
          GLenum err = glewInit();

          // Check that nothing went wrong with driver initialization before we try to render.
          if(GLEW_OK != err)
          {
              fprintf(stderr, "Glew Error: %s\n", glewGetErrorString(err));
              return 1;
          }

          SetupRC();
          glutMainLoop();
          return 0;
      }

    Read the article

  • OGRE 3D: How to create a very basic game world [on hold]

    - by skiwi
    I'm considering trying to create an FPS (first-person shooter) using the Ogre 3D engine. I have done the basic tutorials (except CEGUI) and have read through the intermediate tutorial. I understand some of the more advanced concepts, but I'm stuck on very simple ones. First of all: I would want to use some tiles (square ones, with relatively little height) as the floor. I guess I need to set up a loop to place those tiles, but how would I go about creating the tiles themselves? Like making each one its own mesh, and then I would need to find some texture. Secondly: I guess I can derive the camera and movement functions from the basic tutorial, but I'll be needing a "soldier" (anything will do for now). What is the best way to create a moderately decent-looking soldier? (Or obtain a decent one from an open library?) And thirdly: How can I ensure that the soldier is actually walking on the ground, instead of in mid-air? Will raycasting into the ground and adjusting the position based on that suffice?

    Read the article

  • Restrict movement within a radius

    - by Phil
    I asked a similar question recently, but now I think I know more about what I really want to know. I can answer my own question if I get to understand this bit. I have a situation where a sprite's center point needs to be constrained within a certain boundary in 2D space. The boundary is circular, so the sprite is constrained within a radius. This radius is defined as a distance from the center of a certain point. I know the position of the center point, and I can track the center position of the sprite. This is the code to detect the distance: float distance = Vector2.Distance(centerPosition, spritePosition); if (distance > allowedDistance) { } The positions can be anywhere on the grid; they are not normalized to between -1 and 1. So basically the detection code works; it only fires when the sprite is outside of its boundary. I just don't know what to do when it oversteps. Please explain any math used, as I really want to understand what you're thinking so I can elaborate on it myself.
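
    The usual fix, sketched here in plain Python (the XNA version is the same arithmetic with Vector2): when the distance exceeds the allowed radius, move the sprite back along the direction from the center, scaled so it sits exactly on the circle.

      import math

      def clamp_to_radius(center, pos, allowed_distance):
          dx, dy = pos[0] - center[0], pos[1] - center[1]
          distance = math.hypot(dx, dy)
          if distance > allowed_distance:
              scale = allowed_distance / distance      # shrink the offset back onto the circle
              return (center[0] + dx * scale, center[1] + dy * scale)
          return pos

      # Center (0, 0), radius 5: a point at distance 10 is pulled back to distance 5.
      print(clamp_to_radius((0.0, 0.0), (10.0, 0.0), 5.0))  # (5.0, 0.0)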

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model that they supplied, but it throws an exception whenever I use one of my models: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing. I tried plugging my model into their original source code, and the same thing happens. My model is an FBX from Blender and has a texture. This is the function that throws the error: GraphicsDevice.DrawInstancedPrimitives( PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount, instances.Length );

    Read the article

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL. However, it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars; how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post process? (Considering the output will be 8 bpp anyway.) Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers/articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
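
    As a rough illustration of the last step only (a Python sketch on plain numbers; a real pipeline renders the scene into a floating-point render target, optionally extracts and blurs the bright areas for bloom, and then applies an operator like this per pixel in a shader): a tone-mapping operator such as Reinhard's squeezes an unbounded HDR value into [0, 1) before the 8-bit output.

      def reinhard(hdr, exposure=1.0):
          # Map an HDR luminance value (which can be far above 1.0) into [0, 1).
          v = hdr * exposure
          return v / (1.0 + v)

      for luminance in (0.1, 1.0, 4.0, 50.0):
          ldr = reinhard(luminance)
          print(f"{luminance:6.1f} -> {ldr:.3f} -> {round(ldr * 255)} / 255")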

    Read the article

  • Several classes need to access the same data, where should the data be declared?

    - by Juicy
    I have a basic 2D tower defense game in C++. Each map is a separate class which inherits from GameState. The map delegates the logic and drawing code to each object in the game and sets data such as the map path. In pseudo-code the logic section might look something like this: update(): for each creep in creeps: creep.update() for each tower in towers: tower.update() for each missile in missiles: missile.update() The objects (creeps, towers and missiles) are stored in vectors of pointers. The towers must have access to the vector of creeps and the vector of missiles to create new missiles and identify targets. The question is: where do I declare the vectors? Should they be members of the Map class and passed as arguments to the tower.update() function? Or declared globally? Or are there other solutions I'm missing entirely?
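
    For illustration, here is what the "members of the Map, passed as arguments" option looks like as a Python sketch (the classes are hypothetical stand-ins; the same shape works with C++ references): the map owns the containers and hands each tower exactly the collections it needs, so nothing has to be global.

      class Tower:
          def update(self, creeps, missiles):
              # Pick a target from the shared creep list and spawn into the shared missile list.
              if creeps:
                  missiles.append(("missile", creeps[0]))   # placeholder for a real Missile object

      class Map:
          def __init__(self):
              self.creeps, self.towers, self.missiles = [], [], []

          def update(self):
              for creep in self.creeps:
                  creep.update()
              for tower in self.towers:
                  tower.update(self.creeps, self.missiles)  # pass references, not copies
              for missile in self.missiles:
                  missile.update()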

    Read the article

  • How can I extract a list of Minecraft items and recipes?

    - by Sean
    I'm designing a robust system for resolving item dependencies in Minecraft and to do so, I need to maintain a database of items and recipes. Right now, this database has to be hand-crafted (no pun intended); I would like to know if it is possible to somehow query the Minecraft jars (or perhaps more realistically, grep through them) to extract this data automatically. How can this be done? The project is currently in Python, but it can still be ported to Java without much fuss at this stage. (For the curious.)
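
    One small, safe starting point (a sketch; where the recipe data actually lives depends on the Minecraft version, since older versions hard-code recipes in Java classes rather than shipping them as data files): a jar is just a zip archive, so Python's zipfile module can list and extract its entries directly.

      import zipfile

      def list_jar(path, suffix=""):
          # Minecraft jars are plain zip archives; list entries, optionally filtered by suffix.
          with zipfile.ZipFile(path) as jar:
              return [name for name in jar.namelist() if name.endswith(suffix)]

      # Example usage (the path is hypothetical):
      # print(list_jar("minecraft.jar", suffix=".class")[:10])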

    Read the article

  • Windows Phone XAML and XNA Apps with Game Components

    - by row1
    I am using the Windows Phone Template "Windows Phone XAML and XNA Apps" and targeting Windows Phone 7/8. Most examples show your game inheriting from Microsoft.Xna.Framework.Game and then adding Microsoft.Xna.Framework.GameComponent items to the Components collection. But as my game page inherits from PhoneApplicationPage there isn't a Components collection or a Game property. How can I use GameComponent from within PhoneApplicationPage?

    Read the article

  • Model format for small game

    - by DeadMG
    I'm writing my own small-time game from scratch, and now I'm looking to start creating models. I've been wondering: what is the best model format to use, given that I will be writing the model-loading code myself and using whatever program generates them? Ideally, I'd look for a format that has fairly wide support across modelling programs, so I can pick the one I like most to actually do the modelling, and the format itself should be relatively simple to load rather than having all of the latest features.
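
    For what it's worth, Wavefront OBJ is one format that fits the "widely exported, simple to load" description; a minimal loader for triangulated geometry (positions and faces only, ignoring normals, UVs and materials) is only a handful of lines, as this Python sketch shows.

      def load_obj(path):
          # Minimal Wavefront OBJ loader: vertex positions and triangle faces only.
          vertices, faces = [], []
          with open(path) as f:
              for line in f:
                  parts = line.split()
                  if not parts:
                      continue
                  if parts[0] == "v":                  # "v x y z"
                      vertices.append(tuple(float(p) for p in parts[1:4]))
                  elif parts[0] == "f":                # "f v1 v2 v3" (indices may be "v/vt/vn")
                      faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))  # OBJ is 1-based
          return vertices, faces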

    Read the article

  • Drawing application on OpenGL for iOS (iPad)

    - by Alesia
    Some help is needed. I'm developing a drawing application in OpenGL (deployment target 4.0) for iOS (iPad). We have 3 drawing tools: pen, marker (with alpha) and eraser. I draw with textures, using blending in an orthographic projection. I can't use z-ordering because in that case I would have to face a lot of trouble with cutting and erasing. The thing that I need is to make the pen always be on top. When I first use the marker and then the pen, it's OK. But if I use the pen first and the marker over the pen, I can't see the pen color under the marker. I'd appreciate any help or advice. Thank you veeeeeery much!

    Read the article

  • How do I implement powerups for my Breakout clone?

    - by Eva
    I'm making a simple Breakout clone in Python that will have very many powerups/powerdowns (so far I came up with 26). Some will affect the paddle (paddle missile, two paddles, short paddle, etc.), some will affect the ball (slow ball, destructo-ball, invisible ball, etc.), some will affect the bricks (brick scramble, move up, bricks indestructible, etc.), and some will affect other game aspects (extra life, more points, less points, etc.). I'm pretty sure I have the code to draw the falling powerups and test for collisions with the paddle down, but I'm confused about how to code the effects of the powerups. Since there are very many powerups, it seemed inefficient to add specific methods to each component as done in this tutorial. However, I can't think of an other ways to implement the powerups. I found a page that hints at some way to design powerup behavior using classes, but I'm at a loss for how to do that. (A short example would help.) Please give me a short code example of another way to implement the effects of the powerups.
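
    Since a short example is requested, here is one common shape for this, sketched with hypothetical game attributes: each powerup is a tiny class with apply/expire hooks, so the paddle, ball and brick code never needs to know about individual powerups, and adding the 27th one means adding one small class.

      class Powerup:
          duration = 10.0                     # seconds; 0 means instant / permanent

          def apply(self, game): ...
          def expire(self, game): ...

      class SlowBall(Powerup):
          def apply(self, game):
              game.ball_speed *= 0.5

          def expire(self, game):
              game.ball_speed *= 2.0

      class ExtraLife(Powerup):
          duration = 0.0

          def apply(self, game):
              game.lives += 1

      # When a falling powerup hits the paddle:
      #     powerup.apply(game)
      #     if powerup.duration: game.active_powerups.append([powerup, powerup.duration])
      # Each frame, count the durations down and call expire() when one reaches zero.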

    Read the article

  • How to reduce the time it takes to load my web game? [closed]

    - by Danial
    I created a puzzle game with Unity and uploaded it to one server. This works fine, but I bought a new server and uploaded my game to it as well. There, the loading time is much longer. These are the servers: http://pinheadsinteractive.com/Mozzie/ (fast) http://operation-mozzie-free.com/ (slow) The Unity files are exactly the same from one server to the next. My client is dissatisfied with the new, slow loading time. So, how can I reduce the time my Unity game takes to load? In some cases they even faced the problem that they could not load the game at all. For the moment, I'm using an iframe on the new server as a workaround, but the issue still remains unsolved.

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that real-time frame-elapsed values (which should average around 16-17 ms) are provided by a lot of frameworks: GetTimeElapsedSinceLastFrame, and it gives you the wall-clock time. But should we use this information in a basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time? (Besides stopwatch timing for racing games.) I don't see a need to use real time and not a fixed 16 ms time step.
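
    For reference, the usual compromise is the fixed-timestep-with-accumulator loop sketched below (plain Python, framework-agnostic): physics always advances by the same virtual step, while real elapsed time only decides how many of those steps to take, so a hiccup produces a few extra fixed steps instead of one huge variable one.

      import time

      FIXED_DT = 1.0 / 60.0      # the "virtual second" slice: physics always sees 16.67 ms
      MAX_FRAME = 0.25           # clamp huge wall-clock gaps (virus scanner, debugger, ...)

      def run(update, render):
          previous = time.perf_counter()
          accumulator = 0.0
          while True:
              now = time.perf_counter()
              accumulator += min(now - previous, MAX_FRAME)
              previous = now
              while accumulator >= FIXED_DT:
                  update(FIXED_DT)          # fixed virtual step, independent of the wall clock
                  accumulator -= FIXED_DT
              render()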

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is: how do I render a point A(Xp, Yp) given the following details? The offset of the camera POV from the origin of the map is Xc, Yc, meaning the camera center is positioned on top of that point. If there's a point at Xc, Yc, it is positioned in the center of the screen. The rotation angle is alpha, and the scale is S. Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the AS3 Bible book that, in regards to ShaderInput, you can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images. Meaning, if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
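
    Setting the shader question aside, the per-point version of the transform looks like this sketch (plain Python; the rotation sign and the screen-centre offset depend on your coordinate conventions): translate the world point by the camera offset, rotate by -alpha, scale by S, then shift to the middle of the viewport.

      import math

      def world_to_screen(xp, yp, xc, yc, alpha, s, screen_w, screen_h):
          # 1) express the point relative to the camera
          dx, dy = xp - xc, yp - yc
          # 2) rotate by -alpha so the camera's "up" becomes the screen's "up"
          cos_a, sin_a = math.cos(-alpha), math.sin(-alpha)
          rx = dx * cos_a - dy * sin_a
          ry = dx * sin_a + dy * cos_a
          # 3) zoom, then put the camera centre in the middle of the screen
          return rx * s + screen_w / 2.0, ry * s + screen_h / 2.0

      # A point exactly at the camera position always lands at the screen centre:
      print(world_to_screen(100, 50, 100, 50, 0.7, 2.0, 800, 600))  # (400.0, 300.0)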

    Read the article

  • How to design characters and game backgrounds? Which software to use?

    - by TicTech
    Yesterday I downloaded 4 GB of videos and ZBrush for character design, but now I realize that I want to make 2D characters and backgrounds for my Android game, so that was a waste for me. I don't want to waste more bandwidth and energy on the wrong tutorials, so I'm here for advice. My questions are: 1. Which software should I use to draw and color characters and to make backgrounds? 2. Is there any video tutorial or ebook for that software for drawing characters? I know the basics of Photoshop, which I learned from Lynda. Please help me; I am experienced with the Android SDK, so that's not a problem for me, but I don't know anything about designing. Any help will be really appreciated. Thanks.

    Read the article

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time. I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models. However, I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier solution to 3D animation than rigging? What I'm basically looking for is some sort of tool that allows me to simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's sorely out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

    Read the article

  • Cube rotation DX10

    - by German
    Well, I'm reading Frank Luna's DirectX 10 book and, while I'm trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code whose purpose I don't fully understand: // Convert Spherical to Cartesian coordinates: mPhi measured from +y // and mTheta measured counterclockwise from -z. float x = 5.0f*sinf(mPhi)*sinf(mTheta); float z = -5.0f*sinf(mPhi)*cosf(mTheta); float y = 5.0f*cosf(mPhi); I mean, the comment explains what they do; it says that they convert spherical coordinates to Cartesian coordinates, but, mathematically, why? Why is the x value calculated from the product of the sines of both angles? And z from the product of a sine and a cosine? And why does y just use the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps me understand the math. Thanks in advance!
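
    For what it's worth, the geometry behind those lines is the standard spherical-to-Cartesian conversion: with mPhi measured from the +y axis, the camera sits at height y = r*cos(phi), and its projection onto the xz-plane has length r*sin(phi); mTheta, measured counterclockwise from -z, then splits that projected length into x = r*sin(phi)*sin(theta) and z = -r*sin(phi)*cos(theta). The 5.0 is simply the orbit radius r. A quick numeric check in Python:

      import math

      def spherical_to_cartesian(r, phi, theta):
          # phi measured from +y, theta measured counterclockwise from -z (as in the demo)
          x = r * math.sin(phi) * math.sin(theta)
          z = -r * math.sin(phi) * math.cos(theta)
          y = r * math.cos(phi)
          return x, y, z

      x, y, z = spherical_to_cartesian(5.0, math.radians(60), math.radians(30))
      print(x, y, z)
      print(math.sqrt(x*x + y*y + z*z))   # always 5.0: the camera stays on a sphere of radius r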

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn over the surface and keeps moving, you will need to redraw the background in order to "erase" the color pixels left by the moving object; otherwise, you will have "traces" of the moving object. I have a moving object in my PyGame game. However, I do not want to "clear the color buffer" by redrawing the whole background image; redrawing the background image every frame is slow. My solution: I will "clear" only the required portions of the "buffer" (where the "traces" of the moving object are left) by redrawing portions of the background. Is there any other, better way to have a consistent background?
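
    That is essentially the classic "dirty rectangles" technique, and Pygame supports it directly; a sketch of the per-frame repair step: keep the full background on its own Surface, blit back only the small region the object occupied last frame, draw the object at its new position, and update just those two rectangles.

      import pygame

      def redraw_moving_object(screen, background, sprite_img, old_rect, new_rect):
          # Repair only the patch of background the object covered last frame...
          screen.blit(background, old_rect, old_rect)
          # ...draw the object at its new position...
          screen.blit(sprite_img, new_rect)
          # ...and push just those two small regions to the display.
          pygame.display.update([old_rect, new_rect])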

    Read the article
