Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
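    One common answer (a minimal sketch with hypothetical resource names): treat the requirement as catalog data owned by the trading side, kept in one table that TradingPost consults, rather than as a property of each Resource subclass or values scattered through code:

        # single source of truth; could just as easily be loaded from a config file
        MIN_TECH_LEVEL = {
            "iron":   1,
            "fuel":   2,
            "plasma": 4,
        }

        class TradingPost:
            def __init__(self, base):
                self.base = base

            def offered_resources(self):
                # the trading post decides what it offers by consulting the catalog
                return [name for name, level in MIN_TECH_LEVEL.items()
                        if self.base.tech_level >= level]

        class Base:
            def __init__(self, tech_level):
                self.tech_level = tech_level
                self.trading_post = TradingPost(self)

    The mapping lives in exactly one place, and swapping the literal dict for a data file later does not change any of the classes.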

  • A*: how to make a natural-looking path?

    - by user11177
    I've been reading this: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html
    But there are some things I don't understand. For example, the article says to use something like this for pathfinding with diagonal movement:

        function heuristic(node) =
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

    I don't know how to set D to get a natural-looking path like in the article. I set D to the lowest cost between adjacent squares, as it said, and I don't understand what they meant about the heuristic being 4*D; that does not seem to change anything. This is my heuristic function and move function:

        def heuristic(self, node, goal):
            D = 10
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

        def move_cost(self, current, node):
            cross = abs(current.x - node.x) == 1 and abs(current.y - node.y) == 1
            return 19 if cross else 10

    Result: [screenshot]
    The smooth sailing path we want to happen: [screenshot]
    The rest of my code: http://pastebin.com/TL2cEkeX
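    A common adjustment here (a sketch, not the poster's code) is to cost diagonals consistently in both the heuristic and move_cost -- with straight steps at 10, a diagonal is usually about 14 (10 * sqrt(2)) -- and to add a small tie-breaker so A* prefers straighter-looking paths among the many equal-cost ones:

        # Octile-distance heuristic with a tiny tie-breaker (assumes straight
        # cost D=10 and diagonal cost D2=14, matching a 10/14 move_cost).
        def octile_heuristic(node, goal, D=10, D2=14):
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            h = D * max(dx, dy) + (D2 - D) * min(dx, dy)
            return h * 1.001  # slight scale-up breaks ties between equal paths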

  • Detecting tile with height in isometric game

    - by Carlos Navarro
    I'm trying to create an isometric tile-based game (for iPhone) and I'm having trouble with height in tiles. What I currently do (without heights) is apply some mathematical transformations to my 2D matrix (which represents the tiles) so that I know where on the screen (x, y) I should place each isometric tile. Then, when the user clicks somewhere on the screen, I take those values and pass them through a function (a kind of f^-1) to get which tile they belong to. This works perfectly. My problem is: imagine that I want some tiles to have a different height from others. Drawing the tile itself is pretty simple, since the z-coordinate has no transformation in the isometric approach used in games (z' = z). BUT what if I want to calculate the tile coordinate (defined by X-tile and Y-tile) from the touch coordinates (x, y)? Any guess?
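    One common approach (a sketch under assumptions, not the poster's code): since a tile of height h is drawn shifted up by h pixels, picking can test every tile, undo that tile's lift, and ask whether the touch lands inside its diamond, keeping the front-most hit. This assumes a standard 2:1 diamond projection, screen y growing downward, and a heights[y][x] array of per-tile heights in pixels:

        def pick_tile(screen_x, screen_y, heights, tile_w=64, tile_h=32):
            hit = None
            for ty, row in enumerate(heights):
                for tx, h in enumerate(row):
                    lifted_y = screen_y + h            # undo this tile's height offset
                    # inverse of the flat isometric projection
                    fx = (screen_x / (tile_w / 2.0) + lifted_y / (tile_h / 2.0)) / 2.0
                    fy = (lifted_y / (tile_h / 2.0) - screen_x / (tile_w / 2.0)) / 2.0
                    if fx < 0 or fy < 0:
                        continue
                    if int(fx) == tx and int(fy) == ty:
                        # keep the front-most hit (largest tx + ty is drawn last)
                        if hit is None or tx + ty > sum(hit):
                            hit = (tx, ty)
            return hit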

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago, but then lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using:

        float iReticleSlope = 95/3000; // inverse reticle slope
        float baseReticle = 1;         // radius of the reticle at z = 0
        float maxRange = 3000;         // max range to target
        Quaternion orientation;        // the camera's orientation
        Vector3d position;             // the camera's position

    Then I loop through each object in the world:

        Vector3d transformed; // object position after transformations
        float d, r;           // holder variables
        for (i = 0; i < objects.length; i++) {
            transformed = position - objects[i].position; // transform the position relative to camera
            orientation.multiply(transformed);            // orient the object relative to the camera
            if (transformed.z < 0) {
                d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]);
                r = -transformed[2] * iReticleSlope + objects[i].radius;
                if (d < r && -transformed[2] - objects[i].radius <= maxRange) {
                    // the object is under the reticle
                } else {
                    // the object is not under the reticle
                }
            } else {
                // the object is not under the reticle
            }
        }

    Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

  • Implement Fast Inverse Square Root in Javascript?

    - by BBz
    The Fast Inverse Square Root from Quake III seems to use a floating-point trick. As I understand it, floating-point representation can have some different implementations. So is it possible to implement the Fast Inverse Square Root in JavaScript? Would it return the same result?

        float Q_rsqrt(float number) {
            long i;
            float x2, y;
            const float threehalfs = 1.5F;

            x2 = number * 0.5F;
            y  = number;
            i  = * ( long * ) &y;
            i  = 0x5f3759df - ( i >> 1 );
            y  = * ( float * ) &i;
            y  = y * ( threehalfs - ( x2 * y * y ) );
            return y;
        }
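    For what it's worth, the trick depends on reinterpreting the bits of a 32-bit float as a 32-bit integer, which JavaScript can do with a Float32Array and an Int32Array sharing one ArrayBuffer. The same idea in another language, purely as an illustration (Python's struct does the bit reinterpretation here):

        import struct

        def q_rsqrt(number):
            x2 = number * 0.5
            # reinterpret the float's bits as a signed 32-bit integer
            i = struct.unpack('<i', struct.pack('<f', number))[0]
            i = 0x5f3759df - (i >> 1)              # the magic constant
            y = struct.unpack('<f', struct.pack('<i', i))[0]
            return y * (1.5 - x2 * y * y)          # one Newton-Raphson step

    The bit trick itself is exact, but the follow-up arithmetic runs in double precision here, so the result can differ slightly from the all-float C version.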

  • Cocos2d rotating sprite while moving with CCBezierBy

    - by marcg11
    I've done my moving actions, which consist of sequences of CCBezierBy. However, I would like the sprite to rotate to follow the direction of the movement (like an airplane). How should I do this with cocos2d? I've done the following to test this out:

        CCSprite *green = [CCSprite spriteWithFile:@"enemy_green.png"];
        [green setPosition:ccp(50, 160)];
        [self addChild:green];

        ccBezierConfig bezier;
        bezier.controlPoint_1 = ccp(100, 200);
        bezier.controlPoint_2 = ccp(400, 200);
        bezier.endPosition = ccp(300, 160);
        [green runAction:[CCAutoBezier actionWithDuration:4.0 bezier:bezier]];

    In my subclass:

        @interface CCAutoBezier : CCBezierBy
        @end

        @implementation CCAutoBezier

        - (id)init
        {
            self = [super init];
            if (self) {
                // Initialization code here.
            }
            return self;
        }

        - (void)update:(ccTime)t
        {
            CGPoint oldpos = [self.target position];
            [super update:t];
            CGPoint newpos = [self.target position];
            float angle = atan2(newpos.y - oldpos.y, newpos.x - oldpos.x);
            [self.target setRotation:angle];
        }

        @end

    However, it is rotating but not following the path...
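    One thing worth checking (a conceptual sketch, not cocos2d code): atan2 returns radians measured counter-clockwise, while cocos2d's setRotation expects degrees and treats positive values as clockwise, so the angle usually needs converting before it is assigned:

        import math

        def heading_to_sprite_rotation(old_pos, new_pos):
            # angle of travel in radians, counter-clockwise from +x
            angle = math.atan2(new_pos[1] - old_pos[1], new_pos[0] - old_pos[0])
            # degrees, sign flipped for a clockwise-positive rotation property
            return -math.degrees(angle)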

  • Random Position between ranges.

    - by blakey87
    Does anyone have a good algorithm for generating a random y position for spawning a block, which takes into account a minimum and maximum height, so that the player can jump onto the block? Blocks will continually be spawned, so the player must always be able to jump onto the next block, bearing in mind that the minimum position would be the ground and the maximum would be limited by the player's jump height and the ceiling.
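    A minimal sketch (assuming y increases upward; the names are hypothetical): clamp each new block's random range so it stays reachable from the previous block and inside the level bounds:

        import random

        def next_block_y(prev_y, jump_height, ground_y, ceiling_y, clearance=1.0):
            # no higher than the previous block plus the jump height,
            # and leave some clearance below the ceiling for the player
            highest = min(prev_y + jump_height, ceiling_y - clearance)
            lowest = ground_y
            return random.uniform(lowest, highest)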

  • Handling player/background movements in 2D games

    - by lukeluke
    Suppose you have your animated character controlled by the player and a 2D world (like the old 2D side-scrolling games). When the user presses right on the keyboard, the background is moved to the right. If the path is always horizontal, this is simple to do (incrementing/decrementing the x-coordinate). But suppose that the path is instead a polygonal chain. My questions are: How do you move the background? How do you move the background if the game objects are managed with a physics engine like Box2D?
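    The usual pattern (a framework-agnostic sketch): don't move the background at all -- move the player (or its physics body) through world space along whatever the chain dictates, and render everything relative to a camera that tracks the player. With Box2D, the body's position is simply the value the camera follows:

        def world_to_screen(wx, wy, cam_x, cam_y):
            # everything, background included, is drawn with this offset
            return wx - cam_x, wy - cam_y

        def camera_following(player_x, player_y, screen_w, screen_h):
            # keep the player roughly centred; scrolling falls out for free,
            # whatever shape the path underneath happens to be
            return player_x - screen_w / 2.0, player_y - screen_h / 2.0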

  • The most efficient way to draw lines all day long with OpenGL

    - by nkint
    I'd like to put a computer screen running an OpenGL program in a room. It has to run all day long (not at night). I'd like to draw lines that slowly fade into the background. The setting is simple: a uniform background color (say, black) and colored lines (say, white) that slowly fade out. By slowly I mean hours: say the first line I draw has alpha 255 (fully visible), after one hour it is 240, after 10 hours it is 105. One line could have 250 points and there will be around 300 lines in one day. For now I have a prototype using a very rudimentary method like:

        glBegin( GL_LINE_STRIP );
        iterator = point_list.begin();
        for (++iterator, end = point_list.end(); iterator != end; ++iterator) {
            const Vec3D &v = *iterator;
            glVertex2f(v.x(), v.y());
        }
        glEnd();

    Is there a more efficient method?

  • String Conversion to Char for Java Game

    - by Jen
    Is there someone who could help me achieve the following points on this problem? I can't seem to get it. I tried using toCharArray and Scanner to achieve this, but it doesn't work, nor do I know how to make these things possible for my word game. :(
    · Get a popular name of a person, place, verse, saying or event from the user. This may have a single word or multiple words in it.
    · Create a copy of this string to an array where each letter is replaced with a hyphen (-) and each space is replaced with an underscore (_). Symbols and numbers remain shown.
    · The program then asks for a letter from the user. If the letter is in the inputted string, then it should be shown in the array at the same position it appears in the string. That is, the letter replaces the hyphen (-) at the correct position of the array.
    · The program again prompts for a letter from the user and replaces the hyphen (-) in the array if the letter exists in the inputted string. This is done repeatedly until each hyphen (-) is replaced with the correct letter.
    · If the user inputs an invalid letter, that is, a letter that does not exist in the inputted string, then the program should inform the user. If this happens 3 times while there is still at least one hyphen in the array, then the program should inform the user that he lost the game, showing him the whole correct string.
    · If the user completes the game, meaning all hyphens have been replaced with the correct letters, then the program should congratulate the user for a job well done.
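    A sketch of the game logic only (Python here; the same structure maps to Java with toCharArray() and a Scanner loop). Letters become '-', spaces '_', other symbols stay visible, and three misses end the game:

        def play(phrase, max_misses=3):
            masked = ['-' if c.isalpha() else '_' if c == ' ' else c for c in phrase]
            misses = 0
            while '-' in masked and misses < max_misses:
                print(''.join(masked))
                guess = input("Guess a letter: ").strip().lower()
                if guess and guess in phrase.lower():
                    for i, c in enumerate(phrase):
                        if c.lower() == guess:
                            masked[i] = c                  # reveal every occurrence
                else:
                    misses += 1
                    print("Not in the phrase (miss %d of %d)" % (misses, max_misses))
            if '-' in masked:
                print("You lost. The answer was:", phrase)
            else:
                print(''.join(masked))
                print("Congratulations, well done!")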

  • Selection of a mesh with arbitrary region

    - by Tigran
    Consider this example: I have a mesh (or meshes) on the OpenGL screen and would like to select a part of it (say, for delete purposes). There is a clear way to do the selection via ray tracing, or via the selection mechanism provided by OpenGL itself. But, for my users, considering that meshes can get wired surfaces, I need to implement selection via an arbitrary closed region, so all triangles that appear inside that region have to be selected. To be more clear, here is a screenshot: I want all triangles inside the black polygon to be selected, identified, whatever, in some way. How can I achieve that?
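    One common approach (a sketch with hypothetical inputs, not tied to any particular OpenGL version): project each triangle's vertices to screen space with the same modelview/projection transform used for drawing, then keep every triangle whose projected vertices all fall inside the 2D lasso polygon, using a standard even-odd point-in-polygon test:

        def point_in_polygon(px, py, polygon):
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > py) != (y2 > py):                 # edge crosses the scanline
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < x_cross:
                        inside = not inside
            return inside

        def select_triangles(projected_tris, lasso):
            # projected_tris: list of triangles, each a list of three (x, y) points
            return [i for i, tri in enumerate(projected_tris)
                    if all(point_in_polygon(x, y, lasso) for x, y in tri)]

    Whether a triangle needs all vertices inside or just its centroid is a design choice; occluded triangles can be filtered afterwards with a depth test if only visible geometry should be picked.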

  • How much to pay for artwork in an indie game?

    - by f20k
    I am an indie developer and I need some detailed artwork. How much is reasonable to pay an artist for, say, 20 character designs? I know it depends on the artist's skills, etc., but I am wondering what to expect so that I can budget it. Edit: Let's say cartoon-ish art (Example - not necessarily that level of detail, but that kind of cartoony art style). No 3D modelling - the art will be used as still images in game and for promotional purposes. I'd provide a base sprite design for them to expand on and detail. Also, some numbers would be nice - I like numbers. Even a range is helpful, like: expect to spend $x2 ~ $x1 for top-notch and $y2 ~ $y1 for decent quality. I understand I can ask at some indie-help site, but if an artist says something like $1000 for 20 designs, I wouldn't have any idea whether it's reasonable, a good deal, a bad idea, etc.

  • Rotation matrix for a 3D vector

    - by Shashwat
    I have a direction vector to which I have to apply some rotation to align it with the positive z-axis. To use Matrix.CreateRotationX(angle) from XNA, I need the angle, for which I'd have to compute an inverse cosine or tangent. I think this is a complex task to do. Also, those values are eventually converted back to sin(angle) and cos(angle) inside the matrix. Is there any built-in way to create a rotation matrix from a 3D vector? I can write the function myself, but I'm asking whether one already exists.
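    One standard way to avoid extracting explicit angles (a sketch in Python/NumPy rather than XNA, for illustration): build the rotation from an axis-angle pair, with axis = d x z and the sine/cosine taken straight from the cross and dot products, via Rodrigues' rotation formula. In XNA the same axis and angle could be handed to Matrix.CreateFromAxisAngle:

        import numpy as np

        def align_to_z(d):
            d = np.asarray(d, dtype=float)
            d = d / np.linalg.norm(d)
            z = np.array([0.0, 0.0, 1.0])
            axis = np.cross(d, z)
            s = np.linalg.norm(axis)              # sin of the angle
            c = float(np.dot(d, z))               # cos of the angle
            if s < 1e-8:                          # already aligned, or exactly opposite
                return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
            k = axis / s
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])      # cross-product matrix of the axis
            return np.eye(3) + s * K + (1.0 - c) * (K @ K)   # Rodrigues' formula

    The returned matrix R satisfies R @ d = (0, 0, 1) up to rounding; the antipodal case (d pointing along -z) is handled with an arbitrary 180-degree rotation.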

  • PhysX Capsule Character Controller floating above ground

    - by Jannie
    I am using PhysX Version 3.0.2 in the simulation package I'm working on, and I've encountered some bizarre behavior with the capsule character controller. When I set the controller's height and radius to the appropriate values (r = 0.25, h = 1.86) it behaves correctly (moving along the ground, colliding with other objects, and so on), except that the capsule itself is floating above the ground. The actor will then bump his head when trying to get through a door, since the capsule is the correct height but also floating above the ground. This image should illustrate what I'm going on about: One can clearly see that the rest of the scene has their collision bodies wrapped correctly, it's just the capsule that's going wrong! The stop-gap I've implemented is creating a smaller capsule and giving it an offset, but I need to implement ray-picking for the controller next, so the capsule has to surround the character model properly. Here's my character creation code (with height = 1.86f and radius = 0.25f):

        NxController* D3DPhysXManager::CreateCharacterController( std::string l_stdsControllerName,
                                                                   float l_fHeight,
                                                                   float l_fRadius,
                                                                   D3DXVECTOR3 l_v3Position )
        {
            NxCapsuleControllerDesc l_CapsuleControllerDescription;
            l_CapsuleControllerDescription.height = l_fHeight;
            l_CapsuleControllerDescription.radius = l_fRadius;
            l_CapsuleControllerDescription.position.set( l_v3Position.x, l_v3Position.y, l_v3Position.z );
            l_CapsuleControllerDescription.callback = &this->m_ControllerHitReport;

            NxController* l_pController = this->m_pControllerManager->createController( this->m_pScene,
                                                                                         l_CapsuleControllerDescription );
            this->m_pControllerMap.insert( l_ControllerValuePair( l_stdsControllerName, l_pController ) );
            return l_pController;
        }

    Any help at all would be appreciated, I just can't figure this one out!
    P.S. I've found a couple of (rather old) threads describing the same issue, but it seems they couldn't find a solution either. Here are the links:
    http://forum-archive.developer.nvidia.com/index.php?showtopic=6409
    http://forum-archive.developer.nvidia.com/index.php?showtopic=3272
    http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=23003

  • Scrolling Box2D DebugDraw

    - by onedayitwillmake
    I'm developing a game using Box2D (the JavaScript implementation, Box2DWeb), and I would like to know how I can pan the debug draw. I know the usual answer is: don't use debug draw, it's just for debugging. I'm not using it for more than that; however, not all my objects are on the same screen, and I'd like to see where they are in the physics representation. How can I pan the debug drawing? As you can see, the debug draw output is shown at the top left, but it only shows a small part of the world. Here is an example of what I mean: http://onedayitwillmake.com/ChuClone/ The game is open source, if you'd like to poke through and note anything I'm doing that is obviously wrong: https://github.com/onedayitwillmake/ChuClone

  • How do you maintain content size vs. content quality in an application?

    - by PeterK
    I am developing my first Cocos2d iPhone/iPad game, which includes quite a few sprites; I need approximately 80 different ones. As this is for both normal and HD displays, I have a 2x version of each sprite. I am using TexturePacker to optimize things. I would like to ask if there are any rules of thumb, tricks, ideas, etc. to follow regarding content size and quality, and how you maintain high-quality HD graphics given their size versus the device memory sizes. Also, is it a good idea to keep only one copy of the sprites and scale it in code?

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
                            (CameraPlanes.y + CameraPlanes.x -
                             texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));
        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane, CameraPlanes.y the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D
    Update: Screenshot of world position: http://imgur.com/sSlHd
    So you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I write this into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x

    However, this will kill the hardware z-buffer optimizations.

  • How can I access bitmaps created in another activity?

    - by user22241
    I am currently loading my game bitmaps when the user presses 'start' in my animated splash screen activity (the first / launch activity), and the app then progresses from this activity to the main game activity. This is causing choppy animation in the splash screen while it loads/creates the bitmaps for the new activity. I've been told that I should load all my bitmaps in one go at the very beginning. However, I can't work out how to do this - could anyone please point me in the right direction? I have 2 activities, a splash screen and the main game. Each consists of a class that extends Activity and a class that extends SurfaceView (with an inner class for the rendering / logic updating). So, for example, at the moment I am creating my bitmaps in the constructor of my SurfaceView class like so:

        public class OptionsScreen extends SurfaceView implements SurfaceHolder.Callback {

            // Create variables here

            public OptionsScreen(Context context) {
                // Create bitmaps here
            }

            public void intialise() {
                // This method is called from onCreate() of the corresponding application context
                // Create scaled bitmaps here (from bitmaps previously created)
            }

  • Avatar creation / dressing feature

    - by milesmeow
    What is the effort required to use a game engine such as Unreal or Unity, etc. and create an avatar customization feature, complete with clothes? The user should be able to customize the body features, and the clothes then need to fit onto the customized body. What is needed? Can you create one set of 3D models for clothes and somehow programmatically have the clothes adapt to the body shape? I.e., the same shirt model should be able to fit on a skinny person vs. someone with a big beer belly. How difficult is this? What are the steps needed to implement this avatar creation/dressing feature? I'm basically talking about something like in Rockband 3.

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information:
    · Its 3D coordinates (x, y, z).
    · A list of pointers to some of the other vertices with which it's connected by edges.
    Right now, I'm doing a perspective projection with the projection plane being XY and the eye placed somewhere at (0, 0, d), with d < 0. For z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following:
    · How do I determine which polygon a pixel belongs to, so I can use the formula of the plane containing that polygon to find the Z-coordinate?
    · Are my data structures correct? Do I need to store something else entirely in order for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
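    For the first question, the usual flow is turned around (a sketch with hypothetical helpers, assuming smaller z is closer to the eye): rather than asking which polygon a pixel belongs to, iterate over the polygons, rasterize each one scan line by scan line, compute the depth at each covered pixel from that polygon's plane equation, and let a per-pixel depth comparison keep the nearest surface:

        def render(polygons, width, height):
            INF = float('inf')
            zbuf  = [[INF] * width for _ in range(height)]
            image = [[None] * width for _ in range(height)]
            for poly in polygons:
                a, b, c, d = poly.plane             # a*x + b*y + c*z + d = 0
                for x, y in poly.covered_pixels():  # from scan-line rasterization
                    if c == 0:
                        continue                    # plane is edge-on to the view
                    z = -(a * x + b * y + d) / c    # depth from the plane equation
                    if z < zbuf[y][x]:              # nearest surface wins the pixel
                        zbuf[y][x] = z
                        image[y][x] = poly.color
            return image

    Here poly.plane, poly.covered_pixels() and poly.color are assumed helpers; the existing vertex and edge lists are enough to derive them (the plane from any three vertices, the covered pixels from the projected outline).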

  • how to keep display tick rate steady when using continuous collision detection?

    - by nas Ns
    (I've just found out about this forum. I hope it is OK to repost my question here; I posted it at Stack Overflow, but it looks like I might get better help here.) Here is the question:
    I've implemented a basic particle motion simulation with continuous collision detection, but there is a small issue in the display. Assume the simple case of circles moving inside a square. All collisions are elastic, there is no friction, and no forces or gravity are involved, so all motion is at constant speed: when a particle is moving, it always moves at constant speed in between collisions.
    What I do now is this: let the simulation time step be 1 second (for example). This is the time step by which the simulation is advanced before displaying the new state (unless there is a collision sooner than this). At the start of each time step, the time of the next collision between any particles, or between a particle and a wall, is determined. Call this the TOC time; let's say TOC was 0.5 seconds in this case. Since TOC is smaller than the standard time step, the system is moved by TOC and the new system is displayed, so that the new display shows any collisions as just taking place (say two circles just touched each other, or a circle just touched a wall). Next, the collision(s) are resolved (i.e. speeds updated, directions changed, etc.). A new step is started, and the same thing happens.
    Now assume there is no collision detected within the next 1 second (those two circles above will not be in collision any more, even though they are still touching, since their speeds show they are now moving apart). Hence, the simulation time is advanced by the full one second, the standard time step, the particles are moved on the screen using 1 second of simulation time, and the new display is shown.
    You see what has just happened: one frame ran for 0.5 seconds, but the next frame runs for 1 second; maybe the 3rd frame is displayed after 2 seconds, maybe the 4th frame is displayed after 2.8 seconds (because TOC was 0.8 seconds then), and so on. What happens is that the motion of a particle on the screen appears to speed up or slow down, even though it is moving at constant speed and was not even involved in a collision. Looking at one particle on its own, I see it suddenly speeding up or slowing down because another particle hit a wall. This is because the display tick is not uniform: the frame rate keeps changing, giving the false illusion that a particle is moving at non-constant speed while in fact it is moving at constant speed. The motion on the screen is not smooth, since the screen is not updating at a constant rate.
    I am not able to figure out how to fix this. If I want to show 2 particles at the moment of the collision, I must draw the screen at different times. Drawing the screen always at the same tick interval results in seeing 2 particles before the collision, and then after the collision, and not just when they are colliding, which looked bad when I tried it. So, how do real games handle this issue? How do you display things in order to show collisions when they happen, yet keep the display tick constant? These 2 requirements seem to contradict each other.
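    The standard answer (a sketch with hypothetical sim/renderer/clock objects) is to decouple simulation time from display time: keep advancing the physics in fixed steps, splitting a step internally at each TOC, but render at a steady tick and interpolate each particle between its previous and current simulated states. A collision then shows up at the first steady frame after it happened, which at normal frame rates is visually indistinguishable from "at the moment of collision":

        def game_loop(sim, renderer, clock, dt=1.0 / 60.0):
            accumulator = 0.0
            previous, current = sim.state(), sim.state()
            while sim.running:
                accumulator += clock.seconds_since_last_call()
                while accumulator >= dt:
                    previous = current
                    current = sim.step(dt)      # advances by dt, resolving any TOCs inside
                    accumulator -= dt
                alpha = accumulator / dt        # 0..1: how far we are into the next step
                renderer.draw(previous.blend(current, alpha))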

  • Wired: "Japanese despise western games" - how true is it?

    - by user712092
    Wired has an article saying that western games are seen as "bad games" in Japan. Why is that? If that is true, then what do they see as bad? I think this article is biased towards western games: “The other day,” says Q Entertainment’s Mielke, “I was having lunch with a friend and I said, ‘Have you ever played StarCraft?’ And he said, ‘What’s StarCraft?’ Sometimes it’s just really shocking that their gaming vocabulary isn’t as extensive as it could be. I think Japanese game developers need to start playing other people’s games to open their minds, just like a writer might want to read classic literature to be inspired.” Rather than just trusting such articles, I'd rather ask here.

  • Has a multi player graphic adventure* ever been made?

    - by Petruza
    By graphic adventure, I mean point & click LucasArts-type games. Those games have a mostly linear structure by nature, and usually don't offer as many variations as other game types like action, RPG or strategy, which makes this genre difficult to implement a multi-player feature for. I'd like to know if there have been any attempts at doing such a thing, and whether it would be viable, as players going offline or leaving a game in the middle would significantly affect the other players' game.

  • SDL 2.0: is there a library to create 2D particle effects rapidly?

    - by mm24
    I would like to create a light/explosion particle effect using some built-in library. I am used to Cocos2D, where there are specific classes that you can simply initialize at a certain position to produce a certain particle effect. Is there a way to do this in SDL 2.0 with C++? I have found this tutorial, but it seems to go for a "build it yourself" solution, which is OK, but I do not want to re-invent the wheel if someone else has already built it.
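    If nothing off the shelf fits, the core of a simple burst emitter is small (a framework-agnostic sketch; drawing each particle as a fading point or textured quad is left to SDL):

        import random

        class Particle:
            def __init__(self, x, y):
                self.x, self.y = x, y
                self.vx = random.uniform(-80.0, 80.0)     # pixels per second
                self.vy = random.uniform(-80.0, 80.0)
                self.max_life = random.uniform(0.5, 1.5)  # seconds
                self.life = self.max_life

        def emit(x, y, count=100):
            return [Particle(x, y) for _ in range(count)]

        def update(particles, dt):
            for p in particles:
                p.x += p.vx * dt
                p.y += p.vy * dt
                p.life -= dt
            particles[:] = [p for p in particles if p.life > 0]

        def alpha(p):
            return int(255 * p.life / p.max_life)         # fade out over the lifetime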

  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed using various textures. To provide some background, I'm writing it in C++ and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 textures of 500 x 500 px, which are all loaded and freed individually. I have now combined them all into a single sprite/tile sheet, making it 3000 x 2000 pixels, since the number of textures/tiles I'm using is increasing. Now I'm wondering whether it's more efficient to load them individually or to write extra code to extract a certain tile from the sheet. Is it better to load the sheet once, extract the 23 tiles and store them, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
