Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • Assigning valid moves in a board game

    - by Kunal4536
    I am making a board game in Unity 4.3 2D, similar to checkers. I have added an empty object at every point where a player can move, and added a box collider to each empty object. I attached a click-to-move script to each player token. Now I want to assign valid moves. For example, as shown in the picture, players can only move on the vertices of each square, and only to an adjacent vertex. Thus a token can move from the red spot to the yellow spot but cannot move to the blue spot. There is another condition: if the token of another player is on the yellow spot, then the player cannot move to that spot; instead it will have to go from the red spot to the green spot. How can I find the valid moves of the player by scripting? I also have a problem with click to move: when I click, all the objects move to that position, but I only want to move a single token. What can I add to the script to select a specific object and then move only that object? Here is my script for click to move:

        var obj : Transform;
        private var hitPoint : Vector3;
        private var move : boolean = false;
        private var startTime : float;
        var speed = 1;

        function Update () {
            if (Input.GetKeyDown(KeyCode.Mouse0)) {
                var hit : RaycastHit; // no point storing this really
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, hit, 10000)) {
                    hitPoint = hit.point;
                    move = true;
                    startTime = Time.time;
                }
            }
            if (move) {
                obj.position = Vector3.Lerp(obj.position, hitPoint, Time.deltaTime * speed);
                if (obj.position == hitPoint) {
                    move = false;
                }
            }
        }
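
    A language-agnostic way to think about the valid-move rule is to treat the board vertices as a grid graph and test the edge-adjacent neighbours for occupancy. A minimal C++ sketch of that idea (the Vertex struct, board size and occupancy array are assumptions, not the asker's actual data):

        #include <vector>

        struct Vertex { int x, y; };                  // a corner of a board square

        const int BOARD = 8;                          // assumed board size
        bool occupied[BOARD + 1][BOARD + 1] = {};     // true if any token stands on that vertex

        // Returns the vertices a token standing on 'from' may move to:
        // the four edge-adjacent vertices that are on the board and not occupied.
        std::vector<Vertex> validMoves(Vertex from) {
            const int dx[] = { 1, -1, 0, 0 };
            const int dy[] = { 0, 0, 1, -1 };
            std::vector<Vertex> moves;
            for (int i = 0; i < 4; ++i) {
                int nx = from.x + dx[i];
                int ny = from.y + dy[i];
                if (nx < 0 || ny < 0 || nx > BOARD || ny > BOARD) continue;  // off the board
                if (occupied[nx][ny]) continue;                              // blocked by a token
                moves.push_back({ nx, ny });
            }
            return moves;
        }

    For the click-to-move issue, the RaycastHit returned by Physics.Raycast already identifies which collider was hit, so the script can move only that token (or the token linked to the clicked square) instead of letting every instance of the script react to the click.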

    Read the article

  • Blending transition in cocos2d

    - by fiddler
    In my cocos2d-iphone game, I have two backgrounds (CCNodes), each containing a fairly complex hierarchy of sprites. I would like to make a smooth transition between them: initially only the first background is visible; at the end only the second one is visible. Is there a good way to set the opacity of a full hierarchy of sprites? I tried recursively setting the opacity of all the contained sprites. It kinda works, except that: I guess it's not very efficient, and I would like the opacity of overlapping sprites to be 'merged' (as if the background were one single big sprite).

    Read the article

  • Music for Kids Game!

    - by Dane
    I'm developing multimedia software for kindergarten kids. It introduces them to animals, the alphabet, simple math and colors, and it contains some simple games. Music is crucial for my project, and it is very important to choose the right sort of music for the different sections. Unfortunately I know nothing about music. Is there a music consulting firm which can help me choose melodies and rhythms for my project from the free music available on the internet? My budget is limited, but as this is mandatory and I have no knowledge or taste in music, I think I can afford to pay for this.

    Read the article

  • How to get a point to the left/right of a vector

    - by MulletDevil
    I have a position vector for a point in space and a quaternion for its rotation. What I'm trying to calculate is a point to the left and a point to the right of it. I have the position and rotation (quaternion) of the red dot; what I want is the position of the green dots. I have a float value for the distance I want these points to be. With only the position and rotation, is it possible to get a unit direction vector pointing left/right which I can multiply by my float value? Edit: I also know the original direction vector.
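
    One common approach, sketched below in C++: rotate the local right axis (1, 0, 0) by the quaternion to get a world-space right vector, then add and subtract it scaled by the distance. The Vec3/Quat structs are assumptions; most math libraries already provide the rotation step (for example XNA's Vector3.Transform with a Quaternion).

        struct Vec3 { float x, y, z; };
        struct Quat { float w, x, y, z; };            // assumed to be a unit quaternion

        Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
        Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 scale(Vec3 a, float s){ return { a.x*s, a.y*s, a.z*s }; }

        // Rotate v by unit quaternion q: v' = v + 2*qv x (qv x v + w*v)
        Vec3 rotate(Quat q, Vec3 v) {
            Vec3 qv = { q.x, q.y, q.z };
            Vec3 t  = cross(qv, add(cross(qv, v), scale(v, q.w)));
            return add(v, scale(t, 2.0f));
        }

        // Points at 'distance' to the right and left of 'position'.
        void sidePoints(Vec3 position, Quat rotation, float distance, Vec3& left, Vec3& right) {
            Vec3 localRight = { 1.0f, 0.0f, 0.0f };   // assumes +X is "right" in local space
            Vec3 worldRight = rotate(rotation, localRight);
            right = add(position, scale(worldRight,  distance));
            left  = add(position, scale(worldRight, -distance));
        }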

    Read the article

  • Programming bots in games

    - by Bane
    I'm interested in how bots are usually written. Here's my situation: I plan to make an online 2D mecha game in HTML5, and the server-side will be done with node. It is intended to be multiplayer, but I also want to make bots in case there aren't enough players. How does my game logic see them, as players or as bots? Is there a standard by which I should make them? Also, any general tips and hints will be OK.

    Read the article

  • Moving 2d camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The picture highlights the three main components: the circle (the main character), the green line (terrain) and the black background. The terrain is simply made from an array of points (approx. 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply movex = circle.position.x - terrain.position.x, plus a constant to fix the circle at some distance from the left of the screen.

    I am struggling to determine the best way to position the terrain in the y direction to keep the focus on the character. I want to move the terrain in the y direction smoothly and not fix it to the position of the circle, so the circle can still move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. Another approach might be to create a camera 'line' that is a smoothed version of the terrain line and make the camera follow that, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
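
    One approach that tends to stay smooth is to treat the averaged terrain height (or the character's y) only as a target, and move the camera's own y toward that target with framerate-independent exponential smoothing. A minimal C++ sketch of the idea (the sampling window and stiffness constant are assumptions to tune):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Camera2D { float y = 0.0f; };

        // Average a few terrain heights around the character to get a target height.
        float targetHeight(const std::vector<float>& terrainY, int characterIndex, int window = 3) {
            int lo = std::max(0, characterIndex - window);
            int hi = std::min((int)terrainY.size() - 1, characterIndex + window);
            float sum = 0.0f;
            for (int i = lo; i <= hi; ++i) sum += terrainY[i];
            return sum / float(hi - lo + 1);
        }

        // Ease the camera toward the target; 'stiffness' is roughly how fast it catches up per second.
        void updateCamera(Camera2D& cam, float targetY, float dt, float stiffness = 4.0f) {
            float t = 1.0f - std::exp(-stiffness * dt);   // framerate-independent blend factor
            cam.y += (targetY - cam.y) * t;
        }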

    Read the article

  • Index Check and Correct Character Display in a Console Hangman Game for Java

    - by Jen
    I have this problem wherein I cannot display the correct characters entered by the player. Here's what I have:

        String words, in;
        String replaced_words;
        Scanner s = new Scanner(System.in);

        System.out.println("Enter a line of words basing on an event, verse, place or a name of a person.");
        words = s.nextLine();
        System.out.println("The words you just placed are now accepted.");

        // using char array method, we tried to place the words into a characters array.
        char[] c = words.toCharArray();

        // we need to replace the
        replaced_words = words.replace(' ', '_').replaceAll("[^\\-]", "-");
        for (int i = 0; i < replaced_words.length(); i++) {
            System.out.print(replaced_words.charAt(i) + " ");
        }

        System.out.println("Now, please input a character, guessing the words you just placed.");
        in = s.nextLine();

    In that code, I want that when the user types a character, any correct character the user inputs is displayed, replacing the corresponding hyphen (more like the Hangman series of games). How can I achieve this?
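
    The usual Hangman structure is to keep the secret phrase and a separate mask of the same length, and on each guess reveal every position where the guess matches. A short, language-agnostic sketch of that loop in C++ (a Java version is a direct translation; the variable names are illustrative, not the asker's):

        #include <cstddef>
        #include <iostream>
        #include <string>

        int main() {
            std::string secret = "hello world";              // the phrase entered earlier
            std::string mask(secret.size(), '-');            // what the player sees
            for (std::size_t i = 0; i < secret.size(); ++i)
                if (secret[i] == ' ') mask[i] = ' ';         // keep word breaks visible

            char guess;
            while (mask != secret && std::cin >> guess) {
                for (std::size_t i = 0; i < secret.size(); ++i)
                    if (secret[i] == guess) mask[i] = guess; // reveal every matching position
                std::cout << mask << '\n';
            }
        }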

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem: the position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    OK, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object, warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
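
    One way to fake it under the stated 0.707 projection, sketched below: push the arrow's local forward and side axes through the aim rotation and the same screen projection the world uses, then read the sprite rotation from the projected forward axis and the two scale factors from the projected lengths. SpriteBatch's Vector2 scale can't skew, so this is only an approximation; the function names and the Y-up / XZ-ground convention are assumptions, not the asker's actual code.

        #include <cmath>

        struct Vec2 { float x, y; };

        // Project a 3D point/direction (x, y, z) to the screen the way the game described above does:
        // Y and Z are both scaled by 0.707 and summed.
        Vec2 project(float x, float y, float z) {
            const float k = 0.707f;
            return { x, k * (y + z) };
        }

        // aimAngle: rotation of the aimer around the Y (up) axis, in radians.
        // Outputs a SpriteBatch-style rotation plus a length/width scale for the arrow.
        void aimerScreenTransform(float aimAngle, float& spriteRotation, Vec2& spriteScale) {
            // Local axes of the arrow on the XZ ground plane, rotated by the aim angle.
            Vec2 forward = project(std::cos(aimAngle), 0.0f, std::sin(aimAngle));
            Vec2 side    = project(-std::sin(aimAngle), 0.0f, std::cos(aimAngle));

            spriteRotation = std::atan2(forward.y, forward.x);                         // where the arrow points on screen
            spriteScale.x  = std::sqrt(forward.x * forward.x + forward.y * forward.y); // along the arrow
            spriteScale.y  = std::sqrt(side.x * side.x + side.y * side.y);             // across the arrow
        }

    With this mapping the arrow comes out full length and 0.707 wide when facing sideways, and 0.707 long and full width when facing north or south, which matches the behaviour described in the question.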

    Read the article

  • Isometric smooth fog

    - by marcg11
    I'm working on a simple 2D game with Direct3D 9. It's an isometric game with diamond tiles and a staggered map. This is what I have: some kind of fog of war, accomplished with a fog matrix whose entries are true (clear terrain) or false (obscured terrain). But the result is very chunky: the fog moves as the player moves by whole tiles, not by pixels. Basically I check for every tile if there is fog, and if so I just change the color of that tile:

        if (scene->fog[i+mapx][j+mapy] == FOG_NONE)
        {
            tile_color = 0x666666FF;
        }

    I would also like the fog to be smoother. For that I followed this "tutorial", but I haven't managed to make it work: http://www.appsizematters.com/2010/07/how-to-implement-a-fog-of-war-part-2-smooth/
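
    A common way to get rid of the chunkiness, sketched below: store the fog per tile as a float instead of a bool, ease each value toward its target over time, and when drawing a tile blend its color between the dark and clear colors (optionally interpolating the surrounding values into the tile's vertex colors for per-pixel smoothness). The map size, fade speed and the reuse of the 0x666666FF constant are assumptions based on the snippet above.

        #include <cmath>

        const int W = 64, H = 64;          // assumed map size
        float fog[W][H]       = {};        // 0 = fully obscured, 1 = fully clear (was a bool matrix)
        float fogTarget[W][H] = {};        // set to 0 or 1 by the visibility pass

        // Ease every tile toward its target so fog edges fade instead of popping per tile.
        void updateFog(float dt, float fadeSpeed = 4.0f) {
            float t = 1.0f - std::exp(-fadeSpeed * dt);
            for (int i = 0; i < W; ++i)
                for (int j = 0; j < H; ++j)
                    fog[i][j] += (fogTarget[i][j] - fog[i][j]) * t;
        }

        // Blend two packed colors channel by channel (both must use the same packing).
        unsigned int lerpColor(unsigned int dark, unsigned int clear, float t) {
            unsigned int out = 0;
            for (int shift = 0; shift < 32; shift += 8) {
                float a = (float)((dark  >> shift) & 0xFF);
                float b = (float)((clear >> shift) & 0xFF);
                out |= ((unsigned int)(a + (b - a) * t) & 0xFF) << shift;
            }
            return out;
        }

        // Per-tile color; for per-pixel smoothness, put the interpolated fog of the four
        // surrounding tiles into the tile's vertex colors and let Direct3D blend across them.
        unsigned int tileColor(int i, int j) {
            return lerpColor(0x666666FF, 0xFFFFFFFF, fog[i][j]);
        }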

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a scaling matrix (S) and a rotation matrix from a quaternion (R) and concatenates them (S*R). Once I have SR, I insert the translation values into the bottom row, so I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box: the shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr, etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a scale matrix:

        [sx  0  0  0]
        [ 0 sy  0  0]
        [ 0  0 sz  0]
        [ 0  0  0  1]

    this is scaling along the world axes instead of the object's local direction, up and right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to handle constructing a transformation matrix that allows for non-uniform scaling and rotation? Thanks!

    Read the article

  • Splitting a texture atlas into separate images

    - by bigtunacan
    I'm doing a port of an existing game and the designer no longer has all of the original art; he only has the resulting texture atlases he used when developing for iPad. The tool I'm using won't support these files so I need to break them back out into separate PNG files. I'm hoping someone knows of a software tool that does this. PC software would be preferred in this case, but Mac would suffice.

    Read the article

  • How do you set up PhysFS for use in a game?

    - by ThePlan
    After my recent question on GD I was advised to use PhysFS to pack all my game data into one file. So I have, and the decision wasn't easy, because I tried every library suggested in the answers and none of them came with a single good tutorial; in fact PhysFS is the most poorly documented library I've ever seen. After attempting to set up PhysFS in my game I realized it's not as simple as adding the headers to the project; it appears to be something much more complicated. In fact, after my first attempt to install PhysFS the compiler ran out of memory to display errors, reaching the critical count of 50 errors. So basically what I'm asking here is: how can I set up PhysFS for my game? I'm using the Code::Blocks IDE on Windows XP SP3.
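
    For reference, the basic PhysFS flow is: build the library and link against it in Code::Blocks (adding its include path and the physfs library to the linker settings, not just copying headers), then initialize it, mount your archive, and read files through the PHYSFS_* calls. A minimal C++ sketch using the standard PhysFS 2.x API (the archive and file names are placeholders):

        #include <physfs.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            (void)argc;
            if (!PHYSFS_init(argv[0])) {                       // start up PhysFS
                std::printf("init failed: %s\n", PHYSFS_getLastError());
                return 1;
            }
            // Mount the archive (e.g. a .zip) at the root of the virtual file system.
            if (!PHYSFS_mount("data.zip", "/", 1)) {
                std::printf("mount failed: %s\n", PHYSFS_getLastError());
                return 1;
            }

            PHYSFS_File* file = PHYSFS_openRead("textures/player.png");   // path inside the archive
            if (file) {
                PHYSFS_sint64 size = PHYSFS_fileLength(file);
                std::vector<char> buffer((size_t)size);
                PHYSFS_read(file, buffer.data(), 1, (PHYSFS_uint32)size); // read the whole file
                PHYSFS_close(file);
                // hand 'buffer' to your image/sound loader here
            }

            PHYSFS_deinit();
            return 0;
        }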

    Read the article

  • Undeclared Scope in Rock Paper Scissors Simple Game

    - by Rianelle
    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include <ctime>

    using namespace std;

    bool win;
    int winnings;
    int draws;
    int loses;
    string comChoice;
    string playerChoice;

    void winGame () {
        cout << "You won! Play again?" << endl;
        cout << "Type y/n" << endl;
        char x;
        cin >> x;
        if (x == 'y') {
            beginGame();
        }
        else if ('n') {
            cout << "Game Stopped." << endl;
            cout << "Number of Draws: " << draws << endl;
            cout << "Number of Loses: " << loses << endl;
            cout << "Number of Wins: " << winnings << endl;
            win = true;
        }
    }

    void drawGame () {
        ++draws;
        cout << "Draw! Try again" << endl;
        return;
    }

    void lose () {
        cout << "You lose! Try again?" << endl;
        cout << "Type y/n" << endl;
        char feedback;
        cin >> feedback;
        if (feedback == 'y') {
            beginGame();
        }
        else if ('n') {
            cout << "Game Stopped." << endl;
            cout << "Number of Draws: " << draws << endl;
            cout << "Number of Loses: " << loses << endl;
            cout << "Number of Wins: " << winnings << endl;
        }
    }

    void beginGame() {
        cout << "Welcome to the Rock, Paper and Scissors Game!" << endl;
        cout << "Let's begin. Type <rock, paper, scissors> for your choice!" << endl;
        cin >> playerChoice;
        srand(time(0));
        int randomizer = 1 + (rand() % 3);
        if (randomizer == 1) comChoice = "rock";
        if (randomizer == 2) comChoice = "paper";
        if (randomizer == 3) comChoice = "scissors";
        do {
            if (playerChoice == comChoice) {
                drawGame();
            }
            if (playerChoice == "rock" && comChoice == "paper") ++loses; lose();
            if (playerChoice == "rock" && comChoice == "scissors") ++winnings; winGame();
            if (playerChoice == "paper" && comChoice == "rock") ++winnings; winGame();
            if (playerChoice == "paper" && comChoice == "scissors") ++loses; lose();
            if (playerChoice == "scissors" && comChoice == "rock") ++loses; lose();
            if (playerChoice == "scissors" && comChoice == "paper") ++winnings; winGame();
        } while (win != true);
    }

    int main () {
        beginGame();
        return 0;
    }

    Read the article

  • Distributed Rendering in the UDK and Unity

    - by N0xus
    At the moment I'm looking at getting a game engine to run in a CAVE environment. So far, during my research I've seen a lot of people get both Unity and the Unreal Engine up and running in a CAVE (someone did get CryEngine to work in one, but there is little research data about it). As of yet, I have not cemented my final choice of engine for the next stage of my project. I have experience in both, so the learning curve will be gentle either way, and both engines offer stereoscopic rendering, either already inbuilt with ReadD (Unreal) or by doing it yourself (Unity). Both can also make use of other input devices, such as the Kinect. So again, both engines are still on the table. For the last bit of my preliminary research, I was advised to see whether either, or both, engines can do distributed rendering. I was advised this because the final game we make could go into a variety of differently sized CAVEs. The one I have access to is roughly 2.4m x 3m cubed, and I have been duly informed that this one is a "baby" compared to others. So, finally, on to my question: can either the Unreal Engine or the Unity engine make it possible for developers to do distributed rendering, either through built-in facilities or by creating my own plugin/script?

    Read the article

  • How to avoid game objects accidentally deleting themselves in C++

    - by Tom Dalling
    Let's say my game has a monster that can kamikaze explode on the player. Let's pick a name for this monster at random: a Creeper. So, the Creeper class has a method that looks something like this:

        void Creeper::kamikaze() {
            EventSystem::postEvent(ENTITY_DEATH, this);
            Explosion* e = new Explosion;
            e->setLocation(this->location());
            this->world->addEntity(e);
        }

    The events are not queued, they get dispatched immediately. This causes the Creeper object to get deleted somewhere inside the call to postEvent. Something like this:

        void World::handleEvent(int type, void* context) {
            if (type == ENTITY_DEATH) {
                Entity* ent = dynamic_cast<Entity*>(context);
                removeEntity(ent);
                delete ent;
            }
        }

    Because the Creeper object gets deleted while the kamikaze method is still running, it will crash when it tries to access this->location(). One solution is to queue the events into a buffer and dispatch them later. Is that the common solution in C++ games? It feels like a bit of a hack, but that might just be because of my experience with other languages with different memory management practices. In C++, is there a better general solution to this problem where an object accidentally deletes itself from inside one of its methods?
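
    Deferring destruction is indeed the usual pattern: either queue the events, or mark entities dead and sweep them at a known point in the frame. A small C++ sketch of the event-queue variant (the types and names are illustrative, not the asker's actual engine):

        #include <vector>

        enum EventType { ENTITY_DEATH };

        struct QueuedEvent {
            EventType type;
            void*     context;
        };

        class EventSystem {
        public:
            // Called from gameplay code (e.g. Creeper::kamikaze); just records the event.
            static void postEvent(EventType type, void* context) {
                pending.push_back({ type, context });
            }

            // Called once per frame, after all entities have finished updating,
            // so no entity is deleted while one of its own methods is still on the stack.
            template <typename Handler>
            static void dispatch(Handler&& handle) {
                std::vector<QueuedEvent> events;
                events.swap(pending);              // handlers may safely post new events
                for (const QueuedEvent& e : events)
                    handle(e.type, e.context);
            }

        private:
            static std::vector<QueuedEvent> pending;
        };

        std::vector<QueuedEvent> EventSystem::pending;

    The other common variant is a "dead" flag on the entity plus a cleanup pass at the end of the world update, which avoids the queue entirely.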

    Read the article

  • Tetris Movement - Implementation

    - by James Brauman
    Hi gamedev, I'm developing a Tetris clone and working on the input at the moment. When I was prototyping, movement was triggered by releasing a directional key. However, in most Tetris games I've played the movement is a bit more complex. When a directional key is pressed, the shape moves one space in that direction. After a short interval, if the key is still held down, the shape starts moving continuously in that direction until the key is released. In the case of the down key, there is no pause between the initial movement and the subsequent continuous movement. I've come up with a solution, and it works well, but it's totally over-engineered. Hey, at least I can recognize when things are getting silly, right? :)

        public class TetrisMover
        {
            List<Keys> registeredKeys;
            Dictionary<Keys, TimeSpan> continuousPressedTime;
            Dictionary<Keys, TimeSpan> totalPressedTime;
            Dictionary<Keys, TimeSpan> initialIntervals;
            Dictionary<Keys, TimeSpan> continousIntervals;
            Dictionary<Keys, Action> keyActions;
            Dictionary<Keys, bool> initialActionDone;
            KeyboardState currentKeyboardState;

            public TetrisMover()
            {
                *snip*
            }

            public void Update(GameTime gameTime)
            {
                currentKeyboardState = Keyboard.GetState();
                foreach (Keys currentKey in registeredKeys)
                {
                    if (currentKeyboardState.IsKeyUp(currentKey))
                    {
                        continuousPressedTime[currentKey] = TimeSpan.Zero;
                        totalPressedTime[currentKey] = TimeSpan.Zero;
                        initialActionDone[currentKey] = false;
                    }
                    else
                    {
                        if (initialActionDone[currentKey] == false)
                        {
                            keyActions[currentKey]();
                            initialActionDone[currentKey] = true;
                        }

                        totalPressedTime[currentKey] += gameTime.ElapsedGameTime;
                        if (totalPressedTime[currentKey] >= initialIntervals[currentKey])
                        {
                            continuousPressedTime[currentKey] += gameTime.ElapsedGameTime;
                            if (continuousPressedTime[currentKey] >= continousIntervals[currentKey])
                            {
                                keyActions[currentKey]();
                                continuousPressedTime[currentKey] = TimeSpan.Zero;
                            }
                        }
                    }
                }
            }

            public void RegisterKey(Keys key, TimeSpan initialInterval, TimeSpan continuousInterval, Action keyAction)
            {
                if (registeredKeys.Contains(key))
                    throw new InvalidOperationException(
                        string.Format("The key %s is already registered.", key));

                registeredKeys.Add(key);
                continuousPressedTime.Add(key, TimeSpan.Zero);
                totalPressedTime.Add(key, TimeSpan.Zero);
                initialIntervals.Add(key, initialInterval);
                continousIntervals.Add(key, continuousInterval);
                keyActions.Add(key, keyAction);
                initialActionDone.Add(key, false);
            }

            public void UnregisterKey(Keys key)
            {
                *snip*
            }
        }

    I'm updating it every frame, and this is how I'm registering keys for movement:

        tetrisMover.RegisterKey(
            Keys.Left, keyHoldStartSpecialInterval, keyHoldMovementInterval,
            () => { Move(Direction.Left); });
        tetrisMover.RegisterKey(
            Keys.Right, keyHoldStartSpecialInterval, keyHoldMovementInterval,
            () => { Move(Direction.Right); });
        tetrisMover.RegisterKey(
            Keys.Down, TimeSpan.Zero, keyHoldMovementInterval,
            () => { PerformGravity(); });

    Issues that this doesn't address: if both left and right are held down, the shape moves back and forth really quickly; and if a directional key is held down when the turn finishes and the shape is replaced by a new one, the new one moves quickly in that direction instead of getting the little pause it is supposed to have. I could fix those issues, but I think it would make the solution even worse. How would you implement this?
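
    For comparison, the same behaviour usually fits in a small per-key auto-repeat structure: one timer per key, an initial delay, a repeat interval, and a zero initial delay for soft drop. A compact, engine-agnostic sketch in C++ (the key/action plumbing is illustrative, not the asker's XNA types):

        #include <functional>
        #include <map>
        #include <utility>

        struct RepeatKey {
            double initialDelay;       // seconds before auto-repeat starts (0 for soft drop)
            double repeatInterval;     // seconds between repeats while held
            std::function<void()> action;
            double heldFor  = 0.0;     // time the key has been held so far
            double nextFire = 0.0;     // held time at which the action fires next
            bool   wasDown  = false;
        };

        class InputRepeater {
        public:
            void registerKey(int key, double initialDelay, double repeatInterval,
                             std::function<void()> action) {
                keys[key] = { initialDelay, repeatInterval, std::move(action) };
            }

            // isDown(key) is whatever your platform uses to poll the keyboard.
            void update(double dt, const std::function<bool(int)>& isDown) {
                for (auto& [key, k] : keys) {
                    if (!isDown(key)) { k.wasDown = false; continue; }
                    if (!k.wasDown) {                       // fresh press: act immediately
                        k.wasDown  = true;
                        k.heldFor  = 0.0;
                        k.nextFire = k.initialDelay;
                        k.action();
                        continue;
                    }
                    k.heldFor += dt;
                    while (k.heldFor >= k.nextFire) {       // auto-repeat while held
                        k.action();
                        k.nextFire += k.repeatInterval;
                    }
                }
            }

        private:
            std::map<int, RepeatKey> keys;
        };

    The left-vs-right conflict is usually handled one level up by letting the most recently pressed horizontal key win, and resetting the per-key repeat state when a new piece spawns gives the new piece its initial pause again.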

    Read the article

  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kind of stuck. What I want to do: an acceptable, large (8000 x 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great, it just shouldn't look like it was made in MS Paint with the line and paint bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?
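
    One way to avoid hand-painting 8000 pixels of terrain is to generate the hill silhouette procedurally (a few layered sine waves is enough for a test level) and then draw each column as mud with a strip of grass at the top, either at runtime or baked once into a texture. A small C++ sketch of the height generation (the amplitudes and frequencies are arbitrary numbers to tweak):

        #include <cmath>
        #include <vector>

        // Height of the ground (in pixels from the bottom of the level) at pixel column x.
        float hillHeight(float x) {
            return 300.0f
                 + 120.0f * std::sin(x * 0.0015f)        // broad rolling hills
                 +  40.0f * std::sin(x * 0.006f + 1.7f)  // medium bumps
                 +  10.0f * std::sin(x * 0.02f  + 0.3f); // small surface detail
        }

        // One height value per pixel column of the 8000-pixel-wide level.
        std::vector<float> buildHeightmap(int widthInPixels = 8000) {
            std::vector<float> heights(widthInPixels);
            for (int x = 0; x < widthInPixels; ++x)
                heights[x] = hillHeight((float)x);
            return heights;
        }
        // When drawing, fill each column up to heights[x] with the mud color/texture
        // and the top ~20 pixels with the grass color/texture.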

    Read the article

  • Good GUI for OpenGL

    - by Cristina
    I am starting to learn OpenGL with FreeGLUT, using the SuperBible and the knowledge I have from elementary graphics to brush up on my skills. To get more from this experience I want to integrate a GUI to replace the one FreeGLUT uses. Now my question is this: is this possible, and which library should I use? Some characteristics I'm looking for in the library: open source; multi-platform (Linux and Windows); C/C++. If you have any other recommendations, please feel free to post them along with your answers.

    Read the article

  • What purpose do armor points serve?

    - by Bane
    I have seen a mechanic which I call "armor points" in many games: Quake, Counter-Strike, etc. Generally, while the player has these armor points, he takes less damage. However, they act in a similar fashion to health points: you lose them by taking damage. Why would you design such a feature? Is this just health 2.0, or am I missing something? To me, armor only makes sense in, for example, RPGs, where it is a constant that determines your resistance. But I don't see why it would need to be reducible during combat.

    Read the article

  • What does a game designer do? What skills do they need?

    - by xenoterracide
    I know someone who is thinking about getting into game design, and I wondered: what does the job of game designer entail? What tools do you have to learn to use? What unique skills do you need? What exactly would you do from day to day? I may be wording this a bit wrong because I'm not sure if the college program is "become a game designer" or "learn game design", but I think the same questions apply either way.

    Read the article

  • Resolving a collision between point and moving line

    - by Conundrumer
    I am designing a 2D physics engine that uses Verlet integration for moving points (the velocities mentioned below can be derived), constraints to represent moving line segments, and continuous collision detection to resolve collisions between moving points and static lines, and between moving/static points and moving lines. I already know how to calculate the time of impact for both types of collision event, and how to resolve moving-point vs. static-line collisions. However, I can't figure out how to resolve moving/static-point vs. moving-line collisions. Here are the initial conditions in a point vs. moving-line collision event: we have a line segment joined by two points, A and B. At this instant, point P is touching/colliding with line AB. These points have unit mass and some might have an initial velocity, unless point P is static. The line is massless and has no explicit rotational component, since points A and B can move freely, extending or contracting the line as a result (which will be fixed later by the constraint solver). The collision is inelastic. What are the final velocities of the points after the collision?

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I cannot get a basic idea of how to get started. I have various options, such as learning OpenGL ES, UDK or Unity3d, but I want to create models (e.g. architecture) in my app and then render them when the user has finished modeling. I do not know whether I would be able to design models and then render them, with various effects, in the same app on the iPhone/Android using UDK or Unity3d. (Note: if you find this question unclear please ask; I may have skipped some vital information.)

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 type while maintaining spatial considerations. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check if those projections overlap, but between the severe lack of XNA-specific help online and the pseudocode everywhere that omits key parts of the algorithm, googling has left me little help. I'm aware of HOW to project a vector, but the way I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as random points.
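
    For SAT it really is that direct: treat each corner as a position vector from the world origin and dot it with the (normalized) axis; as long as both shapes are projected against the same origin, the resulting [min, max] intervals are directly comparable, because any common offset just shifts both intervals by the same amount. A small C++ sketch of the projection and the per-axis overlap test (plain structs standing in for XNA's Vector2):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        Vec2 normalize(Vec2 v) {
            float len = std::sqrt(dot(v, v));
            return { v.x / len, v.y / len };
        }

        struct Interval { float min, max; };

        // Project every corner of a polygon onto the axis and keep the extremes.
        Interval project(const std::vector<Vec2>& corners, Vec2 axis) {
            Interval out { dot(corners[0], axis), dot(corners[0], axis) };
            for (const Vec2& c : corners) {
                float p = dot(c, axis);                 // scalar position of this corner along the axis
                out.min = std::min(out.min, p);
                out.max = std::max(out.max, p);
            }
            return out;
        }

        // One SAT axis test: the shapes are separated on this axis if the intervals don't overlap.
        bool overlapsOnAxis(const std::vector<Vec2>& a, const std::vector<Vec2>& b, Vec2 axis) {
            axis = normalize(axis);
            Interval ia = project(a, axis);
            Interval ib = project(b, axis);
            return ia.max >= ib.min && ib.max >= ia.min;
        }

    In XNA the same projection is just Vector2.Dot(corner, axis) applied to each corner in the list.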

    Read the article

  • In concept, how is animation done?

    - by sharethis
    My first approaches to animation for my game relied mostly on sine and cosine functions with time as the parameter. For a jump, a perfect sine function is acceptable, but for motions of arms, weapons or the face it would look quite unnatural. Moreover, patching every animation together out of sines and cosines quickly reaches its limits. I have heard of skeletons and rigging already. Although I could not implement skeletal animation, I can't imagine that the quite natural animations in major games are made of static predefined motion states. So how, in general, is animation done today?
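
    In practice, most character animation is keyframed: an artist authors poses (joint rotations of a skeleton) at certain times, the engine interpolates between neighbouring keyframes every frame and blends clips together as needed, and the skinned mesh then follows the bones. A stripped-down C++ sketch of the core sampling step for a single joint value (real engines interpolate quaternions per bone, but the logic is the same):

        #include <cstddef>
        #include <vector>

        struct Keyframe {
            float time;    // seconds into the clip
            float angle;   // joint rotation at that time (a quaternion in a real engine)
        };

        // Sample a clip at time t by linearly interpolating the two surrounding keyframes.
        float sampleClip(const std::vector<Keyframe>& clip, float t) {
            if (t <= clip.front().time) return clip.front().angle;
            if (t >= clip.back().time)  return clip.back().angle;
            for (std::size_t i = 1; i < clip.size(); ++i) {
                if (t <= clip[i].time) {
                    const Keyframe& a = clip[i - 1];
                    const Keyframe& b = clip[i];
                    float s = (t - a.time) / (b.time - a.time);   // 0..1 between the two keys
                    return a.angle + (b.angle - a.angle) * s;
                }
            }
            return clip.back().angle;
        }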

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version; weren't they there already)? Should I still use the matrix stack (why or why not)? Should I still use glFrustum? Can I take advantage of shaders with GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming with it? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data plus a shader?)? Note: take your time to answer this question; I'll accept an answer tomorrow.
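
    For orientation, the core GL 2.1 path looks like this: upload vertex data to a VBO once, then each frame bind it, point a GLSL 1.20 attribute at it, and draw; animation of static geometry is done by changing a uniform (e.g. a model matrix) rather than re-uploading the vertex data. A minimal sketch of those calls in C++ (context creation and shader compilation omitted; "program" and "a_position" are assumed to come from your own GLSL 1.20 shader):

        #include <GL/gl.h>   // plus an extension loader such as GLEW for the VBO/shader entry points

        GLuint vbo = 0;

        // One-time setup: copy the triangle's vertices into a buffer object on the GPU.
        void createBuffer() {
            const GLfloat vertices[] = {
                -0.5f, -0.5f, 0.0f,
                 0.5f, -0.5f, 0.0f,
                 0.0f,  0.5f, 0.0f,
            };
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        }

        // Every frame: bind the buffer, describe the attribute layout, and draw.
        void drawBuffer(GLuint program) {
            glUseProgram(program);
            GLint pos = glGetAttribLocation(program, "a_position");  // attribute from the 1.20 shader

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(pos);
            glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, (const void*)0);

            glDrawArrays(GL_TRIANGLES, 0, 3);

            glDisableVertexAttribArray(pos);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Without VAOs (which only became core in OpenGL 3.0), the attribute setup simply has to be repeated for each draw, as above.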

    Read the article
