Search Results

Search found 26124 results on 1045 pages for 'unreal development kit'.


  • Cocos 2D - Hold down CCMenuItem

    - by Will Youmans
    I am using the following code to move a CCSprite left and right. -(id)init{ CCMenuItemImage * moveLeftButton = [CCMenuItemImage itemFromNormalImage:@"Move Left Button.png" selectedImage:@"Move Left Button.png" target:self selector:@selector(moveLeftVoid:)]; } -(void)moveLeftVoid:(id)sender{ id moveLeft = [CCMoveBy actionWithDuration:.3 position:ccp(-10, 0)]; [_mainSprite runAction:moveLeft]; } This does work, but only as a single tap. What I want is for the CCSprite to move continuously in that direction while the CCMenuItem is held down; when it's released, the character stops moving. If you need to see more code, please just ask. :) Thanks
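
    For reference, the usual fix is the press-and-hold pattern: set a flag when the button is pressed, clear it when it is released, and move the sprite on every update tick while the flag is set. A minimal, engine-neutral sketch (shown in Java for illustration; the class and method names are hypothetical, not cocos2d API):

        // Sketch of the press-and-hold pattern: a flag set by the button's
        // touch-began handler, cleared by its touch-ended handler, and
        // consumed once per frame in the update loop.
        public class HoldToMove {
            private boolean movingLeft = false;

            public void onLeftPressed()  { movingLeft = true;  } // touch began on the button
            public void onLeftReleased() { movingLeft = false; } // touch ended / cancelled

            // Called once per frame with the elapsed time in seconds.
            public void update(float dt, Sprite mainSprite) {
                if (movingLeft) {
                    mainSprite.x -= 100f * dt; // constant speed while the button is held
                }
            }

            // Minimal stand-in for whatever sprite type the engine provides.
            public static class Sprite { public float x, y; }
        }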

    Read the article

  • Based on a user drawing, create a polygon body as well as an image

    - by Siddharth
    In my game, I want to provide the user with a drawing feature: by drawing freehand, the user creates a polygon shape. In my implementation I then have to create a body from the vertices found and generate an image based on that polygon shape. My problem is how to create an image that matches the vertices the user provided. I have heard that cocos2d has an implementation of something like image masking, but I don't understand how to implement that in AndEngine. Please provide any guidance on how to create an image that matches the user-drawn polygon shape.

    Read the article

  • What's the best way to handle slopes in a platformer game using Box2D

    - by songokuhd
    I would like to know if there is any known solution for handling the player's movement on slopes using the Box2D engine. I tried to do it using a circle as the player. Everything was fine until I tried to walk on slopes; the main problem is that, due to gravity, the circle does not stop on the slope. If somebody has tried this before, I'd appreciate the help. A solution that works without the physics engine would be fine for me too. Thank you.
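
    A common workaround, sketched below assuming the libGDX Box2D wrapper (an assumption, since the question does not name a binding): whenever the player is grounded and no movement input is held, zero the horizontal velocity (or raise the fixture's friction) so gravity cannot drag the circle along the slope.

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;

        // Sketch: stop the circle from sliding down slopes when the player gives no input.
        // Assumes a grounded check (e.g. via a foot-sensor fixture) exists elsewhere.
        public class SlopeHelper {
            public void update(Body player, boolean grounded, boolean movingInput) {
                if (grounded && !movingInput) {
                    Vector2 v = player.getLinearVelocity();
                    // Zero the horizontal component so gravity cannot pull the circle
                    // sideways along the slope; keep the vertical component untouched.
                    player.setLinearVelocity(0f, v.y);
                }
            }
        }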

    Read the article

  • Modular spaceship control

    - by SSS
    I am developing a physics based game with spaceships. A spaceship is constructed from circles connected by joints. Some of the circles have engines attached. Engines can rotate around the center of circle and create thrust. I want to be able to move the ship in a direction or rotate around a point by setting the rotation and thrust for each of the ship's engines. How can I find the rotation and thrust needed for each engine to achieve this?
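
    One common way to formalize this (not stated in the question) is to treat each engine's thrust as an unknown 2D force and solve a small constrained system, for example in the least-squares sense:

        % Engine i sits at offset r_i from the centre of mass and produces force f_i.
        % Solve for the f_i subject to the desired net force F and net torque \tau:
        \sum_i \mathbf{f}_i = \mathbf{F}, \qquad
        \sum_i \left( r_{i,x}\, f_{i,y} - r_{i,y}\, f_{i,x} \right) = \tau, \qquad
        \lVert \mathbf{f}_i \rVert \le f_{\max}

    Each engine's rotation is then atan2(f_{i,y}, f_{i,x}) and its thrust is the magnitude of f_i; when the system is under- or over-determined, a least-squares solver picks the closest achievable combination.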

    Read the article

  • How can I set the rotation of a shape to the same as my image?

    - by BleedObsidian
    The way you set rotations of images is different from setting shape rotations. So how can I make the shape have the same rotation as my image? This is how my image rotates: if(input.isKeyDown(Input.KEY_RIGHT)) { rotate += rotateSpeed * delta; image.rotate(rotate - image.getRotation()); } How can I get the same effect but with a shape? For example: How can I get that rectangle to be at the same rotation as the car?
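
    Assuming this is Slick2D (the Image/Input calls suggest it, though the question does not say so), a Shape can be rotated by applying a Transform; note that Slick's image rotation is in degrees while Transform.createRotateTransform expects radians. A sketch:

        import org.newdawn.slick.geom.Rectangle;
        import org.newdawn.slick.geom.Shape;
        import org.newdawn.slick.geom.Transform;

        // Sketch: keep a Slick2D shape at the same rotation as the image.
        // 'rotateDegrees' is the same degrees value passed to image.rotate(...).
        public class ShapeRotation {
            public static Shape rotated(Rectangle base, float rotateDegrees) {
                // Rotate around the rectangle's centre, like the car image.
                return base.transform(Transform.createRotateTransform(
                        (float) Math.toRadians(rotateDegrees),
                        base.getCenterX(), base.getCenterY()));
            }
        }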

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code with the use of Continuous Integration. The main issue is that I don't really understand how it works when several programmers are working on various aspects of the code at the same time; I can't visualize it. Here's what I know, but I might be wrong: New code is sent to the repository. The Continuous Integration server asks the Version Control System whether there is new code in the repository. If there is, the CI server executes tests on the code. If the tests show there are problems, the CI server orders the VCS to revert to a working version of the code and reports it to the programmer. If the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe? Are my assumptions above correct? If yes, does it also work when several programmers update the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you give examples of CI server software and VCS software and whatever else I need? Does CI server software perform the code tests, or do I need another tool for that and integrate it with the CI server? Is there repository software?

    Read the article

  • Finding cubes in frustum

    - by salmonmoose
    Working with an infinite set of cubes, is there a way of detecting which cubes exist within a frustum? Most frustum culling seems to work along the lines of running through all objects and seeing if they intersect - this is ok with a finite set of objects, or something like Octrees. I'm currently finding all cubes within the frustum's bounding box - but that's far more than I really need. I could then test these all against it, but I was wondering if I could skip a step.
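
    For reference, the usual cheap per-cube check (an engine-neutral sketch, not taken from the asker's code) treats each cube as a centre plus half-extent and rejects it only if it lies entirely behind one of the six frustum planes:

        // Sketch of a frustum-vs-axis-aligned-cube test.
        // Each plane is (a, b, c, d) with the normal pointing into the frustum,
        // so a point p is inside the half-space when a*p.x + b*p.y + c*p.z + d >= 0.
        public class FrustumCubeTest {
            public static boolean cubeIntersectsFrustum(float[][] planes,
                                                        float cx, float cy, float cz,
                                                        float halfSize) {
                for (float[] p : planes) {
                    // Distance from the cube centre to the plane, plus the cube's
                    // "effective radius" along the plane normal (the positive vertex).
                    float dist = p[0] * cx + p[1] * cy + p[2] * cz + p[3];
                    float radius = halfSize * (Math.abs(p[0]) + Math.abs(p[1]) + Math.abs(p[2]));
                    if (dist + radius < 0) {
                        return false; // completely behind this plane -> outside
                    }
                }
                return true; // intersects or is fully inside the frustum
            }
        }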

    Read the article

  • How do I find the unit vector of a vector in Java?

    - by Shijima
    I'm writing a Java formula based on this tutorial: 2-D Elastic Collisions Without Trigonometry. I am in the section "Elastic Collisions in 2 Dimensions". Part of step 1 says: Next, find the unit vector of n, which we will call un. This is done by dividing by the magnitude of n. The code below represents the normal vector of two objects (I'm using a simple array to represent the normal vector). int[] normal = new int[2]; normal[0] = ball2.x - ball1.x; normal[1] = ball2.y - ball1.y; I am unsure what the tutorial means by dividing by the magnitude of n to get un. What is un? How can I calculate it with my Java array?
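
    For reference, un is simply n scaled to length 1: divide each component by the vector's magnitude. A sketch (using double instead of int, since integer division would truncate the result):

        // Sketch: compute the unit (normalized) vector un = n / |n|,
        // i.e. each component divided by the vector's length.
        public class VectorMath {
            public static double[] unitVector(double nx, double ny) {
                double magnitude = Math.sqrt(nx * nx + ny * ny); // |n|
                if (magnitude == 0) {
                    return new double[] {0, 0}; // the two balls are at the same point
                }
                return new double[] {nx / magnitude, ny / magnitude};
            }
        }

    Usage: double[] un = VectorMath.unitVector(ball2.x - ball1.x, ball2.y - ball1.y);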

    Read the article

  • Assigning an item to an existing array in a list within a dictionary [on hold]

    - by Rouke
    I have a Dictionary declared like: public var PoolDict : Dictionary.<String, List.<GameObject[]> >; I made a function to add items to the list and array function Add(key:String, obj:GameObject) { if(!PoolDict.ContainsKey(key)) { PoolDict[key] = new List.<GameObject[]>(); } //PlaceHolder - Not what will be in final version PoolDict[key].Add(null); //Attempts - Errors- How to add to existing array? PoolDict[key].Add(obj); PoolDict[key][0].Add(obj); } I'd like to replace the line after //PlaceHolder with code that will assign a gameObject to an existing array in a list that's associated with a key. How could this be done?

    Read the article

  • Touch event does not work with pinch zoom

    - by Siddharth
    I have implemented pinch-zoom functionality for my tower defense game. I manage a separate entity that displays all the towers; from that entity the player selects a tower and drags it to the position where he wants to place it. I attached the entity to the HUD as well, so that when the user scrolls and zooms the region, the towers remain visible all the time. Basically, I created the separate entity only to show and hide towers, so I can manage it easily. My problem: when I have not scrolled or zoomed, touching and dragging a tower works fine, but after I zoom and scroll the scene, the tower's touch event is no longer called, so the player cannot drag and drop it to the desired position. Can anybody please help me figure this out?

    Read the article

  • Windows Phone 7 Networked Game

    - by Craig
    I'm creating a multiplayer asteroids-type game for Windows Phone 7, where two players can challenge each other over who gets the highest score. On each player's phone the opponent is displayed, and both go about shooting asteroids and enemies. In an assignment I have due I would like to discuss the packet design: what is the least amount of information I can send over the connection, instead of constantly having to send each player's position, asteroid positions, bullet positions, enemy positions and so on? Or would all that data constantly need to be sent?
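
    As a rough illustration of how small such a packet can get (a sketch with hypothetical field choices, not taken from the game): if both phones spawn asteroids and enemies deterministically from a seed exchanged once at match start, each regular update only needs the local player's state and score.

        import java.nio.ByteBuffer;

        // Sketch of a compact per-frame packet: ~17 bytes instead of the full world state.
        // Asteroid and enemy positions can be recomputed on both phones from a shared
        // random seed exchanged once at the start of the match.
        public class PlayerStatePacket {
            public static byte[] encode(byte playerId, float x, float y,
                                        float rotation, int score) {
                ByteBuffer buf = ByteBuffer.allocate(1 + 4 + 4 + 4 + 4);
                buf.put(playerId);
                buf.putFloat(x);
                buf.putFloat(y);
                buf.putFloat(rotation);
                buf.putInt(score);
                return buf.array();
            }
        }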

    Read the article

  • Moving two objects proportionally

    - by SSL
    I'm trying to move two objects away from each other at a proportional distance, but on different scales, and I'm not quite sure how to do this. Object A can go from position 0.1 to 1; Object B has no limits. If Object B is decreasing, then Object A should decrease at rate R. Likewise, if Object B is increasing, then Object A increases at rate R. How can I tie these two objects' positions together so that, in an update loop, they automatically update their positions? I tried using: ObjA.Pos += 0.001f * ObjB.VelocityY; //0.001f is the rate This works, but there's an error each time it runs. ObjA starts off at its max position of 1, but then on each run it stops at 0.97, 0.94, 0.91, etc. This is due to the 0.001f rate I put in. Is there a way to control the rate, yet not end up with the rounding error?
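
    One way to avoid the accumulating drift (a sketch, not the asker's code) is to derive Object A's position directly from Object B's position rather than integrating B's velocity every frame, and then clamp it to A's allowed range:

        // Sketch: tie object A's position to object B's position by a fixed rate,
        // instead of integrating B's velocity, so rounding errors cannot accumulate.
        public class ProportionalLink {
            private static final float RATE = 0.001f;   // same rate as in the question
            private final float objAStart;
            private final float objBStart;

            public ProportionalLink(float objAStart, float objBStart) {
                this.objAStart = objAStart;
                this.objBStart = objBStart;
            }

            public float objAPosition(float objBPos) {
                float pos = objAStart + RATE * (objBPos - objBStart);
                // Clamp to object A's allowed range [0.1, 1].
                return Math.max(0.1f, Math.min(1f, pos));
            }
        }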

    Read the article

  • Confusing Box2D forces

    - by Diken
    Hello friends. This is my demo game screenshot. I am using three buttons: the bottom-right button is used to jump, and the bottom-left buttons are used to move left and right. I have some questions: 1) Should I use a linear impulse to make the body jump? 2) For moving right and left, which type of force should I apply? Please help; I am confused about when to use linearImpulse, applyForce and linearVelocity. Thanks in advance.
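
    A common split, sketched here against the libGDX Box2D wrapper (an assumption; the question does not say which binding is used): a one-off linear impulse for the jump, and a directly set horizontal velocity (or a continuous force) for walking left and right.

        import com.badlogic.gdx.physics.box2d.Body;

        // Sketch: jump with a single impulse, walk by setting horizontal velocity.
        public class PlayerController {
            private static final float JUMP_IMPULSE = 5f;  // tune to the body's mass
            private static final float WALK_SPEED   = 3f;  // metres per second

            public void jump(Body body) {
                // An impulse changes velocity instantly; good for a one-shot jump.
                body.applyLinearImpulse(0f, JUMP_IMPULSE * body.getMass(),
                        body.getWorldCenter().x, body.getWorldCenter().y, true);
            }

            public void moveRight(Body body) {
                // Setting the velocity directly gives crisp, predictable walking;
                // applyForce would instead accelerate the body gradually.
                body.setLinearVelocity(WALK_SPEED, body.getLinearVelocity().y);
            }

            public void moveLeft(Body body) {
                body.setLinearVelocity(-WALK_SPEED, body.getLinearVelocity().y);
            }
        }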

    Read the article

  • How to pause and unpause the animation of a sprite?

    - by user1609578
    My game has a sprite representing a character. When the character picks up an item, the sprite should stop moving for a period of time. I use CCbezier to make the sprite move, like this: sprite->runaction(x) Now I want the sprite to stop its current action (moving) and later resume it. I can make the sprite stop by using: sprite->stopaction(x) but if I do that, I can't resume the movement. How can I do that?

    Read the article

  • What's the right program for me?

    - by andyphillips20
    I would like to learn C++ so I can get a job in the game industry, but there are so many options that it's a little confusing. I know most of you will say I should read up on C++ before attempting to program in it, but I learn best by doing things rather than reading. That being said, I don't understand some of the things suggested in other questions (because I've read a few trying to find what's right for me), so putting things in the simplest terms would be helpful. I've been making a couple of 2D games using GameMaker, and if there's a C++ equivalent, that would be perfect; if not, I would like an IDE that allows me to easily continue making 2D games and is fairly simple to learn. Having a 2D sprite editor would be a nice plus, but I can understand if not everything I want comes in one program.

    Read the article

  • Sprite animation in OpenGL

    - by Sid
    I am facing problems implementing sprite animation in OpenGL ES. I've googled it, and the only thing I am finding is tutorials implementing it via Canvas. I know the general approach, but I am having problems implementing it. What I need: a sprite animation on collision detection. What I did: the collision detection function is working properly. PS: everything is working fine, but I want to implement the animation in OpenGL only; Canvas won't work in my case.
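
    The core of sprite animation in OpenGL ES is texture-coordinate math on a sprite sheet: pick the current frame from elapsed time and draw the quad with that frame's UV rectangle. An engine-neutral sketch of the frame/UV selection (the actual draw call is omitted):

        // Sketch: select the current frame of a sprite sheet and compute its UV rectangle.
        // The quad is then drawn each frame with these texture coordinates.
        public class SpriteSheetAnimation {
            private final int columns, rows, frameCount;
            private final float frameDuration;      // seconds per frame
            private float elapsed;

            public SpriteSheetAnimation(int columns, int rows, float frameDuration) {
                this.columns = columns;
                this.rows = rows;
                this.frameCount = columns * rows;
                this.frameDuration = frameDuration;
            }

            /** Returns {u0, v0, u1, v1} for the current frame. */
            public float[] update(float deltaSeconds) {
                elapsed += deltaSeconds;
                int frame = (int) (elapsed / frameDuration) % frameCount;
                int col = frame % columns;
                int row = frame / columns;
                float w = 1f / columns, h = 1f / rows;
                return new float[] { col * w, row * h, (col + 1) * w, (row + 1) * h };
            }
        }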

    Read the article

  • PHP - How to choose an XML section based on an attribute?

    - by Vincent
    All, I have a config XML file in the following format: <?xml version="1.0"?> <configdata> <development> <siteTitle>You are doing Development</siteTitle> </development> <test extends="development"> <siteTitle>You are doing Testing</siteTitle> </test> <production extends="development"> <siteTitle>You are in Production</siteTitle> </production> </configdata> To read this config file and apply environment settings, I am currently using the following code in the index.php file: $appEnvironment = "production"; $config = new Zend_Config_Xml('/config/settings.xml', $appEnvironment ); To deploy this code to multiple environments, a user has to change the index.php file. Instead of doing that, is it possible to maintain an attribute in the XML file, say active="true", based on which Zend_Config_Xml will know which section of the XML settings to read? Thanks

    Read the article

  • How do audio based games such as Audiosurf and Beat Hazard work?

    - by The Communist Duck
    Note: I am not asking how to make a clone of one of these. I am asking about how they work. I'm sure everyone's seen the games where you use your own music files (or provided ones) and the game produces levels based on them, such as Audiosurf and Beat Hazard. Here is a video of Audiosurf in action, to show what I mean. If you provide a heavy metal song, you get a completely different set of obstacles, enemies, and game experience from something like Vivaldi. What interests me is how these games work. I do not know much about audio (well, on the data side), but how do they process the song to understand when it is settling down or when it's speeding up? I guess they could just feed the pitch values (assuming those sorts of things exist in audio files) into forming a level, but that wouldn't fully explain it. I'm either looking for an explanation, some links to articles about this sort of thing (I'm sure there's a term or terms for it), or even an open-source implementation of this kind of thing ;-) EDIT: After some searching and a little help, I found out about FFT (Fast Fourier Transform). This may be a step in the right direction, but it is something that does not yet make sense to me, or fit with my physics knowledge of waves.
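
    For a flavour of what such games do, here is a heavily simplified sketch of energy-based onset (beat) detection on raw PCM samples: a window that is clearly louder than the recent average is flagged as a beat. Real games combine this with frequency analysis such as the FFT mentioned above, but the idea is the same.

        // Sketch: naive energy-based beat detection over mono PCM samples in [-1, 1].
        // A window whose energy clearly exceeds the recent average is marked as a beat.
        public class BeatDetector {
            public static boolean[] detectBeats(float[] samples, int windowSize,
                                                float sensitivity) {
                int windows = samples.length / windowSize;
                float[] energy = new float[windows];
                boolean[] beats = new boolean[windows];

                for (int w = 0; w < windows; w++) {
                    float sum = 0;
                    for (int i = 0; i < windowSize; i++) {
                        float s = samples[w * windowSize + i];
                        sum += s * s;                      // instantaneous energy
                    }
                    energy[w] = sum / windowSize;
                }

                int history = 43;                          // ~1 second at 44.1 kHz / 1024-sample windows
                for (int w = 0; w < windows; w++) {
                    float avg = 0;
                    int start = Math.max(0, w - history);
                    for (int i = start; i < w; i++) avg += energy[i];
                    avg /= Math.max(1, w - start);
                    beats[w] = w > 0 && energy[w] > sensitivity * avg;
                }
                return beats;
            }
        }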

    Read the article

  • How to move the rigidbody to the position of the mouse on release

    - by Edvin
    I'm making a "Can Knockdown" game, and I need the rigidbody to move to where the player released the mouse (OnMouseUp). At the moment the ball does move on OnMouseUp because of rigidbody.AddForce(force * factor); it moves toward the mousePosition, but it doesn't end up where the mousePosition is. Here's what I have so far in the script. var factor = 20.0; var minSwipeDistY : float; private var startTime : float; private var startPos : Vector3; function OnMouseDown(){ startTime = Time.time; startPos = Input.mousePosition; startPos.z = transform.position.z - Camera.main.transform.position.z; startPos = Camera.main.ScreenToWorldPoint(startPos); } function OnMouseUp(){ var endPos = Input.mousePosition; endPos.z = transform.position.z - Camera.main.transform.position.z; endPos = Camera.main.ScreenToWorldPoint(endPos); var force = endPos - startPos; force.z = force.magnitude; force /= (Time.time - startTime); rigidbody.AddForce(force * factor); }

    Read the article

  • How do I apply an arcball (using quaternions) along with mouse events, to allow the user to look around the screen using the o3d webgl framework?

    - by Chris
    How do I apply an arcball (using quaternions) along with mouse events, to allow the user to look around the screen using the o3d webgl framework? This sample (http://code.google.com/p/o3d/source/browse/trunk/samples_webgl/o3d-webgl-samples/simpleviewer/simpleviewer.html?r=215) uses the arcball for rotating the transform of an "object", but rather than apply this to a transform, I would like to apply the rotation to the camera's target, to create a first-person-style ability to look around the scene, as if the camera is inside the centre of the arcball instead of rotating from the outside. The code used in this sample is: var rotationQuat = g_aball.drag([e.x, e.y]); var rot_mat = g_quaternions.quaternionToRotation(rotationQuat); g_thisRot = g_math.matrix4.mul(g_lastRot, rot_mat); The code that I am using, which doesn't work, is: var rotationQuat = g_aball.drag([e.x, e.y]); var rot_mat = g_quaternions.quaternionToRotation(rotationQuat); g_thisRot = g_math.matrix4.mul(g_lastRot, rot_mat); var cameraRotationMatrix4 = g_math.matrix4.lookAt(g_eye, g_target, [g_up[0], g_up[1] * -1, g_up[2]]); var cameraRotation = g_math.matrix4.setUpper3x3(cameraRotationMatrix4,g_thisRot); g_target = g_math.addVector(cameraRotation, g_target); Where am I going wrong? Thanks

    Read the article

  • Framework to implement an in-game GUI editor

    - by momboco
    I need to build an in-game GUI editor. The game engine has its own widget elements, and I don't want a GUI library that substitutes for them. The most difficult task is implementing the functionality that makes it usable for artists and designers: positioning, resizing, alignment between elements, multi-selection, parent/child relationships, adding guides, snapping to place elements quickly, use of layers, undo/redo, and so on. I'm searching for a framework, or something like one, with these functionalities already implemented, and a way to plug in my own engine to make use of it. Ideally it would be a mix between a tool like Photoshop and libRocket (independent of the rendering engine).

    Read the article

  • Libgdx change color of Texture at runtime [on hold]

    - by Springrbua
    I already asked this on Stack Overflow, but I think this question may belong here. In a libGDX game I have some animations for my player. All the frames for these animations are inside a TextureAtlas. The player textures show a human with a white T-shirt, and the T-shirt is the only white part of the player. Now I want to be able to replace the white color with red for player 1, green for player 2 and so on. How can I do that without losing the advantage of the TextureAtlas (avoiding texture switches)? Of course, one way would be to store four versions of every frame, one for each player color. But there are games out there where you can fully customize the player: give him a blue hat, red pants, a pink shirt and so on. How can this be done? Thanks a lot! EDIT: The question on Stack Overflow
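
    One standard trick, sketched against the libGDX SpriteBatch API and assuming the white shirt is exported as its own region in the atlas (an assumption about the art): white pixels multiplied by a vertex colour take on that colour, so the shirt layer can be drawn tinted per player without any extra textures.

        import com.badlogic.gdx.graphics.Color;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;

        // Sketch: draw the body normally, then draw the white shirt region tinted with
        // the player's colour. White art multiplied by a vertex colour becomes that
        // colour, so one atlas serves every player and no texture switch is needed.
        public class PlayerRenderer {
            public void render(SpriteBatch batch, TextureRegion body,
                               TextureRegion whiteShirt, Color playerColor,
                               float x, float y) {
                batch.setColor(Color.WHITE);       // body keeps its original colours
                batch.draw(body, x, y);

                batch.setColor(playerColor);       // e.g. Color.RED for player 1
                batch.draw(whiteShirt, x, y);

                batch.setColor(Color.WHITE);       // reset the tint for later draws
            }
        }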

    Read the article

  • (Phaser) Preload Future States in Create?

    - by Brian
    I'm a first-time user of Phaser and have been trying to make a simple point-and-click game. I'm trying to keep things very modular, so I'm defining a list of levels (states) in a JSON file, and then every level has its own JSON file containing the objects within that level. However, I'm encountering an issue: when changing states, I get a black flash while the assets for the next state load (this happens whether I iterate through the JSON list or define everything manually). From what I've read, all sprites should be loaded in the preload stage; however, by doing this I'm causing that tiny but noticeable black pause. I know one way would be to simply load every asset at the start of the game, but that seems incredibly inefficient (wouldn't that fill up the memory immensely?). I would rather load a state's assets from the "parent" state. However, in my quick test (which maybe I did wrong) it seems that game.load doesn't work properly if done within the create stage? What is the best approach to doing this?

    Read the article

  • Rectangular Raycasting?

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
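
    For the rectangle-rectangle case at least, the overlapping area is itself an axis-aligned rectangle and costs only a few comparisons per pair (SDL2 also provides SDL_IntersectRect for this). An engine-neutral sketch:

        // Sketch: the overlap of two axis-aligned rectangles is itself a rectangle
        // (max of the mins, min of the maxes); empty if the extents are non-positive.
        public class RectOverlap {
            /** Returns {x, y, w, h} of the overlap, or null if the rectangles do not intersect. */
            public static int[] intersection(int ax, int ay, int aw, int ah,
                                             int bx, int by, int bw, int bh) {
                int x1 = Math.max(ax, bx);
                int y1 = Math.max(ay, by);
                int x2 = Math.min(ax + aw, bx + bw);
                int y2 = Math.min(ay + ah, by + bh);
                if (x2 <= x1 || y2 <= y1) {
                    return null;                      // no overlapping area
                }
                return new int[] { x1, y1, x2 - x1, y2 - y1 };
            }
        }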

    Read the article

  • Where in a typical rendering pipeline does visibility and shading occur?

    - by user29163
    I am taking a computer graphics course. The book and the lecture notes are vague on the order of flow between the different steps in the rendering process. For example, if we have specified a view in a scene and then want to perform a projection transformation for that view, we have to go through a sequence of transformations. In the end we end up with a normalized "view cube" ready to be mapped to 2D after clipping. But why do we end up with a cube (i.e., a 3D thing) when a projection projects the 3D objects to 2D? (Is depth information lost?) The other line of reasoning is that all the information needed later is stored within the "cube", that visibility detection and shading are performed with respect to this cube, and that we then perform rasterization.
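
    For context, one standard OpenGL-style perspective matrix shows why depth is not lost: z is remapped into the normalized cube rather than discarded, and it is later used for visibility (the z-buffer) before rasterization.

        % Symmetric view frustum with near n, far f, right r, top t
        % (eye space, camera looking down -z):
        P =
        \begin{pmatrix}
        n/r & 0 & 0 & 0 \\
        0 & n/t & 0 & 0 \\
        0 & 0 & -\tfrac{f+n}{f-n} & -\tfrac{2fn}{f-n} \\
        0 & 0 & -1 & 0
        \end{pmatrix},
        \qquad
        z_{\mathrm{ndc}} = \frac{\tfrac{f+n}{f-n}\,(-z) - \tfrac{2fn}{f-n}}{-z} \in [-1, 1]
        \ \text{for}\ z \in [-f, -n]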

    Read the article
