Search Results

Search found 39156 results on 1567 pages for 'device driver development'.


  • How was collision detection handled in The Legend of Zelda: A Link to the Past?

    - by Restart
    I would like to know how collision detection was done in The Legend of Zelda: A Link to the Past. The game is 16x16 tile based, so how did they handle tiles where only a quarter or half of the tile is occupied? Did they use a smaller grid for collision detection, like 8x8 tiles, so that four of them make up one 16x16 tile of the texture grid? But then they also have true half tiles which are cut diagonally, and the corners of the tiles seem to be rounded. If Link walks into a tile's corner, he can keep on walking and automatically moves around it. How is that done? I hope someone can help me out here.
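
    One common way to get this behaviour is to keep a collision grid four times as dense as the visual tiles (an 8x8 solidity cell per quarter of a 16x16 tile) and, when a straight move is blocked, probe a few pixels to either side and nudge the player toward the first free cell so they slide around the corner. The sketch below illustrates that idea; it is not a confirmed description of how A Link to the Past actually works, and all names and numbers are illustrative.

      // Minimal sketch: 8x8 sub-tile solidity grid plus corner nudging.
      // For brevity the move check probes a single point; a real game would
      // test the player's whole bounding box.
      public class TileCollision
      {
          private readonly bool[,] solid;       // one flag per 8x8 cell
          private const int CellSize = 8;

          public TileCollision(bool[,] solidCells) { solid = solidCells; }

          public bool IsSolid(int px, int py)
          {
              return solid[px / CellSize, py / CellSize];
          }

          // Try to move; if blocked, shift sideways toward a free cell so the
          // player slides around the corner instead of stopping dead.
          public (int x, int y) Move(int x, int y, int dx, int dy)
          {
              if (!IsSolid(x + dx, y + dy))
                  return (x + dx, y + dy);

              for (int nudge = 1; nudge <= 4; nudge++)
              {
                  if (dx != 0 && !IsSolid(x + dx, y - nudge)) return (x, y - 1);
                  if (dx != 0 && !IsSolid(x + dx, y + nudge)) return (x, y + 1);
                  if (dy != 0 && !IsSolid(x - nudge, y + dy)) return (x - 1, y);
                  if (dy != 0 && !IsSolid(x + nudge, y + dy)) return (x + 1, y);
              }
              return (x, y); // fully blocked
          }
      }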

    Read the article

  • Where should I organize my matrices in a 3D game engine?

    - by Need4Sleep
    I'm working with a group of people from around the world to create a game engine (and hopefully a game with it) over the next few years. My first task was writing a camera class for the engine, used to add cameras to the scene and to position and follow points in the scene. The problem I have is with using matrices for transformations in the class: should I keep matrices separate in each class, such as the model matrix in the model class and the camera matrix in the camera class, or have all matrices placed in one class/chunk? I can see pros and cons for each method, but I wanted to hear some input from a more professional standpoint.
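
    A common split is for each renderable object to own its world (model) matrix, for the camera to own the view and projection matrices, and for the renderer to combine them only at draw time, so no class needs to know about the others' internals. A minimal sketch of that arrangement, shown with XNA's Matrix type purely for concreteness:

      using Microsoft.Xna.Framework;

      // The camera owns view/projection, each model owns its world matrix,
      // and the two only meet immediately before a draw call.
      public class Camera
      {
          public Matrix View { get; private set; }
          public Matrix Projection { get; private set; }

          public void LookAt(Vector3 position, Vector3 target, float aspect)
          {
              View = Matrix.CreateLookAt(position, target, Vector3.Up);
              Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect, 0.1f, 1000f);
          }
      }

      public class Model3D
      {
          public Matrix World = Matrix.Identity;   // position/rotation/scale of this object
      }

      public class Renderer
      {
          public Matrix BuildWorldViewProjection(Camera cam, Model3D model)
          {
              return model.World * cam.View * cam.Projection;
          }
      }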

    Read the article

  • How to limit click'n'drag movement to an area?

    - by Vexille
    I apologize for the somewhat generic title. I really don't have much of a clue about how to accomplish what I'm trying to do, which makes it even harder to research a possible solution. I'm trying to implement a path marker of sorts (maybe there's a more suitable name for it, but this is the best I could come up with). In front of the player there will be a path marker, which will determine how the player will move once he finishes planning his turn. The player may click and drag the marker to the position they choose, but the marker can only be moved within a defined working area (the gray bit). So I'm now stuck with two problems: First of all, how exactly should I define that workable area? I can imagine maybe two vectors that have the player as a starting point to form the workable angle, and maybe those two arcs could come from circles that have their center where the player is, but I definitely don't know how to put this all together. And secondly, after I've defined the area where the marker can be placed, how can I enforce that the marker stays within that area? For example, if the player clicks and drags the marker around, it may move freely within the working area, but must not leave its boundaries. So for example, if the player starts dragging the marker upwards, it will move upwards until it hits the end of the working area (first diagram below), but if after that the player starts dragging sideways, the marker must follow the drag while still staying within the area (second diagram below). I hope this wasn't all too confusing. Thanks, guys.
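
    One way to describe the working area is as a wedge: a forward direction from the player, a half-angle to either side of it, and a minimum and maximum radius. Enforcing the boundary then reduces to clamping the dragged point's angle and distance into that wedge every time the mouse moves. A sketch with XNA types; the wedge parameters and names are illustrative assumptions, not values from the question:

      using System;
      using Microsoft.Xna.Framework;

      public static class MarkerClamp
      {
          // Clamp a dragged point into a wedge in front of the player defined by
          // a forward direction, a half-angle and a min/max radius.
          public static Vector2 ClampToSector(Vector2 player, Vector2 forward,
                                              float halfAngle, float minRadius, float maxRadius,
                                              Vector2 dragged)
          {
              Vector2 offset = dragged - player;
              float distance = offset.Length();
              if (distance < 1e-5f)
                  return player + Vector2.Normalize(forward) * minRadius;

              // Clamp the angle between the drag direction and the forward direction.
              float forwardAngle = (float)Math.Atan2(forward.Y, forward.X);
              float dragAngle = (float)Math.Atan2(offset.Y, offset.X);
              float delta = MathHelper.WrapAngle(dragAngle - forwardAngle);
              delta = MathHelper.Clamp(delta, -halfAngle, halfAngle);

              // Clamp the distance between the inner and outer arc.
              distance = MathHelper.Clamp(distance, minRadius, maxRadius);

              float finalAngle = forwardAngle + delta;
              return player + new Vector2((float)Math.Cos(finalAngle), (float)Math.Sin(finalAngle)) * distance;
          }
      }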

    Read the article

  • XNA: calculate normals for line segments

    - by Gerhman
    I am quite new to 3D graphics programming and so far only understand that normals somehow define the direction in which a vertex faces and therefore the direction in which light is reflected. I have no idea how they are calculated though, only that they are defined by a Vector3. For a visualizer that I am creating I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line and then rendering a line list. The thing is that I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. To calculate them, all I have are the start and end positions of each line segment. The image below is a representation of what I think I need to do in the case of an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices and the green numbers represent their indices. I would greatly appreciate it if someone could explain to me how to calculate these normals.
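
    For a segment lying flat in a layer, a sideways-facing normal can be taken as the cross product of the segment's direction with the axis the layers are stacked along. A minimal sketch, assuming the layers are stacked along Vector3.Up; swap in whichever axis applies, and flip the sign if the result points away from the light:

      using Microsoft.Xna.Framework;

      public static class LineNormals
      {
          // Sideways normal for a segment lying in a horizontal layer: the cross
          // product of the segment direction with the stacking axis is perpendicular
          // to both, so it can never point up or down.
          public static Vector3 SidewaysNormal(Vector3 start, Vector3 end)
          {
              Vector3 direction = Vector3.Normalize(end - start);
              return Vector3.Normalize(Vector3.Cross(direction, Vector3.Up));
          }
      }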

    Read the article

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer value (but the density is interpreted as a decimal value between 0 and 1). My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that correspond to the interpolation each vertex has between each voxel, and two decimal values that are used to determine where the fragment lies on the triangle. The material and interpolation attributes are the exact same for each vertex in the triangle. The fragment shader samples each texture that corresponds to each material and then uses the aforementioned couple of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work fine, but it seems like a big memory hog. I won't be able to reuse vertices in the mesh with indexing, and each vertex will have a lot of data associated with it. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?

    Read the article

  • Android threads: trouble wrapping my head around design

    - by semajhan
    I am having trouble wrapping my head around game design. On the Android platform, I have an activity and set its content view to a custom surface view. The custom surface view acts as my panel, and I create instances of all classes and do all the drawing and calculation in there. Question: Should I instead create the instances of the other classes in my activity? I also create a custom thread class that handles the game loop. Question: How do I use this one class in all my activities? Or do I have to create a separate thread each time? In my previous game, I had multiple levels that each had to create an instance of the thread class; in the thread class I had to set constructor methods for each separate level and, in the loop, use a switch statement to check which level it needs to render and update. Sorry if that sounds confusing. I just want to know if the method I am using is inefficient (which it probably is) and how to go about designing this the correct way. I have read many tutorials out there and I am still having lots of trouble with this particular topic. Maybe a link to some tutorials that explain this? Thanks.

    Read the article

  • Unable to access jar. Why?

    - by SystemNetworks
    I was making a game in Java and exported it as a jar file. After that, I opened JarSplice, added the libraries and exported the jar. I added the natives, then I set the main class. I created a fat jar and put it on my desktop. I'm using Mac OS X 10.8 Mountain Lion. When I type java -jar System Front.jar in the terminal, it says unable to access System Front.jar. Even if I double-click the file, it doesn't show up! Help! I'm using Slick. I added the Slick and LWJGL jars as libraries in JarSplice.

    Read the article

  • When does depth testing happen?

    - by Utkarsh Sinha
    I'm working with 2D sprites - and I want to do 3D style depth testing with them. When writing a pixel shader for them, I get access to the semantic DEPTH0. Would writing to this value help? It seems it doesn't. Maybe it's done before the pixel shader step? Or is depth testing only done when drawing 3D things (I'm using SpriteBatch)? Any links/articles/topics to read/search for would be appreciated.
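
    For what it's worth, in XNA 4.0 SpriteBatch.Begin defaults to DepthStencilState.None, so sprites are drawn with no depth testing or writing at all unless a different state is passed in; the depth test itself runs in the output-merger stage after (or, via early-z, before) the pixel shader. A hedged sketch of turning it on, where customEffect stands in for an effect containing the pixel shader that writes DEPTH0:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public static class SpriteDepthExample
      {
          public static void DrawWithDepth(SpriteBatch spriteBatch, Texture2D sprite, Effect customEffect)
          {
              spriteBatch.Begin(SpriteSortMode.Immediate,
                                BlendState.AlphaBlend,
                                SamplerState.LinearClamp,
                                DepthStencilState.Default,        // enable depth test + depth write
                                RasterizerState.CullCounterClockwise,
                                customEffect);

              // The last argument (layerDepth) becomes the sprite's depth value.
              spriteBatch.Draw(sprite, new Vector2(100, 100), null, Color.White,
                               0f, Vector2.Zero, 1f, SpriteEffects.None, 0.5f);

              spriteBatch.End();
          }
      }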

    Read the article

  • Cocos2d and a body with a few collision shapes using Chipmunk

    - by Eimantas
    I'm using Cocos2d (0.99.5) with the Chipmunk physics engine. Currently I'm trying to place a body into the space which is composed of a few circle shapes. Let's say I have a corresponding sprite image which displays an atom (a nucleus + 3 electrons around it, something like this without the orbit lines). In its simplest form, only one circle shape at the center should be enough, which would detect collisions between other objects and the nucleus. Now I'd like to add further circle shapes for each electron. How can I do that? Currently, when I add those shapes to the body and add the body into the Chipmunk space, the shapes (together with the body/sprite) start flickering and spinning with no recognizable pattern (or reason, for that matter).

    Read the article

  • Unity3D: How to make the camera focus on a moving game object with iTween?

    - by nathan
    I'm trying to write a solar system with Unity3D. Planets are sphere game objects rotating around another sphere game object representing the star. What I want to achieve is to let the user click on a planet, zoom the camera in on this planet and then make the camera follow it and keep it centered on the screen while it keeps moving around the star. I decided to use the iTween library and so far I have been able to create the zoom effect using iTween.MoveUpdate. My problem is that the focused planet does not stay properly centered as it moves. Here is the relevant part of my script: void Update () { if (Input.GetButtonDown("Fire1")) { Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition); RaycastHit hit; if (Physics.Raycast(ray, out hit, Mathf.Infinity, concernedLayers)) { selectedPlanet = hit.collider.gameObject; } } } void LateUpdate() { if (selectedPlanet != null) { Vector3 pos = selectedPlanet.transform.position; pos.z = selectedPlanet.transform.position.z - selectedPlanet.transform.localScale.z; pos.y = selectedPlanet.transform.position.y; iTween.MoveUpdate(Camera.main.gameObject, pos, 2); } } What do I need to add to this script to make the selected planet stay centered on the screen? I hosted my current project as a webplayer application so you can see what's going wrong. You can access it here.
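
    One alternative worth sketching (plain Unity API, no iTween, so not necessarily the approach the project should keep): once a planet is selected, reposition the camera at a fixed distance from it every LateUpdate and call LookAt on it; the LookAt is what actually keeps the planet centered while it orbits. Field names below are illustrative.

      using UnityEngine;

      public class PlanetFollowCamera : MonoBehaviour
      {
          public Transform selectedPlanet;     // set from the raycast in Update()
          public float zoomDistance = 10f;
          public float followSmoothing = 5f;

          void LateUpdate()
          {
              if (selectedPlanet == null) return;

              // Desired position: a fixed distance from the planet along the
              // current camera-to-planet axis.
              Vector3 direction = (transform.position - selectedPlanet.position).normalized;
              Vector3 desired = selectedPlanet.position + direction * zoomDistance;

              // Ease toward the desired position, then aim at the planet.
              transform.position = Vector3.Lerp(transform.position, desired, followSmoothing * Time.deltaTime);
              transform.LookAt(selectedPlanet.position);
          }
      }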

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong: Here is the problematic code: std::vector<vec2> axesB = polygonB->GetAxes(); //loop over axes B for(int i = 0; i < axesB.size(); i++) { float minA,minB,maxA,maxB; polygonA->Project(axesB[i],&minA,&maxA); polygonB->Project(axesB[i],&minB,&maxB); float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB); if(intervalDistance >= 0) return false; //Collision not occurring } This function retrieves axes from the polygon: std::vector<vec2> Polygon::GetAxes() { std::vector<vec2> axes; for(int i = 0; i < verts.size(); i++) { vec2 a = verts[i]; vec2 b = verts[(i+1)%verts.size()]; vec2 edge = b-a; axes.push_back(vec2(-edge.y,edge.x).GetNormailzed()); } return axes; } This function returns the normalized vector: vec2 vec2::GetNormailzed() { float mag = sqrt( x*x + y*y ); return *this/mag; } This function projects a polygon onto an axis: void Polygon::Project(vec2* axis, float* min, float* max) { float d = axis->DotProduct(&verts[0]); float _min = d; float _max = d; for(int i = 1; i < verts.size(); i++) { d = axis->DotProduct(&verts[i]); _min = std::min(_min,d); _max = std::max(_max,d); } *min = _min; *max = _max; } This function returns the dot product of the vector with another vector. float vec2::DotProduct(vec2* other) { return (x*other->x + y*other->y); } Could anyone give me a pointer in the right direction to what could be causing this bug?
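
    It is hard to confirm the exact bug from this excerpt alone, but one thing worth noting is that a complete SAT test has to project both polygons onto the edge normals of both polygons; testing only one shape's axes produces exactly this kind of false positive for shapes without right angles. A compact restatement of the full test, written in C# to match the other sketches on this page (the question's own code is C++):

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      public static class Sat
      {
          public static bool Overlap(IList<Vector2> a, IList<Vector2> b)
          {
              foreach (Vector2 axis in Axes(a))
                  if (HasGap(a, b, axis)) return false;
              foreach (Vector2 axis in Axes(b))       // the second polygon's axes too
                  if (HasGap(a, b, axis)) return false;
              return true;                            // no separating axis found
          }

          private static IEnumerable<Vector2> Axes(IList<Vector2> verts)
          {
              for (int i = 0; i < verts.Count; i++)
              {
                  Vector2 edge = verts[(i + 1) % verts.Count] - verts[i];
                  yield return Vector2.Normalize(new Vector2(-edge.Y, edge.X));
              }
          }

          private static bool HasGap(IList<Vector2> a, IList<Vector2> b, Vector2 axis)
          {
              Project(a, axis, out float minA, out float maxA);
              Project(b, axis, out float minB, out float maxB);
              return maxA < minB || maxB < minA;      // disjoint intervals => separating axis
          }

          private static void Project(IList<Vector2> verts, Vector2 axis, out float min, out float max)
          {
              min = max = Vector2.Dot(verts[0], axis);
              for (int i = 1; i < verts.Count; i++)
              {
                  float d = Vector2.Dot(verts[i], axis);
                  if (d < min) min = d;
                  if (d > max) max = d;
              }
          }
      }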

    Read the article

  • UDK ParticleSystem Problem

    - by EmAdpres
    I use the code below to create my particles (in fact, I don't know any other way): spawnedParticleComponents = WorldInfo.MyEmitterPool.SpawnEmitter(ParticleSystem'ParticleName', Location, Rotator ); spawnedParticleComponents.setTranslation(newLocation); ... Unfortunately, I spawn many particles in game. When I play my game, after some time I see the Exceeded max active pooled emitters! warning in the console. To solve the problem, first I tried spawnedParticleComponents.DeactivateSystem();, but it doesn't help. Then I tried WorldInfo.MyEmitterPool.ClearPoolComponents(false);, but that doesn't help either. :( How can I destroy many spawned particles and avoid this warning?

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • How do I make a more or less realistic water surface?

    - by Johnny
    I want to make a water surface similar to the one in this picture: http://www.publicdomainpictures.net/pictures/20000/velka/water-surface-detail-11291208064MpI.jpg I need the water surface from the same viewpoint as in the picture. Is it possible to do this without shaders? I want to develop a little game for the Xbox Live Indie Marketplace, Windows Phone and maybe later iPhone/iPad. How should I make the water surface so that it works on multiple platforms?
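
    Without shaders, one cheap trick is to layer two semi-transparent copies of a tiling water texture and scroll them at different speeds over the scene; it reads as a moving surface, though not as true refraction. A sketch using XNA's SpriteBatch with a wrapping sampler, where waterTexture is assumed to be a tiling texture you supply:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public class WaterLayer
      {
          private float scroll;   // advanced every frame

          // Shifting the source rectangle while sampling with LinearWrap scrolls
          // the texture seamlessly across the destination area.
          public void Draw(SpriteBatch batch, Texture2D waterTexture, Rectangle screenArea, float elapsedSeconds)
          {
              scroll += elapsedSeconds;

              batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.LinearWrap, null, null);

              var layer1 = new Rectangle((int)(scroll * 20) % waterTexture.Width, 0,
                                         screenArea.Width, screenArea.Height);
              var layer2 = new Rectangle(-(int)(scroll * 35) % waterTexture.Width, 10,
                                         screenArea.Width, screenArea.Height);

              batch.Draw(waterTexture, screenArea, layer1, Color.White * 0.6f);
              batch.Draw(waterTexture, screenArea, layer2, Color.White * 0.4f);

              batch.End();
          }
      }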

    Read the article

  • Distance between two 3D objects' faces

    - by Arthur Gibraltar
    I'm really new to programming and I'm making some tests. I couldn't find anywhere on the Internet how to calculate the distance between two 3D objects' faces. Is there any way? To give an example, I have two 3D cubes. Each one has a Vector3 position designating its center in 3D space and an orientation matrix, and each cube has a size (float width, float height and float length). I could get a simple distance between them by calling Vector3.Distance(), but that doesn't consider their sizes, just the positions, so the distance would be between their centers. Is there any way to calculate the distance between the faces? Thanks for any reply.
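
    A useful building block here is the closest point on an oriented box to a point: transform the point into the box's local space, clamp it to the half-extents, and transform back. Evaluating each cube's center against the other cube then gives a face-aware distance estimate; an exact box-to-box distance needs a fuller algorithm such as GJK. A sketch with XNA types; the box layout below is an assumption, not the question's actual classes:

      using Microsoft.Xna.Framework;

      public struct OrientedBox
      {
          public Vector3 Center;
          public Matrix Orientation;    // rotation only
          public Vector3 HalfSize;      // width/2, height/2, length/2

          public Vector3 ClosestPointTo(Vector3 point)
          {
              // Express the point in the box's local axes (transpose = inverse for a rotation).
              Vector3 local = Vector3.Transform(point - Center, Matrix.Transpose(Orientation));

              // Clamp to the extents, then return to world space.
              local = Vector3.Clamp(local, -HalfSize, HalfSize);
              return Center + Vector3.Transform(local, Orientation);
          }

          public float DistanceToSurface(Vector3 point)
          {
              return Vector3.Distance(point, ClosestPointTo(point));
          }
      }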

    Read the article

  • Can't spot the error. Trying to increment

    - by Kevin Jensen Petersen
    I really can't spot the error, or the misspelling. This script should increase the variable currentTime with 1 every second, as long as i am holding the Space button down. This is Unity C#. using UnityEngine; using System.Collections; public class GameTimer : MonoBehaviour { //Timer private bool isTimeDone; public GUIText counter; public int currentTime; private bool starting; //Each message will be shown random each 20 seconds. public string[] messages; public GUIText msg; //To check if this is the end private bool end; void Update () { counter.guiText.text = currentTime.ToString(); if(Input.GetKey(KeyCode.Space)) { if(starting == false) { starting = true; } if(end == false) { if(isTimeDone) { StartCoroutine(timer()); } } else { msg.guiText.text = "You think you can do better? Press 'R' to Try again!"; if(Input.GetKeyDown(KeyCode.R)) { Application.LoadLevel(Application.loadedLevel); } } } if(!Input.GetKey(KeyCode.Space) & starting) { end = true; } } IEnumerator timer() { isTimeDone = false; yield return new WaitForSeconds(1); currentTime++; isTimeDone = true; } }
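
    Reading the excerpt, one likely culprit is that isTimeDone is never initialised: a private bool defaults to false, nothing sets it to true before the check in Update(), so StartCoroutine(timer()) is never reached and currentTime never increments. If that is indeed the issue, the one-line change below lets the first tick through, after which the coroutine toggles the flag as intended:

      // was: private bool isTimeDone;
      private bool isTimeDone = true;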

    Read the article

  • The technology behind 22cans' Curiosity

    - by Cameron Scully
    I don't have a lot of experience with mobile apps and I definitely don't know much about MMOs, but I was wondering what the basic architecture of a game like that would be (understandably some don't consider it a game, but it must use some game theory and implementation). Mainly, how are they able to send/receive real-time feedback on the cube being chipped away by thousands of players on their mobile devices? How is the data for the cube's millions of pieces stored and accessed so quickly? Thanks

    Read the article

  • Align tetrahedrons

    - by thedeadlybutter
    I'm currently generating tetrahedron meshes in Unity. When a player clicks the side of a mesh, a new one should spawn aligned with it, like this. I'm not sure how, nor can I find any information on implementing a tetrahedron grid. I tried playing around with the vertices until I realized I need to adjust position and rotation. Any ideas? EDIT: To be clear, the second image shows manually placed objects in the Unity Editor. I'm looking to make an algorithm that places the meshes correctly.
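
    Since the new tetrahedron shares the clicked face, its fourth vertex can be found by reflecting the existing apex across that face's plane; that yields the aligned position and orientation in one step, because all four vertices of the new mesh are then known. A minimal sketch with Unity types:

      using UnityEngine;

      public static class TetraAlign
      {
          // Reflect a point across the plane through three non-collinear points.
          public static Vector3 ReflectAcrossFace(Vector3 a, Vector3 b, Vector3 c, Vector3 apex)
          {
              Vector3 normal = Vector3.Cross(b - a, c - a).normalized;
              float distance = Vector3.Dot(apex - a, normal);
              return apex - 2f * distance * normal;
          }

          // New tetrahedron = the clicked face (a, b, c) plus the mirrored apex.
          public static Vector3[] BuildNeighbour(Vector3 a, Vector3 b, Vector3 c, Vector3 oldApex)
          {
              return new[] { a, b, c, ReflectAcrossFace(a, b, c, oldApex) };
          }
      }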

    Read the article

  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game and have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting items from the lists, a code is generated based on the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already been a lot of work, as I have had to create quite a few spritesheets based on the possible combinations, and I am now looking at a massive amount of work to implement each variation. I have started to look into layering the items to reduce the workload, but I am also looking at having different stances for the character, depending on the currently equipped weapon, so this may be a lot of work either way. My question is: do I have any alternatives, or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?
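
    One common alternative is to skip pre-baked combination sheets entirely: keep one spritesheet per item slot and draw the layers back-to-front at the character's position each frame, using the same source rectangle so they stay in sync with the current frame and stance. The per-frame cost is a handful of extra quads, which SpriteBatch absorbs easily. A minimal sketch with illustrative names:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public class LayeredCharacter
      {
          // One sheet per slot, all laid out with identical frame positions.
          public Texture2D Body, LowerBody, Chest, Headgear, Hair;

          public void Draw(SpriteBatch batch, Vector2 position, Rectangle sourceRect)
          {
              // Same source rectangle on every sheet keeps the layers in sync
              // with the current animation frame and stance.
              batch.Draw(Body,      position, sourceRect, Color.White);
              batch.Draw(LowerBody, position, sourceRect, Color.White);
              batch.Draw(Chest,     position, sourceRect, Color.White);
              batch.Draw(Headgear,  position, sourceRect, Color.White);
              batch.Draw(Hair,      position, sourceRect, Color.White);
          }
      }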

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* Path finding to decide the course of a sprite through multiple waypoints. I have done this for point A to point B locations but am having trouble with multiple waypoints, because on slower devices when the FPS slows and the sprite travels PAST a waypoint I am lost as to the math to switch directions at the proper place. EDIT: To clarify my path finding code is separate in a game thread, this onUpdate method lives in a sprite like class which happens in the UI thread for sprite updating. To be even more clear the path is only updated when objects block the map, at any given point the current path could change but that should not affect the design of the algorithm if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :- ) Here is the scenario: public void onUpdate(float pSecondsElapsed) { // this could be 4x speed, so on slow devices the travel moved between // frames could be very large. What happens with my original algorithm // is it will start actually doing circles around the next waypoint.. pSecondsElapsed *= SomeSpeedModificationValue; final int spriteCurrentX = this.getX(); final int spriteCurrentY = this.getY(); // getCoords contains a large array of the coordinates to each waypoint. // A waypoint is a destination on the map, defined by tile column/row. The // path finder converts these waypoints to X,Y coords. // // I.E: // Given a set of waypoints of 0,0 to 12,23 to 23, 0 on a 23x23 tile map, each tile // being 32x32 pixels. This would translate in the path finder to this: // -> 0,0 to 12,23 // Coord : x=16 y=16 // Coord : x=16 y=48 // Coord : x=16 y=80 // ... // Coord : x=336 y=688 // Coord : x=336 y=720 // Coord : x=368 y=720 // // -> 12,23 to 23,0 -NOTE This direction change gives me trouble specifically // Coord : x=400 y=752 // Coord : x=400 y=720 // Coord : x=400 y=688 // ... // Coord : x=688 y=16 // Coord : x=688 y=0 // Coord : x=720 y=0 // // The current update index, the index specifies the coordinate that you see above // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80 final int[] coords = getCoords( ... ); // now I have the coords, how do I detect where to set the position? The tricky part // for me is when a direction changes, how do I calculate based on the elapsed time // how far to go up the new direction... I just can't wrap my head around this. this.setPosition(newX, newY); }
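
    One way to handle the overshoot is to advance a distance of speed times the elapsed time along the polyline each update, consuming whole segments while that distance still exceeds the distance to the next coordinate and carrying the leftover into the following segment; a large frame time then steps cleanly around a direction change instead of circling a waypoint. A sketch of that loop, written in C# to match the other examples on this page (the question itself is Android/Java), with illustrative names:

      using Microsoft.Xna.Framework;

      public class PathFollower
      {
          private readonly Vector2[] coords;   // the flattened path coordinates
          private int nextIndex = 1;
          public Vector2 Position;

          public PathFollower(Vector2[] pathCoords)
          {
              coords = pathCoords;
              Position = coords[0];
          }

          public void Update(float secondsElapsed, float speed)
          {
              float remaining = secondsElapsed * speed;

              while (remaining > 0f && nextIndex < coords.Length)
              {
                  Vector2 target = coords[nextIndex];
                  float toTarget = Vector2.Distance(Position, target);

                  if (remaining < toTarget)
                  {
                      // Stop partway along the current segment.
                      Position += Vector2.Normalize(target - Position) * remaining;
                      return;
                  }

                  // Reached (or passed) this coordinate: snap to it, keep the surplus
                  // distance, and continue along the next segment, which may point in
                  // a new direction.
                  Position = target;
                  remaining -= toTarget;
                  nextIndex++;
              }
          }
      }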

    Read the article

  • Android/Java AI agent framework/middleware

    - by corneliu
    I am looking for an AI agent framework to use as a starting point in an Android game I have to create for a university research project. It has been suggested to me to use JADE, but, as far as I can tell, it's not a suitable framework for games (at least for my game idea) because it runs in a split-execution mode and needs an always-active network connection to a main host. What I want is just a little something to give me a head start. I am willing to adjust the game's features to the framework because it's more of a mockup game, and the purpose is to compare the performance of a couple of agents in the game world. The game will be very simplistic, with a minimal UI that displays various stats about the characters in the game (so no graphics, no pathfinding). Thank you.

    Read the article

  • Tool for creating complex paths?

    - by TerryB
    I want to create some fairly complex predefined paths for my AI sprites to follow. I'll need to use curves, splines etc to get the effect I want. Is there a drawing tool out there that will allow me to draw such curves, "mesh" them by placing lots of points along them at some defined density and then output the coordinates of all of those points for me? I could write this tool myself but hopefully one of the drawing packages can do this? Cheers!

    Read the article

  • Help needed throwing a ball in AS3

    - by Opoe
    I'm working on a flash game, coding on the time line. What I'm trying to accomplish is the following: With the mouse you swing and throw/release a ball which bounces against the walls and eventualy comes to point where it lays still (like a real ball). I allmost had it working, but now the ball sticks to the mouse, in stead of being released, my question to you is: Can you help me make this work and explain to me what I did wrong? You can simply preview my code by making a movieclip named 'circle' on a 550x400 stage. stage.addEventListener(Event.ENTER_FRAME, circle_update); var previousPostionX:Number; var previousPostionY:Number; var throwSpeedX:Number; var throwSpeedY:Number; var isItDown:Boolean; var xSpeed:Number = 0; var ySpeed:Number = 0; var friction:Number = 0.96; var offsetX:Number = 0; var offsetY:Number = 0; var newY:Number = 0; var oldY:Number = 0; var newX:Number = 0; var oldX:Number = 0; var dragging:Boolean; circle.buttonMode = true; circle.addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler); circle.addEventListener(Event.ENTER_FRAME, throwcircle); circle.addEventListener(MouseEvent.MOUSE_DOWN, clicked); circle.addEventListener(MouseEvent.MOUSE_UP, released); function mouseDownHandler(e:MouseEvent):void { dragging = true; stage.addEventListener(MouseEvent.MOUSE_UP, mouseUpHandler); offsetX = mouseX - circle.x; offsetY = mouseY - circle.y; } function mouseUpHandler(e:MouseEvent):void { dragging = false; } function throwcircle(e:Event) { circle.x += xSpeed; circle.y += ySpeed; xSpeed *= friction; ySpeed *= friction; } function changeFriction(e:Event):void { friction = e.target.value; trace(e.target.value); } function circle_update(e:Event){ if ( dragging == true ) { circle.x = mouseX - offsetX; circle.y = mouseY - offsetY; } if(circle.x + (circle.width * 0.50) >= 550){ circle.x = 550 - circle.width * 0.50; } if(circle.x - (circle.width * 0.50) <= 0){ circle.x = circle.width * 0.50; } if(circle.y + (circle.width * 0.50) >= 400){ circle.y = 400 - circle.height * 0.50; } if(circle.y - (circle.width * 0.50) <= 0){ circle.y = circle.height * 0.50; } } function clicked(theEvent:Event) { isItDown =true; addEventListener(Event.ENTER_FRAME, updateView); } function released(theEvent:Event) { isItDown =false; } function updateView(theEvent:Event) { if (isItDown==true){ throwSpeedX = mouseX - previousPostionX; throwSpeedY = mouseY - previousPostionY; circle.x = mouseX; circle.y = mouseY; } else{ circle.x += throwSpeedX; circle.y += throwSpeedY; throwSpeedX *=0.9; throwSpeedY *=0.9; } previousPostionX= circle.x; previousPostionY= circle.y; }

    Read the article

  • High level project workflow

    - by user775060
    We are a small software company trying our hand at our second game. Since our first game's process was a living nightmare (we used a web-development workflow), I have decided to educate myself on how to manage a game project at a high level. How does your process work, from idea to launch? Preferably in situations where you have a team that needs to cooperate. I've seen these two links, which are useful in a way, but was wondering if there are better/more comprehensive ways to do this: http://www.goodcontroller.com/blog/?p=136 http://gogogic.wordpress.com/2009/02/09/symbol6-how-we-created-an-iphone-game/ All input would be infinitely appreciated.

    Read the article

  • How to display a hierarchical skill tree in PHP

    - by user3587554
    If I have skill data set up in a tree format (where earlier skills are prerequisites for later ones), how would I display it as a tree, using php? The parent would be on top and have 3 children. Each of these children can then have one more child so its parent would be directly above it. I'm having trouble figuring out how to add the root element in the middle of the top div, and the child of the children below each child of the root. I'm not looking for code, but an explanation of how to do it. My data in array form is this: Data: Array ( [1] => Array ( [id] => 1 [title] => Jutsu [description] => Skill that makes you awesomer at using ninjutsu [tiers] => 1 [prereq] => [image] => images/skills/jutsu.png [children] => Array ( [2] => Array ( [id] => 2 [title] => fireball [description] => Increase your damage with fire jutsu and weapons [tiers] => 5 [prereq] => 1 [image] => images/skills/fireball.png [children] => Array ( [5] => Array ( [id] => 5 [title] => pin point [description] => Increases jutsu accuracy [tiers] => 5 [prereq] => 2 [image] => images/skills/pinpoint.png ) ) ) [3] => Array ( [id] => 3 [title] => synergy [description] => Reduce the amount of chakra needed to use ninjutsu [tiers] => 1 [prereq] => 1 [image] => images/skills/synergy.png ) [4] => Array ( [id] => 4 [title] => ebb & flow [description] => Increase the damage of water jutsu, water weapons, and reduce the damage of jutsu and weapons that use water element [tiers] => 5 [prereq] => 1 [image] => images/skills/ebbandflow.png [children] => Array ( [6] => Array ( [id] => 6 [title] => IQ [description] => Decrease the time it takes to learn a jutsu [tiers] => 5 [prereq] => 4 [image] => images/skills/iq.png ) ) ) ) ) ) An example would be this demo image minus the hover stuff.

    Read the article
