Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes - or allows - players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear to be walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case-study essay for University in which I can tackle this issue in games, drawing on early exploits and on how techniques have changed, both to address such exploits and to aid gameplay itself. I have three current-day examples of where exploits still exist, but specifically targeting id Software clearly shows they massively improved their techniques between Quake 3 and Quake 4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use the right terms and ask the right questions within the right contexts. In practical application I know what it is and how to do it, but I don't yet have the benefit of level-design knowledge and its technical widgety knick-knack terms =) Many thanks in advance, AJ


  • Making AI jump on a spot effectively

    - by Pasquale Sada
    How do I calculate, in a 3D environment, the closest point from which an AI character can jump onto a platform?

    Setup: I have an initial velocity V(Vx, Vy, Vz) and a spot where the character stands still at S(Sx, Sy, Sz). What I'm trying to achieve is a successful jump onto a spot E(Ex, Ey, Ez) that you have clicked on (only a lower or higher spot, because I have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that makes him jump precisely onto a spot, but you need to declare an angle: the problem arises when the selected spot is straight above your head. It's pretty lame that the character hangs there and can't reach a thing that is 1 cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position; // get target direction
        float h = dir.y;                           // get height difference
        dir.y = 0;                                 // retain only the horizontal direction
        float dist = dir.magnitude;                // get horizontal distance
        float a = angle * Mathf.Deg2Rad;           // convert angle to radians
        dir.y = dist * Mathf.Tan(a);               // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                  // correct for small height differences
        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;

    I ended up using the lowest angle (20 degrees) and checking for collisions along the trajectory; if any is found, the angle is increased. Here is the code (to improve it, the check should perhaps stop at the highest point of the curve):

        Vector3 BallisticVel(Vector3 target, float angle) {
            Vector3 dir = target - transform.position; // get target direction
            float h = dir.y;                           // get height difference
            dir.y = 0;                                 // retain only the horizontal direction
            float dist = dir.magnitude;                // get horizontal distance
            float a = angle * Mathf.Deg2Rad;           // convert angle to radians
            dir.y = dist * Mathf.Tan(a);               // set dir to the elevation angle
            dist += h / Mathf.Tan(a);                  // correct for small height differences
            // calculate the velocity magnitude
            float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
            return vel * dir.normalized;
        }

        Vector3 TrajectoryPoint(Vector3 startingPosition, Vector3 startingVelocity, float n) {
            float t = 1f / 60f; // seconds per time step (1 / 60 would be integer division, i.e. zero)
            Vector3 stepVelocity = t * startingVelocity;   // m/s
            Vector3 stepGravity = t * t * Physics.gravity; // m/s/s
            return startingPosition + n * stepVelocity + 0.5f * (n * n + n) * stepGravity;
        }

        bool CheckTrajectory(Vector3 startingPosition, Vector3 target, float angle_jump) {
            Debug.Log("checking");
            if (angle_jump < 80f) {
                Debug.Log("if");
                Vector3 startingVelocity = BallisticVel(target, angle_jump);
                for (int i = 0; i < 180; i++) {
                    // Debug.Log(i);
                    Vector3 trajectoryPosition = TrajectoryPoint(startingPosition, startingVelocity, i);
                    if (Physics.Raycast(trajectoryPosition, Vector3.forward, safeDistance)) {
                        angle_jump += 10;
                        break; // restart the check with the new angle
                    }
                }
                JumpVelocity = BallisticVel(target, angle_jump); // final velocity for the (possibly adjusted) angle
                return true;
            }
            return false;
        }
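
    A quick sanity check on the formula in BallisticVel (my own derivation, not part of the post): it is the standard ballistic range equation solved for the launch speed. In LaTeX form:

        % Range of a projectile launched at speed v, elevation angle a, gravity g:
        dist = \frac{v^{2}\,\sin(2a)}{g}
        \quad\Longrightarrow\quad
        v = \sqrt{\frac{dist \cdot g}{\sin(2a)}}

    The line dist += h / Mathf.Tan(a) folds the height difference h into an equivalent flat-ground distance. That also explains the behaviour at steep angles: as a approaches 90 degrees, tan(a) blows up and sin(2a) goes to 0, so a target straight overhead cannot be represented by this formula at all, which matches the problem described above.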


  • What is the XACT API?

    - by EddieV223
    I wanted to use DirectMusic in my game, but it's not in the June 2010 SDK, so I thought I had to use DirectSound. Then I saw the XAudio2.h header in the SDK's include folder and found that XAudio2 is the replacement for DirectSound. Both are low-level. During my research I stumbled across the XACT API but can't find a good explanation of it. Is XACT to XAudio2 what DirectMusic was to DirectSound? By which I mean: is the XACT API a high-level, easier-to-use API for playing sounds that abstracts away the details of XAudio2? If not, what is it?


  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light pass to a texture which I will later apply to the scene, but I seem to calculate the light position incorrectly. I am working in view space. In the image above (a screenshot in the original post), I am outputting the attenuation of a point light which currently covers the whole screen. The light is at position (0, 10, 0), and I transform it to view space first:

        Vector4 pos;
        Vector4 tmp = new Vector4(light.Position, 1); // Transform light position for shader
        Vector4.Transform(ref tmp, ref Camera.ViewMatrix, out pos);
        shader.SendUniform("LightViewPosition", ref pos);

    Now, to me that does not look as it should: I think the white area should be at the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

        void main() {
            // default black color
            vec3 color = vec3(0);

            // Pixel coordinates on screen without depth
            vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

            // Get pixel position using depth from texture
            vec4 depthtexel = texture(DepthTexture, PixelCoordinates);
            float depthSample = unpack_depth(depthtexel);

            // Get pixel coordinates in camera space by multiplying the
            // screen-space coordinate by the inverse projection matrix
            vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

            // Undo the perspective calculations
            vec3 pixelPosition = (world.xyz / world.w) * 3;

            // How far the light should reach from its point of origin
            float lightReach = LightColor.a / 2;

            // Vector between light and pixel
            vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
            float lightDistance = length(lightDir);
            vec3 lightDirN = normalize(lightDir);

            // Discard pixels too far from the light source
            //if (lightReach < lightDistance) discard;

            // Get normal from texture
            vec3 normal = normalize((texture(NormalTexture, PixelCoordinates).xyz * 2) - 1);

            // Half vector between the light direction and the eye, used for the specular component
            vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

            // Dot product of normal and light direction
            float NdotL = dot(normal, lightDirN);
            float attenuation = pow(lightReach / lightDistance, LightFalloff);

            // If the pixel is lit by the light
            if (NdotL > 0) {
                // I have moved stuff from here to above so I can debug it.
                // Diffuse light color
                color += LightColor.rgb * NdotL * attenuation;
                // Specular light color
                color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
            }

            RT0 = vec4(color, 1);
            //RT0 = vec4(pixelPosition, 1);
            //RT0 = vec4(depthSample, depthSample, depthSample, 1);
            //RT0 = vec4(NdotL, NdotL, NdotL, 1);
            RT0 = vec4(attenuation, attenuation, attenuation, 1);
            //RT0 = vec4(lightReach, lightReach, lightReach, 1);
            //RT0 = depthtexel;
            //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
            //RT0 = vec4(lightDirN, 1);
            //RT0 = vec4(halfVector, 1);
            //RT0 = vec4(LightColor.xyz, 1);
            //RT0 = vec4(LightViewPosition.xyz / 100, 1);
            //RT0 = vec4(LightPosition.xyz, 1);
            //RT0 = vec4(normal, 1);
        }

    What am I doing wrong here?
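
    For reference, the reconstruction the shader performs can be written out as math (with R = RemapMatrix and P^{-1} = ImP); this is a restatement of the code above, not a fix:

        p_{clip} = R\,(x_{uv},\; y_{uv},\; d,\; 1)^{T},
        \qquad p = P^{-1}\,p_{clip},
        \qquad p_{view} = \frac{p_{xyz}}{p_{w}}

    and the visualized quantity is attenuation = (lightReach / lightDistance)^LightFalloff. For lightDir = LightViewPosition.xyz - pixelPosition to be meaningful, both operands must be expressed in the same view space; a scale applied to only one of them (such as the * 3 on pixelPosition) changes the computed distance.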


  • Best way to load rigid bodies from file

    - by Mel
    I'm trying to switch to Bullet for physics simulation. Let me just say first that I am very pleased with Bullet's accuracy and performance. After messing around with it for a bit, I'm now trying to load rigid bodies from files. Most of my models are in Blender, and with some searching I was able to export them in .bullet format. However, loading the files into Bullet doesn't look like an easy task. I've come across this page that points me to a sample application that loads .bullet files, but it goes on to say that this loader is just a starting point. Is there any open source library out there that will let me load rigid bodies from a file? I don't really want to spend that much time trying to create my own loader.


  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long), which can very quickly escalate the size of my final game. What's the optimal way to encode/compress speech files to keep the size minimal without introducing audio artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together, because I'm not sure which factor (or combination of the two) will give me the best results. Edit: I need this to run on Silverlight and Android, so I'm presumably stuck with MP3 as my only option (other than uncompressed WAV files).


  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2D game where a ship flies around in space and is attracted to massive objects. The ship's velocity is stored in a vector, and acceleration is applied to it every frame as appropriate, given Newton's law of universal gravitation. The point masses don't move (there's only one right now), so I would expect an elliptical orbit. Instead, I see this (screenshot in the original post). I've tried nearly circular orbits, and I've tried making the masses vastly different (by a factor of a million), but I always get this rotated orbit. Here's some D code, for context:

        void accelerate(Vector delta) {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well) {
            // f = (m1 * m2) / (r ** 2)
            // a = f / m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass / rSquared;
            accelerate(delta * force * mass);
        }
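
    For reference (my note, not the poster's code): in vector form the inverse-square acceleration uses a unit direction vector,

        \vec{a} \;=\; G\, m_{well}\, \frac{\vec{r}_{well} - \vec{r}_{ship}}{\lVert \vec{r}_{well} - \vec{r}_{ship} \rVert^{3}}

    Multiplying the raw delta (whose length is r) by well.mass / rSquared gives an acceleration of magnitude well.mass / r, i.e. an effective 1/r force law. By Bertrand's theorem, only inverse-square and linear (Hooke) central forces give closed, non-precessing bounded orbits, so a 1/r law produces exactly the kind of slowly rotating ellipse described; plain (non-symplectic) Euler integration adds further precession and energy drift on top.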


  • Cocos2d Tiled Dynamic Object Layer

    - by Rodrigo Camargo
    I'm trying to develop a cocos2d tile-based game using a sort of 'dynamic' object layer. What I want to do is: after the tiled map is loaded, the user can drag something onto the map, and that becomes an event when the 'hero' passes over it. I know how to build an object layer in Tiled, but it seems that is for fixed positions, and what I want is a dynamic action position based on what the user selects. For instance, the user can drag a rock onto a tile, and when the character hits that rock he may die, or something. I'm a little lost about how to make it work. Do you have any idea of what I should use or what I should look for? Thanks in advance!


  • How to create a reasonably sized urban area manually but efficiently

    - by Overv
    I have a game concept that only really works in an urban area of reasonable scale and diversity. In terms of what it should look like, think GTA; in terms of size, think more like a small neighbourhood with residents and a few local shops, perhaps a supermarket. I'm mostly experienced in programming and not at all with modelling, texturing or drawing, but I've found that SketchUp allows me to design interesting-looking buildings, which I model after real-world buildings in my own neighbourhood. Designing these buildings and other objects can take from a few tens of minutes to a few hours each. My question is: what is the best approach for a one-man army like me, who can at least model buildings, to create an interesting city environment in a reasonable amount of time? My game will not be based on procedural generation; the environment will actually be modelled, like GTA cities.


  • Water Simulation in LIBGDX [on hold]

    - by Noah Huppert
    I am doing some R&D for a game and am now tackling the topic of water.

    The goal:

    - Make water that can flow, i.e. you can have an origin point that water shoots out from, or a downhill slope.
    - Make it so water splashes, so that when an object hits the water there is a splash.
    - In other words: an actual physics water sim.

    The current way I know how to do it:

    - I know how to create a shader that makes an object look like water by making waves.
    - Combined with that, you can check whether an object is colliding and apply an upwards force to simulate buoyancy.

    What is wrong with that way:

    - The water does not flow.
    - No splashes.

    Possible solution:

    - Have particles that are fairly large and interact with each other to simulate water.

    Possible drawback:

    - Performance.

    Question: Is there a better way to do water, or is using particles as described the only way? (A sketch of the particle idea follows below.)
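
    A minimal sketch of the large-particle idea (in C# purely for illustration, not libGDX; every name is invented here, and real particle-particle interaction forces are omitted):

        using System;
        using System.Collections.Generic;

        struct Particle { public float X, Y, Vx, Vy; }

        class ParticleWater {
            const float Gravity = -9.8f;
            static readonly Random Rng = new Random();
            readonly List<Particle> particles = new List<Particle>();

            // Integrate gravity each frame; "flow" is just particles falling and sliding.
            public void Update(float dt) {
                for (int i = 0; i < particles.Count; i++) {
                    Particle p = particles[i];
                    p.Vy += Gravity * dt;
                    p.X += p.Vx * dt;
                    p.Y += p.Vy * dt;
                    particles[i] = p;
                }
            }

            // Called when a body hits the surface: spawn a burst of droplets upward.
            public void Splash(float x, float y, int count) {
                for (int i = 0; i < count; i++) {
                    float angle = (float)(Rng.NextDouble() * Math.PI); // upward half-circle
                    float speed = (float)(1.0 + Rng.NextDouble() * 2.0);
                    particles.Add(new Particle {
                        X = x, Y = y,
                        Vx = (float)Math.Cos(angle) * speed,
                        Vy = (float)Math.Sin(angle) * speed
                    });
                }
            }
        }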


  • How difficult is it to add vibration feedback to an open source driving game?

    - by Jonathan Day
    Hi, I'm looking to use SuperTuxKart as a basis for a PhD research project. A key requirement for the game is to provide vibration feedback through the controller (obviously dependent on the controller itself). I don't believe the game currently includes this feature, and I'm trying to get a feel for how big a challenge it would be to add. My background is as a J2EE and PHP developer/architect, so I don't know C++ as such, but I am prepared to give it a crack if there are resources and guides to assist and it's not a herculean task. Alternatively, if you know of any open source games that do include vibration feedback, please feel free to let me know! Preferably the game would be of a style where the player has to navigate a character (or the character's vehicle) around a repeatable course/map. TIA, JD


  • Rotate player based on joystick

    - by pengume
    Hey everyone, I have this game that I am making in Android, and I have a touch-screen joystick that moves the player around based on the joystick's position. I can't figure out how to also get the player to rotate at the same angle as the joystick, so that when the joystick is to the left, the player's bitmap is rotated to the left as well. Maybe someone here has some sample code I could look at. Here is the joystick class that I am using:

        public class GameControls implements OnTouchListener {

            public float initx = DroidzActivity.screenWidth - 45;  // 255; // 320 og 425
            public float inity = DroidzActivity.screenHeight - 45; // 425; // 480 og 267
            public Point _touchingPoint = new Point(
                    DroidzActivity.screenWidth - 45, DroidzActivity.screenHeight - 45);
            public Point _pointerPosition = new Point(
                    DroidzActivity.screenWidth - 100, DroidzActivity.screenHeight - 100); // ogx 220 ogy 150
            private Boolean _dragging = false;
            private boolean attackMode = false;

            @Override
            public boolean onTouch(View v, MotionEvent event) {
                update(event);
                return true;
            }

            private MotionEvent lastEvent;
            public boolean ControlDragged;
            private static double angle;

            public void update(MotionEvent event) {
                if (event == null && lastEvent == null) {
                    return;
                } else if (event == null && lastEvent != null) {
                    event = lastEvent;
                } else {
                    lastEvent = event;
                }
                // drag drop
                if (event.getAction() == MotionEvent.ACTION_DOWN) {
                    if ((int) event.getX() > 0 && (int) event.getX() < 50
                            && (int) event.getY() > DroidzActivity.screenHeight - 160
                            && (int) event.getY() < DroidzActivity.screenHeight - 0) {
                        setAttackMode(true);
                    } else {
                        _dragging = true;
                    }
                } else if (event.getAction() == MotionEvent.ACTION_UP) {
                    if (isAttackMode()) {
                        setAttackMode(false);
                    }
                    _dragging = false;
                }
                if (_dragging) {
                    ControlDragged = true;
                    // get the pos
                    _touchingPoint.x = (int) event.getX();
                    _touchingPoint.y = (int) event.getY();
                    // Log.d("GameControls", "x = " + _touchingPoint.x + " y = " + _touchingPoint.y);
                    // bound to a box
                    if (_touchingPoint.x < DroidzActivity.screenWidth - 75) { // og 400
                        _touchingPoint.x = DroidzActivity.screenWidth - 75;
                    }
                    if (_touchingPoint.x > DroidzActivity.screenWidth - 15) { // og 450
                        _touchingPoint.x = DroidzActivity.screenWidth - 15;
                    }
                    if (_touchingPoint.y < DroidzActivity.screenHeight - 75) { // og 240
                        _touchingPoint.y = DroidzActivity.screenHeight - 75;
                    }
                    if (_touchingPoint.y > DroidzActivity.screenHeight - 15) { // og 290
                        _touchingPoint.y = DroidzActivity.screenHeight - 15;
                    }
                    // get the angle
                    setAngle(Math.atan2(_touchingPoint.y - inity, _touchingPoint.x - initx)
                            / (Math.PI / 180));
                    // Move the ninja in proportion to how far
                    // the joystick is dragged from its center
                    _pointerPosition.y += Math.sin(getAngle() * (Math.PI / 180))
                            * (_touchingPoint.x / 70); // og 180 70
                    _pointerPosition.x += Math.cos(getAngle() * (Math.PI / 180))
                            * (_touchingPoint.x / 70);
                    // make the pointer wrap around the screen edges
                    if (_pointerPosition.x > DroidzActivity.screenWidth) { _pointerPosition.x = 0; }
                    if (_pointerPosition.x < 0) { _pointerPosition.x = DroidzActivity.screenWidth; }
                    if (_pointerPosition.y > DroidzActivity.screenHeight) { _pointerPosition.y = 0; }
                    if (_pointerPosition.y < 0) { _pointerPosition.y = DroidzActivity.screenHeight; }
                } else if (!_dragging) {
                    ControlDragged = false;
                    // Snap back to center when the joystick is released
                    _touchingPoint.x = (int) initx;
                    _touchingPoint.y = (int) inity;
                    // shaft.alpha = 0;
                }
            }

            public void setAttackMode(boolean attackMode) { this.attackMode = attackMode; }
            public boolean isAttackMode() { return attackMode; }
            public void setAngle(double angle) { this.angle = angle; }
            public static double getAngle() { return angle; }
        }

    I should also note that the player has animations for when he is moving or attacking.


  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: implement a compute shader to calculate the lengths of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers, and I had no problems with them. From what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND flag, and the buffers have both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags). The problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. The data is in the buffer, since if I change the UAV to a ShaderResourceView and the HLSL side to a StructuredBuffer, it works. I don't know what I am missing:

    - Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I am currently doing)?
    - Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow?
    - Maybe I am missing some step?

    This is the declaration of the buffers in the compute shader:

        struct Data { float3 v; };
        struct Result { float l; };

        ConsumeStructuredBuffer<Data> gInput;
        AppendStructuredBuffer<Result> gOutput;

    And here the creation of the buffer and UAV for the input data:

        D3D11_BUFFER_DESC inputDesc;
        inputDesc.Usage = D3D11_USAGE_DEFAULT;
        inputDesc.ByteWidth = sizeof(Data) * mNumElements;
        inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
        inputDesc.CPUAccessFlags = 0;
        inputDesc.StructureByteStride = sizeof(Data);
        inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

        D3D11_SUBRESOURCE_DATA vinitData;
        vinitData.pSysMem = &data[0];
        HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer));

        D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
        uavDesc.Format = DXGI_FORMAT_UNKNOWN;
        uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
        uavDesc.Buffer.FirstElement = 0;
        uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
        uavDesc.Buffer.NumElements = mNumElements;
        md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV);

    The initial data is an array of Data structs, which contain an XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework:

        ID3DX11EffectUnorderedAccessViewVariable* Input =
            mFX->GetVariableByName("gInput")->AsUnorderedAccessView();
        Input->SetUnorderedAccessView(uav); // uav is mInputUAV

    Any ideas? Thank you.


  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server.

    Situation:

    - Multiple clients are connected to the server, which does the hit detection / physics.
    - Clients have different latency on their connections to the server, ranging from 50 ms to 500 ms.
    - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds).
    - It is important that they get to shoot these weapons close to the cool-down time: if some clients manage to shoot sooner than others (either because they are "early" or the others are "late"), they gain a significant advantage.
    - I need to show the time remaining for reload on the player's screen.
    - Clients can have clocks which are flat-out wrong (bad timezones, etc.).

    What I'm currently doing to deal with latency:

    - The client collects server-side state in a history, tagged with server timestamps.
    - The client assesses its time difference with server time:

        behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2

    - The client renders all state received from the server 200 ms behind its current time, adjusted by what it believes its time difference with server time is (whether due to wrong clocks or lag). If it has server states on both sides of that calculated time, it (mostly LERP) interpolates between them; if not, it (LERP) extrapolates.
    - No other client-side prediction of movement (e.g. to make the player's vehicle seem more responsive) is done so far, but it may be added later.

    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world-state update. The client then adjusts that for the clock difference and can thus estimate when the reload will finish in client time (perhaps also allowing for the latency that the shoot message from client to server will incur?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, it sends the shoot event. The server gets the shoot event and takes the time of the shot to be the server time at which it was received. It then discards the event if that is nowhere near reload time, executes it immediately if it is past reload time, and holds it for a few physics cycles until the reload is done if it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g. whether players with lower ping will get better reload rates) and whether there are more elegant solutions to this problem.
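
    A compact sketch of the client-side bookkeeping described above (in C#, although the question's snippets are Java; every name here is illustrative):

        using System;

        class ReloadClock {
            // Running average of (local time - server time), as in the question.
            private long behindServerTimeNs;
            private long reloadDoneServerNs; // server-announced reload-finish time

            public void OnServerState(long serverTimeNs, long reloadDoneNs) {
                long now = NowNs();
                behindServerTimeNs = (behindServerTimeNs + (now - serverTimeNs)) / 2;
                reloadDoneServerNs = reloadDoneNs;
            }

            // Remaining cool-down in seconds, for the HUD.
            public double RemainingSeconds() {
                long nowServerNs = NowNs() - behindServerTimeNs; // estimated server clock
                return Math.Max(0.0, (reloadDoneServerNs - nowServerNs) / 1e9);
            }

            private static long NowNs() => DateTime.UtcNow.Ticks * 100; // 1 tick = 100 ns
        }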


  • Isometric layer moving inside map

    - by gronzzz
    I've created an isometric map and am now trying to limit layer movement. The main idea is that I have left-bottom, right-bottom, left-top and right-top points that the camera can not move outside of, so the player will not see the map out of bounds. But I can not work out an algorithm for how to do that. Here is my layer scale/move code:

        - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            _isTouchBegin = YES;
        }

        - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            NSArray *allTouches = [[event allTouches] allObjects];
            UITouch *touchOne = [allTouches objectAtIndex:0];
            CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
            CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];

            // Scaling
            if ([allTouches count] == 2) {
                _isDragging = NO;
                UITouch *touchTwo = [allTouches objectAtIndex:1];
                CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
                CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];

                CGFloat currentDistance = sqrt(
                    pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
                    pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
                CGFloat previousDistance = sqrt(
                    pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
                    pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
                CGFloat distanceDelta = currentDistance - previousDistance;

                CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
                pinchCenter = [self convertToNodeSpace:pinchCenter];

                CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER);
                if ([self predictionScaleInBounds:predictionScale]) {
                    [self scale:predictionScale scaleCenter:pinchCenter];
                }
            } else {
                // Dragging
                _isDragging = YES;
                CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne];
                CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne];
                CGPoint delta = ccpSub(current, previous);
                self.position = ccpAdd(self.position, delta);
            }
        }

        - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
            _isDragging = NO;
            _isTouchBegin = NO;
            // Check if I need to bounce
            _touchLoc = [touch locationInNode:self];
        }

        #pragma mark - Update

        - (void)update:(CCTime)delta {
            CGPoint position = self.position;
            float scale = self.scale;
            static float friction = 0.92f; // 0.96f;

            if (_isDragging && !_isScaleBounce) {
                _velocity = ccp((position.x - _lastPos.x) / 2, (position.y - _lastPos.y) / 2);
                _lastPos = position;
            } else {
                _velocity = ccp(_velocity.x * friction, _velocity.y * friction);
                position = ccpAdd(position, _velocity);
                self.position = position;
            }

            if (_isScaleBounce && !_isTouchBegin) {
                float min = fabsf(self.scale - MIN_SCALE);
                float max = fabsf(self.scale - MAX_SCALE);
                int dif = max > min ? 1 : -1;
                if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) ||
                    (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) {
                    CGFloat newSscale = scale + dif * (delta * friction);
                    [self scale:newSscale scaleCenter:_touchLoc];
                } else {
                    _isScaleBounce = NO;
                }
            }
        }


  • Camera movement and threshold not working

    - by irish guy mcconagheh
    I have a platformer in progress. Part of this is a camera which I only want to move when the character moves out of a certain threshold. To try to accomplish this I have the following if statement (in Unity/C#):

        if (((Mathf.Abs(target.transform.position.x)) - (Mathf.Abs(transform.position.x))) > thres) {
            x = moveTo(transform.position.x, target.position.x, trackSpeed);
        }

    In pseudocode it means:

        if ((absolute value of player x) - (absolute value of camera x) is greater than the threshold) {
            move
        }

    However, this does not seem to work correctly. It appears to work the first couple of times the threshold is reached, but after that the distance between the camera and the player has to increase every time for the camera to move. I do not believe the movement of the camera is the problem, but the code for it is as follows:

        private float moveTo(float n, float target, float accel) {
            if (n == target) {
                return n;
            } else {
                float dir = Mathf.Sign(target - n);
                n += accel * Time.deltaTime * dir;
                return (dir == Mathf.Sign(target - n)) ? n : target;
            }
        }
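
    One property worth keeping in mind when reading the if statement above (a general algebra note, not from the post): a difference of absolute values is not the absolute value of the difference,

        \lvert a \rvert - \lvert b \rvert \;\neq\; \lvert a - b \rvert \quad \text{in general; e.g. } a = 5,\; b = -3:\;\; |a| - |b| = 2 \;\text{ but }\; |a - b| = 8

    so a threshold test on the distance between two x coordinates is normally written as |target.x - camera.x| > thres.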


  • Delaying a Foreach loop half a second

    - by Sigh-AniDe
    I have created a game that has a ghost that mimics the movement of the player after 10 seconds. The movements are stored in a list, and I use a foreach loop to go through the commands. The ghost mimics the movements, but it does them way too fast; within a split second of spawn time it catches up to my current movement. How do I slow down the foreach so that it only executes a command every half a second? I don't know how else to do it. Please help; this is what I tried (the foreach runs inside the update method):

        DateTime dt = DateTime.Now;
        foreach (string commandDirection in ghostMovements) {
            int mapX = (int)(ghostPostition.X / scalingFactor);
            int mapY = (int)(ghostPostition.Y / scalingFactor);

            // If dt is the same as the current time
            if (dt == DateTime.Now) {
                if (commandDirection == "left") {
                    switch (ghostDirection) {
                        case ghostFacingUp:
                            angle = 1.6f;
                            ghostDirection = ghostFacingRight;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500); // add half a second to dt
                            break;
                        case ghostFacingRight:
                            angle = 3.15f;
                            ghostDirection = ghostFacingDown;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                        case ghostFacingDown:
                            angle = -1.6f;
                            ghostDirection = ghostFacingLeft;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                        case ghostFacingLeft:
                            angle = 0.0f;
                            ghostDirection = ghostFacingUp;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                    }
                }
            }
        }
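
    A sketch of the usual approach (my suggestion, not from the post, with hypothetical names): rather than comparing clocks inside the loop, keep an index into the command list and advance it only when enough game time has accumulated.

        using System;
        using System.Collections.Generic;

        class GhostReplay {
            private readonly List<string> movements;
            private int next;           // index of the next command to replay
            private double accumulator; // seconds since the last replayed command
            private const double StepSeconds = 0.5;

            public GhostReplay(List<string> recorded) { movements = recorded; }

            // Call once per frame with the frame's elapsed time.
            public void Update(double deltaSeconds) {
                accumulator += deltaSeconds;
                while (accumulator >= StepSeconds && next < movements.Count) {
                    Execute(movements[next++]); // exactly one command per half second
                    accumulator -= StepSeconds;
                }
            }

            private void Execute(string commandDirection) {
                // turn the ghost, set the angle, etc.
            }
        }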


  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMORPG. Now my biggest problem here is "web based" vs "MMORPG". I did some research about time-keeping systems, and I'm totally confused as to how exactly something as real-time as an MMORPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting?

    EDIT: Take a battle, for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked to - this is very well known and simply explained). So I thought maybe I could change the whole time-keeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity".

    EDIT 2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only, even for the most trivial of multi-player games, let alone an RPG. So sure, there will be a (probably complex) server-side component to it (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits the game mechanics' abilities (like how real-time or turn-based it can be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server can not inform you if anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional technology (a totally different animal) like Flash, WebSockets, HTML5, etc. So maybe the question is: is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMORPG? Can't explain it any clearer, sorry :D


  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D tile-based game and a map-maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I've already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position, like this:

        private HashMap<Integer, Integer>[][]
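
    For concreteness, a minimal run-length encoder/decoder for one row of tile ids (a C# sketch of the general technique, not the poster's Java structures):

        using System.Collections.Generic;

        static class RunLength {
            // Encode a row of tile ids as (id, runLength) pairs.
            public static List<(int Id, int Count)> Encode(int[] row) {
                var runs = new List<(int, int)>();
                for (int i = 0; i < row.Length; ) {
                    int j = i;
                    while (j < row.Length && row[j] == row[i]) j++;
                    runs.Add((row[i], j - i));
                    i = j;
                }
                return runs;
            }

            // Expand the runs back into a row of the given length.
            public static int[] Decode(List<(int Id, int Count)> runs, int length) {
                var row = new int[length];
                int pos = 0;
                foreach (var (id, count) in runs)
                    for (int k = 0; k < count; k++) row[pos++] = id;
                return row;
            }
        }

    Editing a tile then means re-encoding only the one row (128 entries here) that changed, which is a small, constant amount of work per edit.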


  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is: unproject the mouse into the scene to get the ray:

        Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = p2 - p1;
        dir.Normalize();
        Ray ray = new Ray(p1, dir);

    Then get the Y-intercept by using algebra:

        float t = -ray.Position.Y / ray.Direction.Y;
        Vector3 p = ray.Position + t * ray.Direction;

    The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down; but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements by doing some computations at double precision, and possibly by abusing the fact that floating-point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can make to improve this.

    EDIT: A little snooping around in .NET Reflector on SlimDX.dll:

        public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height,
                                        float minZ, float maxZ, Matrix worldViewProjection)
        {
            Vector3 coordinate = new Vector3();
            Matrix result = new Matrix();
            Matrix.Invert(ref worldViewProjection, out result);
            coordinate.X = (float)((((vector.X - x) / ((double)width)) * 2.0) - 1.0);
            coordinate.Y = (float)-((((vector.Y - y) / ((double)height)) * 2.0) - 1.0);
            coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
            TransformCoordinate(ref coordinate, ref result, out coordinate);
            return coordinate;
        }

        // ...

        public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
        {
            Vector3 vector;
            Vector4 vector2 = new Vector4 {
                X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
                Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
                Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
            };
            float num = (float)(1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
            vector2.W = num;
            vector.X = vector2.X * num;
            vector.Y = vector2.Y * num;
            vector.Z = vector2.Z * num;
            result = vector;
        }

    ...which seems to be a pretty standard way of unprojecting a point through a projection matrix, though it does introduce another possible point of instability. Still, I'd like to stick with the SlimDX Unproject routine rather than write my own unless it's really necessary.
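
    One incremental change consistent with the "do some computations at double precision" idea above (my sketch, assuming a SlimDX-style Vector3): perform the plane intersection in double and round once at the end.

        // Double-precision intersection of an already-unprojected ray with the Y = 0 plane.
        static Vector3 IntersectY0(Vector3 origin, Vector3 direction) {
            double t = -(double)origin.Y / (double)direction.Y; // ray parameter at Y = 0
            return new Vector3(
                (float)(origin.X + t * direction.X),
                0f, // exactly on the plane by construction
                (float)(origin.Z + t * direction.Z));
        }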


  • 2D isometric picking

    - by Bikonja
    I'm trying to implement picking in my isometric 2D game, but I am failing. First of all, I've searched for a solution and came to several different equations, and even a solution using matrices. I tried implementing every single one, but none of them seems to work for me. The idea is that I have an array of tiles, with each tile having its x and y coordinates specified (in this simplified example, by its position in the array). I'm thinking that tile (0, 0) should be on the left, (max, 0) on top, (0, max) on the bottom and (max, max) on the right. I came up with this loop for drawing, which googling seems to have verified as the correct solution, as has the rendered scene (of course it could still be wrong; also, forgive the messy names and stuff, it's just WIP proof-of-concept code):

        // Draw code
        int col = 0;
        int row = 0;
        for (int i = 0; i < nrOfTiles; ++i) {
            // XOffset and YOffset are currently hardcoded values, but will represent
            // camera offset combined with HUD offset
            Point tile = IsoToScreen(col, row, TileWidth / 2, TileHeight / 2, XOffset, YOffset);
            int x = tile.X;
            int y = tile.Y;
            spriteBatch.Draw(_tiles[i], new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White);
            col++;
            if (col >= Columns) { // Columns is the number of tiles in a single row
                col = 0;
                row++;
            }
        }

        // Get selection overlay location (removed check if selection exists, for simplicity's sake)
        Point tile = IsoToScreen(_selectedTile.X, _selectedTile.Y, TileWidth / 2, TileHeight / 2, XOffset, YOffset);
        spriteBatch.Draw(_selectionTexture, new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White);
        // End of draw code

        public Point IsoToScreen(int isoX, int isoY, int widthHalf, int heightHalf, int xOffset, int yOffset) {
            Point newPoint = new Point();
            newPoint.X = widthHalf * (isoX + isoY) + xOffset;
            newPoint.Y = heightHalf * (-isoX + isoY) + yOffset;
            return newPoint;
        }

    This code draws the tiles correctly. Now I wanted to do picking to select the tiles. For this, I tried coming up with equations of my own (including reversing the drawing equation) and I tried multiple solutions I found on the internet, and none of them worked. Trying out lots of solutions, I came upon one that didn't work but where it seemed like an axis was just inverted. I fiddled around with the equations and somehow managed to get it to actually work (though I have no idea why it works), but while it's close, it still isn't right. I'm not really sure how to describe the behaviour, but it changes the selection in the wrong places while being fairly close (sometimes spot on, sometimes a tile off; I believe never more than the adjacent tile off). This is the code I have for getting which tile coordinates are selected:

        public Point? ScreenToIso(int screenX, int screenY, int tileHeight, int offsetX, int offsetY) {
            Point? newPoint = null;
            int nX = -1;
            int nY = -1;
            int tX = screenX - offsetX;
            int tY = screenY - offsetY;
            nX = -(tY - tX / 2) / tileHeight;
            nY = (tY + tX / 2) / tileHeight;
            newPoint = new Point(nX, nY);
            return newPoint;
        }

    I have no idea why this code is so close, especially considering it doesn't even use the tile width, and all my attempts to write an equation myself or use a solution I googled failed. Also, I don't think this code accounts for the area outside the "tile" (the transparent part of the tile image), for which I intend to add a color map; but even if that's true, it's not the problem, as the selection sometimes switches at approximately 25% or 75% of the width or height.
    I'm thinking I've stumbled onto a wrong path and need to backtrack, but at this point I'm not sure what to do, so I hope someone can shed some light on my error or point me to the right path. It may be worth mentioning that my goal is not only to pick the tile: each main tile will be divided into 5x5 smaller tiles, which won't be drawn separately from the whole main tile but will need to be picked out. I think a color map of a main tile, with different colors for different coordinates within the main tile, should take care of that, though, which would fall within using a color map for the main tile (for the transparent parts of the tile, meaning parts that possibly belong to other tiles).
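
    For reference, inverting the poster's IsoToScreen algebraically (my derivation; w and h are the widthHalf and heightHalf arguments):

        u = \frac{X - x_{offset}}{w}, \qquad v = \frac{Y - y_{offset}}{h},
        \qquad iso_{X} = \frac{u - v}{2}, \qquad iso_{Y} = \frac{u + v}{2}

    Both w and h appear in the exact inverse, which is consistent with the observation above that a ScreenToIso using only tileHeight can be close yet never quite right.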


  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are more techniques (like using two hemispheres or camera-space shadow mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow-map texture; there is a "glitchy shadow mapping" screenshot in the original post). So for now, this is how I understand the usage of cubemaps in shadow mapping:

    Set up a framebuffer (in the case of cubemaps, 6 framebuffers - 6 instead of 1 because every use of framebufferTexture2D slows down execution, which is nicely described elsewhere) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                          this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                    gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                                       gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to shaders later, or whether there is a way to do it with just one view matrix).

    Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt(light.position.add(cubeMapDirections[face]));
            scene.draw(shadow.program);
        }

    In a second pass, calculate the projection of the current vertex using the light's projection and view matrix. Now I don't know if I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 range.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    In the fragment shader, calculate the distance between the current vertex and the light position and check whether it's deeper than the depth information read from the earlier-rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing at the current vertex?
    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?
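
    On the lookup-vector question (standard cubemap shadow practice, not specific to this code): the sampling direction is the fragment's position relative to the light, the same vector whose length was stored when rendering the map:

        \vec{d} = \vec{p}_{frag}^{\,world} - \vec{p}_{light}^{\,world},
        \qquad \text{shadowed} \iff \lVert \vec{d} \rVert \cdot k \;>\; \mathrm{unpack}\big(\mathrm{textureCube}(map,\ \vec{d})\big)

    where k is the linear-depth constant already used when writing the map. Note that the snippet above samples with vWorldVertex.xyz, an absolute position, rather than with this difference vector.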


  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There doesn't seem to be any error in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection, read off the matrix above, are

        z' = -\frac{zFar + zNear}{zFar - zNear}\, z \;-\; \frac{2 \cdot zFar \cdot zNear}{zFar - zNear}, \qquad w' = -z

    and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives

        z = \frac{2 \cdot zFar \cdot zNear}{(zFar - zNear)\, d - (zFar + zNear)}

    Now, the problem is I don't get the correct position (I have compared the one reconstructed with a rendered position). I then tried using the respective formula I get by doing the same for another matrix (the matrix and the resulting formula were images in the original post). For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?


  • Ways to program glitch-style effects

    - by okkk
    Most tutorials for generating glitch art usually have to do with some form of manipulation of the compression of files. Should my goal instead be to replicate the look of these glitches in shaders, or is it somehow possible to authentically generate the compression artifacts in real time? Example: the effect I'm particularly interested in is referred to as datamoshing. It does "things" using the P-frames of a video (frames that, I think, store just the change in pixels). I feel like I need a better understanding of both graphics programming and data compression.


  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, and inside Systems I also hold lists of all the entities each System is required to process. How do I maintain such a list of entities in the desired rendering order, e.g. for a simple 2D RenderingSystem? I saw this discussion, and what it suggests is to do something like Z-ordering: what I would probably do is store a "layer" int in the DrawableComponent and then, inside the RenderingSystem, sort entities by that "layer" whenever the entity list for the RenderingSystem changes. They also say you could just delete and recreate the entity whenever you want it on top, but that seems too inflexible to me. How is this problem usually solved?
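
    A sketch of the layer-sort idea described above (C#, with illustrative names):

        using System.Collections.Generic;

        class DrawableComponent { public int Layer; /* texture, position, ... */ }

        class RenderingSystem {
            private readonly List<DrawableComponent> drawables = new List<DrawableComponent>();

            // Re-sort only when membership (or a layer value) changes, so the
            // per-frame draw loop stays a plain iteration.
            public void Add(DrawableComponent d) {
                drawables.Add(d);
                drawables.Sort((a, b) => a.Layer.CompareTo(b.Layer)); // List.Sort is unstable within equal layers
            }

            public void Draw() {
                foreach (var d in drawables) {
                    // render back to front: lower layers first
                }
            }
        }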

