Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Particle effect after the bullet

    - by Siddharth
    In my game, I fire a bullet from the gun and generate particles behind it so that it looks like a fire trail following the bullet. My problem is that the positions I get from the bullet are spaced far apart: the bullet's speed is high, so the coordinates at which I generate particles end up far from each other, giving a dotted effect. I want a continuous flow of particles behind the bullet, so please provide any guidance for my problem.
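
    A minimal sketch of one common fix, interpolating spawn positions along the path the bullet covered during the frame so the trail stays continuous at any speed. Vector2 is assumed to be an XNA-style vector type, and SpawnParticle() is a hypothetical stand-in for whatever the particle system's emit call is.

        // Emit particles along the segment the bullet travelled this frame, so a fast
        // bullet still leaves an unbroken trail instead of widely spaced dots.
        void EmitTrail(Vector2 previousPosition, Vector2 currentPosition, float spacing)
        {
            Vector2 delta = currentPosition - previousPosition;
            float distance = delta.Length();
            if (distance < 0.0001f)
            {
                SpawnParticle(currentPosition);   // bullet barely moved: one particle is enough
                return;
            }
            Vector2 direction = delta / distance; // unit vector along the bullet's path
            for (float d = 0f; d <= distance; d += spacing)
            {
                SpawnParticle(previousPosition + direction * d);
            }
        }

    Calling this once per update with the bullet's position from the previous frame and its current position fills in the gaps that a single spawn-at-current-position call leaves behind.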

    Read the article

  • Delaying a Foreach loop half a second

    - by Sigh-AniDe
    I have created a game that has a ghost that mimics the movement of the player after 10 seconds. The movements are stored in a list and i use a foreach loop to go through the commands. The ghost mimics the movements but it does the movements way too fast, in split second from spawn time it catches up to my current movement. How do i slow down the foreach so that it only does a command every half a second? I don't know how else to do it. Please help this is what i tried : The foreach runs inside the update method DateTime dt = DateTime.Now; foreach ( string commandDirection in ghostMovements ) { int mapX = ( int )( ghostPostition.X / scalingFactor ); int mapY = ( int )( ghostPostition.Y / scalingFactor ); // If the dt is the same as current time if ( dt == DateTime.Now ) { if ( commandDirection == "left" ) { switch ( ghostDirection ) { case ghostFacingUp: angle = 1.6f; ghostDirection = ghostFacingRight; Program.form.direction = ""; dt.AddMilliseconds( 500 );// add half a second to dt break; case ghostFacingRight: angle = 3.15f; ghostDirection = ghostFacingDown; Program.form.direction = ""; dt.AddMilliseconds( 500 ); break; case ghostFacingDown: angle = -1.6f; ghostDirection = ghostFacingLeft; Program.form.direction = ""; dt.AddMilliseconds( 500 ); break; case ghostFacingLeft: angle = 0.0f; ghostDirection = ghostFacingUp; Program.form.direction = ""; dt.AddMilliseconds( 500 ); break; } } } }
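
    One way to do this, sketched below under the assumption of an XNA-style Update(GameTime) loop: accumulate elapsed time and dequeue a single command every half second instead of walking the whole list each frame. ghostMovements is assumed to be a Queue<string> here, and ApplyGhostCommand() is a placeholder for the switch statement in the original code. Note also that DateTime.AddMilliseconds returns a new DateTime rather than modifying dt in place, which is one reason the original check never behaves as intended.

        // Process at most one ghost command per half second, driven by elapsed game time.
        float commandTimer = 0f;
        const float CommandInterval = 0.5f;

        void UpdateGhost(GameTime gameTime)
        {
            commandTimer += (float)gameTime.ElapsedGameTime.TotalSeconds;
            while (commandTimer >= CommandInterval && ghostMovements.Count > 0)
            {
                commandTimer -= CommandInterval;
                string commandDirection = ghostMovements.Dequeue();
                ApplyGhostCommand(commandDirection);  // turn/move the ghost exactly once
            }
        }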

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server. Situation:
    - Multiple clients connected to a server, which is doing hit detection / physics
    - Clients have different latency for their connections to the server, ranging from 50ms to 500ms.
    - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds)
    - It is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage.
    - I need to show the time remaining for reload on the player's screen
    - Clients can have clocks which are flat-out wrong (bad timezones, etc.)
    What I'm currently doing to deal with latency:
    - Client collects server-side state in a history, tagged with server timestamps
    - Client assesses his time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
    - Client renders all state received from the server 200 ms behind his current time, adjusted by what he believes his time difference with server time is (whether due to wrong clocks, or lag). If he has server states on both sides of that calculated time, he (mostly LERP) interpolates between them; if not, he (LERP) extrapolates.
    - No other client-side prediction of movement (e.g., to make his vehicle seem more responsive) is done so far, but maybe it will be added later.
    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update; the client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client time (perhaps also accounting for the latency of the shoot message from client to server?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, send the shoot event. The server would get the shoot event and consider the time the shot was made to be the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether it won't be the case that players with lower ping get better reload rates), and whether there are more elegant solutions to this problem.
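
    For what it's worth, the server-authoritative part of that idea can be kept quite small. A rough sketch (names are illustrative, not from the question): the server tracks each weapon's next-ready time in its own clock, accepts shots that arrive slightly early by holding them, and reports the next-ready time back in the world-state updates so the client can draw the reload bar in its adjusted server time.

        // Server-side cooldown bookkeeping, all in server time, so client clocks and ping
        // cannot change the effective fire rate.
        class WeaponCooldown
        {
            public double CooldownSeconds = 10.0;
            public double NextReadyTime = 0.0;

            // Returns the server time at which the shot will actually be executed,
            // or null if the request arrived far too early and should be discarded.
            public double? TryShoot(double serverNow, double earlyTolerance = 0.1)
            {
                if (serverNow + earlyTolerance < NextReadyTime)
                    return null;                                 // nowhere near reloaded
                double fireTime = Math.Max(serverNow, NextReadyTime);
                NextReadyTime = fireTime + CooldownSeconds;
                return fireTime;                                 // may be held a few cycles
            }
        }

    Because the decision is made purely in server time, a low-ping player only sees the reload bar more accurately; they do not get to fire more often.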

    Read the article

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: Implement a Compute Shader to calculate the length of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers and I had no problems with them. For what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND, and the buffers have both D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags). Problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. Data is in the buffer, since if I change the UAV to a ShaderResourceView and to a StructuredBuffer in the HLSL side it works. I don't know what I am missing: Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I amb currently doing). Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow? Maybe I am missing some step? This is the declaration of buffers in the Compute Shader: struct Data { float3 v; }; struct Result { float l; }; ConsumeStructuredBuffer<Data> gInput; AppendStructuredBuffer<Result> gOutput; And here the creation of the buffer and UAV for input data: D3D11_BUFFER_DESC inputDesc; inputDesc.Usage = D3D11_USAGE_DEFAULT; inputDesc.ByteWidth = sizeof(Data) * mNumElements; inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS; inputDesc.CPUAccessFlags = 0; inputDesc.StructureByteStride = sizeof(Data); inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED; D3D11_SUBRESOURCE_DATA vinitData; vinitData.pSysMem = &data[0]; HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer)); D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc; uavDesc.Format = DXGI_FORMAT_UNKNOWN; uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER; uavDesc.Buffer.FirstElement = 0; uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND; uavDesc.Buffer.NumElements = mNumElements; md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV); Initial data is an array of Data structs, which contain a XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework: ID3DX11EffectUnorderedAccessViewVariable* Input = mFX->GetVariableByName("gInput")->AsUnorderedAccessView(); Input->SetUnorderedAccessView(uav); // uav is mInputUAV Any ideas? Thank you.

    Read the article

  • Making AI jump on a spot effectively

    - by Pasquale Sada
    How to calculate, in 3D environment, the closest point, from which an AI character can jump onto a platform? Setup I have an initial velocity V(Vx,Vy,VZ) and a spot where the character stands still at S(Sx,Sy,Sz). What I'm trying to achieve is a successful jump on a spot E(Ex,Ey,Ez) where you have clicked on(only lower or higher spot, because I've in place a simple steering behavior for even terrains). There are no obstacles around. I've implemented a formula that can make him jump in a precise way on a spot but you need to declare an angle: the problem arise when the selected spot is straight above your head. It' pretty lame that the char hang there and can reach a thing that is 1cm above is head. I'll share the code I'm using: Vector3 dir = target - transform.position; // get target direction float h = dir.y; // get height difference dir.y = 0; // retain only the horizontal direction float dist = dir.magnitude ; // get horizontal distance float a = angle * Mathf.Deg2Rad; // convert angle to radians dir.y = dist * Mathf.Tan(a); // set dir to the elevation angle dist += h / Mathf.Tan(a); // correct for small height differences // calculate the velocity magnitude float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 *a)); return vel * dir.normalized; Ended up using the lowest angle (20 degree) and checking for collision on the trajectory. If found any increase the angle. Here some code (to improve the code maybe must stop the check at the highest point of the curve): Vector3 BallisticVel(Vector3 target, float angle) { Vector3 dir = target - transform.position; // get target direction float h = dir.y; // get height difference dir.y = 0; // retain only the horizontal direction float dist = dir.magnitude ; // get horizontal distance float a = angle * Mathf.Deg2Rad; // convert angle to radians dir.y = dist * Mathf.Tan(a); // set dir to the elevation angle dist += h / Mathf.Tan(a); // correct for small height differences // calculate the velocity magnitude float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a)); return vel * dir.normalized; } Vector3 TrajectoryPoint(Vector3 startingPosition, Vector3 startingVelocity, float n ) { float t = 1/60 ; // seconds per time step Vector3 stepVelocity = t * startingVelocity; // m/s Vector3 stepGravity = t * t * Physics.gravity; // m/s/s return startingPosition + n * stepVelocity + 0.5f * (n*n+n) * stepGravity; } bool CheckTrajectory(Vector3 startingPosition,Vector3 target, float angle_jump) { Debug.Log("checking"); if(angle_jump < 80f) { Debug.Log("if"); Vector3 startingVelocity = BallisticVel(target, angle_jump); for (int i = 0; i < 180; i++) { //Debug.Log(i); Vector3 trajectoryPosition = TrajectoryPoint( startingPosition, startingVelocity, i ); if(Physics.Raycast(trajectoryPosition,Vector3.forward,safeDistance)) { angle_jump += 10; break; // restart loop with the new angle } else continue; } return true; JumpVelocity = BallisticVel(target, angle_jump); } return false; }
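
    One hedged addition that may help with the "spot straight above your head" case: treat a (nearly) vertical jump as its own branch, since the angle-based formula degenerates when the horizontal distance goes to zero. The sketch below assumes the same Unity-style API as the question; the 0.25f threshold is an arbitrary illustration value.

        // Special-case targets almost directly overhead: the minimum launch speed that
        // just reaches a height h under gravity g is sqrt(2 * g * h).
        Vector3 JumpVelocityFor(Vector3 target, float angleDegrees)
        {
            Vector3 toTarget = target - transform.position;
            Vector3 flat = new Vector3(toTarget.x, 0f, toTarget.z);
            if (flat.magnitude < 0.25f && toTarget.y > 0f)
            {
                float v = Mathf.Sqrt(2f * Physics.gravity.magnitude * toTarget.y);
                return Vector3.up * v;                      // straight up, no ballistic angle needed
            }
            return BallisticVel(target, angleDegrees);      // otherwise use the existing routine
        }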

    Read the article

  • How difficult is it to add vibration/feedback to an open source driving game?

    - by Jonathan Day
    Hi, I'm looking to use SuperTuxKart as a basis for a PhD research project. A key requirement for the game is to provide vibration feedback through the controller (obviously dependant on the controller itself). I don't believe that the game currently includes this feature and I'm trying to get a feel for how big a challenge it would be to add. My background is as a J2EE and PHP developer/architect, so I don't know C++ as such, but am prepared to give it a crack if there are resources and guides to assist, and it's not a herculean task. Alternatively, if you know of any open source games that do include vibration feedback, please feel free to let me know! Preferably the game would be of the style that the player had to navigate a character (or character's vehicle) over a repeatable course/map. TIA, JD

    Read the article

  • Cocos2d Tiled Dynamic Object Layer

    - by Rodrigo Camargo
    I'm trying to develop a cocos2d tile-based game using a sort of 'dynamic' object layer. What I want is that, after the tiled map is loaded, the user can drag something onto the map and it becomes an event when the 'hero' passes over it. I know how to build an object layer in Tiled, but that seems to be for fixed positions, and what I want is a dynamic action position based on what the user selects. For instance, the user can drag a rock onto a tile, and when the character hits that rock he may die, or something. I'm a little lost about how to make it work. Do you have any idea of what I should use or what I should look for? Thanks in advance!

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame rate problems. I did some research and came to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst-case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So, what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The geometry is static, so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). I have no idea how to do that, so does anybody have any suggestions?
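
    A rough sketch of the baking step, assuming XNA 4.0 and that each part's vertex data has already been read out of its Model into VertexPositionNormalTexture arrays (that extraction step is not shown). The world transform of every part is baked into the vertices once, so a whole chunk can be drawn with a single call afterwards; texture coordinates would be remapped here as well if a texture atlas is used.

        // Merge many static parts into one vertex buffer ("chunk") that is drawn in one call.
        VertexBuffer BuildChunk(GraphicsDevice device,
                                List<VertexPositionNormalTexture[]> parts,
                                List<Matrix> worldTransforms)
        {
            var merged = new List<VertexPositionNormalTexture>();
            for (int i = 0; i < parts.Count; i++)
            {
                Matrix world = worldTransforms[i];
                foreach (var v in parts[i])
                {
                    merged.Add(new VertexPositionNormalTexture(
                        Vector3.Transform(v.Position, world),        // bake the world transform in
                        Vector3.TransformNormal(v.Normal, world),
                        v.TextureCoordinate));                       // atlas remap would go here
                }
            }
            var buffer = new VertexBuffer(device, typeof(VertexPositionNormalTexture),
                                          merged.Count, BufferUsage.WriteOnly);
            buffer.SetData(merged.ToArray());
            return buffer;
        }

    Indexed geometry would be handled the same way, with the index lists offset and concatenated into a single IndexBuffer.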

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code. So I tried to invert the projection: the formula for the z and w coordinates after projection are: and dividing z' by w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z, which finally gives: Now, the problem is I don't get the correct position (I have compared the reconstructed one with a rendered position). I then tried using the respective formula I get by doing the same for this Matrix. The corresponding formula is: For some reason, using the above formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
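
    For reference, here is one possible reconstruction of the formulas the post refers to (they were given as images in the original and are missing here), derived from the matrix shown, with n = zNear and f = zFar. In LaTeX:

        z_{clip} = -\frac{f+n}{f-n}\, z_{eye} - \frac{2fn}{f-n}, \qquad w_{clip} = -z_{eye}

        d = \frac{z_{clip}}{w_{clip}} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\, z_{eye}}

        z_{eye} = \frac{2fn}{(f-n)\, d - (f+n)}

    Here d is the depth in normalized device coordinates, i.e. in [-1, 1]; a value read from a [0, 1] depth buffer has to be remapped first with d = 2 * depth - 1, and forgetting that remap is a common reason a "different" formula appears to give the correct result.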

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D Quad to the mouse cursor position I'm able : 1.) To get values into posX, posY, posZ 2.) Translate with the values from those 3 variables But the quad positioning I'm not able to do correctly in such a way that the 2D Quad is near the mouse cursor using those values from those 3 variables eg."posX, posY, posZ" I need the mouse cursor in the center of the 2D Quad. I'm hoping someone can help me achieve this. I've tried searching around with no avail. Heres the function that is ment to do the snapping but instead creates weird flicker or shows nothing at all only the 3d models show up : void display() { glClearColor(0.0,0.0,0.0,1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) { glCallList(*I); } if(DrawArea == true) { glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); cerr << winZ << endl; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); gluUnProject(winX, winY, winZ , modelview, projection, viewport, &posX, &posY, & posZ); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glTranslatef(posX , posY, posZ); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f(0.5, 0.5, 0); glTexCoord2f (1.0, 0.0); glVertex3f(0, 0.5, 0); glTexCoord2f (1.0, 1.0); glVertex3f(0, 0, 0); glTexCoord2f (0.0, 1.0); glVertex3f(0.5, 0, 0); glEnd(); } SwapBuffers(hDC); } I'm using : OpenGL 3.0 WIN32 API C++ GLSL if you really want the full source here it is - http://pastebin.com/1Ncm9HNf , Its pretty messy.

    Read the article

  • What is the XACT API?

    - by EddieV223
    I wanted to use DirectMusic in my game, but it's not in the June 2010 SDK, so I thought that I had to use DirectSound. Then I saw the XAudio2.h header in the SDK's include folder and found that XAudio2 is the replacement for DirectSound. Both are low-level. During my research I stumbled across the XACT API, but can't find a good explanation on it. Is XACT to XAudio2 what DirectMusic was to DirectSound? By which I mean, is the XACT API a high-level, easier-to-use API for playing sounds that abstracts away the details of XAudio2? If not, what is it?

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader: in vec3 position; uniform mat4 lightWVP; void main() { gl_Position = lightWVP * vec4(position, 1.0); } Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • Isometric layer moving inside map

    - by gronzzz
    i'm created isometric map and now trying to limit layer moving. Main idea, that i have left bottom, right bottom, left top, right top points, that camera can not move outside, so player will not see map out of bounds. But i can not understand algorithm of how to do that. It's my layer scale/moving code. - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event { _isTouchBegin = YES; } - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event { NSArray *allTouches = [[event allTouches] allObjects]; UITouch *touchOne = [allTouches objectAtIndex:0]; CGPoint touchLocationOne = [touchOne locationInView: [touchOne view]]; CGPoint previousLocationOne = [touchOne previousLocationInView: [touchOne view]]; // Scaling if ([allTouches count] == 2) { _isDragging = NO; UITouch *touchTwo = [allTouches objectAtIndex:1]; CGPoint touchLocationTwo = [touchTwo locationInView: [touchTwo view]]; CGPoint previousLocationTwo = [touchTwo previousLocationInView: [touchTwo view]]; CGFloat currentDistance = sqrt( pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) + pow(touchLocationOne.y - touchLocationTwo.y, 2.0f)); CGFloat previousDistance = sqrt( pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) + pow(previousLocationOne.y - previousLocationTwo.y, 2.0f)); CGFloat distanceDelta = currentDistance - previousDistance; CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo); pinchCenter = [self convertToNodeSpace:pinchCenter]; CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER); if([self predictionScaleInBounds:predictionScale]) { [self scale:predictionScale scaleCenter:pinchCenter]; } } else { // Dragging _isDragging = YES; CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne]; CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne]; CGPoint delta = ccpSub(current, previous); self.position = ccpAdd(self.position, delta); } } - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event { _isDragging = NO; _isTouchBegin = NO; // Check if i need to bounce _touchLoc = [touch locationInNode:self]; } #pragma mark - Update - (void)update:(CCTime)delta { CGPoint position = self.position; float scale = self.scale; static float friction = 0.92f; //0.96f; if(_isDragging && !_isScaleBounce) { _velocity = ccp((position.x - _lastPos.x)/2, (position.y - _lastPos.y)/2); _lastPos = position; } else { _velocity = ccp(_velocity.x * friction, _velocity.y *friction); position = ccpAdd(position, _velocity); self.position = position; } if (_isScaleBounce && !_isTouchBegin) { float min = fabsf(self.scale - MIN_SCALE); float max = fabsf(self.scale - MAX_SCALE); int dif = max > min ? 1 : -1; if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) || (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) { CGFloat newSscale = scale + dif * (delta * friction); [self scale:newSscale scaleCenter:_touchLoc]; } else { _isScaleBounce = NO; } } }
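
    The clamping itself can stay very small once the bounds are known. A sketch (written C#-style for brevity; the question itself is cocos2d/Objective-C) for a rectangular map, assuming the layer's origin sits at its lower-left corner; for a diamond-shaped isometric map the min/max values would instead be derived from the four corner points mentioned in the question, and they need to be recomputed whenever the scale changes.

        // Keep the layer position inside [min, 0] on each axis so the map always covers the screen.
        // mapPixelWidth/Height are the full map size in points; scale is the current layer scale.
        static float Clamp(float value, float min, float max)
        {
            return Math.Min(Math.Max(value, min), max);
        }

        static void ClampLayerPosition(ref float posX, ref float posY, float scale,
                                       float mapPixelWidth, float mapPixelHeight,
                                       float screenWidth, float screenHeight)
        {
            float minX = screenWidth  - mapPixelWidth  * scale;  // most negative allowed X
            float minY = screenHeight - mapPixelHeight * scale;  // most negative allowed Y
            posX = Clamp(posX, minX, 0f);
            posY = Clamp(posY, minY, 0f);
        }

    Calling this at the end of touchMoved (and again after the bounce/inertia step in update) keeps both dragging and the momentum scrolling inside the map.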

    Read the article

  • 2D isometric picking

    - by Bikonja
    I'm trying to implement picking in my isometric 2D game, however, I am failing. First of all, I've searched for a solution and came to several, different equations and even a solution using matrices. I tried implementing every single one, but none of them seem to work for me. The idea is that I have an array of tiles, with each tile having it's x and y coordinates specified (in this simplified example it's by it's position in the array). I'm thinking that the tile (0, 0) should be on the left, (max, 0) on top, (0, max) on the bottom and (max, max) on the right. I came up with this loop for drawing, which googling seems to have verified as the correct solution, as has the rendered scene (ofcourse, it could still be wrong, also, forgive the messy names and stuff, it's just a WIP proof of concept code) // Draw code int col = 0; int row = 0; for (int i = 0; i < nrOfTiles; ++i) { // XOffset and YOffset are currently hardcoded values, but will represent camera offset combined with HUD offset Point tile = IsoToScreen(col, row, TileWidth / 2, TileHeight / 2, XOffset, YOffset); int x = tile.X; int y = tile.Y; spriteBatch.Draw(_tiles[i], new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); col++; if (col >= Columns) // Columns is the number of tiles in a single row { col = 0; row++; } } // Get selection overlay location (removed check if selection exists for simplicity sake) Point tile = IsoToScreen(_selectedTile.X, _selectedTile.Y, TileWidth / 2, TileHeight / 2, XOffset, YOffset); spriteBatch.Draw(_selectionTexture, new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); // End of draw code public Point IsoToScreen(int isoX, int isoY, int widthHalf, int heightHalf, int xOffset, int yOffset) { Point newPoint = new Point(); newPoint.X = widthHalf * (isoX + isoY) + xOffset; newPoint.Y = heightHalf * (-isoX + isoY) + yOffset; return newPoint; } This code draws the tiles correctly. Now I wanted to do picking to select the tiles. For this, I tried coming up with equations of my own (including reversing the drawing equation) and I tried multiple solutions I found on the internet and none of these solutions worked. Trying out lots of solutions, I came upon one that didn't work, but it seemed like an axis was just inverted. I fiddled around with the equations and somehow managed to get it to actually work (but have no idea why it works), but while it's close, it still doesn't work. I'm not really sure how to describe the behaviour, but it changes the selection at wrong places, while being fairly close (sometimes spot on, sometimes a tile off, I believe never more off than the adjacent tile). This is the code I have for getting which tile coordinates are selected: public Point? ScreenToIso(int screenX, int screenY, int tileHeight, int offsetX, int offsetY) { Point? newPoint = null; int nX = -1; int nY = -1; int tX = screenX - offsetX; int tY = screenY - offsetY; nX = -(tY - tX / 2) / tileHeight; nY = (tY + tX / 2) / tileHeight; newPoint = new Point(nX, nY); return newPoint; } I have no idea why this code is so close, especially considering it doesn't even use the tile width and all my attempts to write an equation myself or use a solution I googled failed. Also, I don't think this code accounts for the area outside the "tile" (the transparent part of the tile image), for which I intend to add a color map, but even if that's true, it's not the problem as the selection sometimes switches on approx 25% or 75% of width or height. 
I'm thinking I've stumbled upon a wrong path and need to backtrack, but at this point, I'm not sure what to do so I hope someone can shed some light on my error or point me to the right path. It may be worth mentioning that my goal is to not only pick the tile. Each main tile will be divided into 5x5 smaller tiles which won't be drawn seperately from the whole main tile, but they will need to be picked out. I think a color map of a main tile with different colors for different coordinates within the main tile should take care of that though, which would fall within using a color map for the main tile (for the transparent parts of the tile, meaning parts that possibly belong to other tiles).
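
    Since IsoToScreen() is a pair of linear equations, it can be inverted exactly rather than approximated, which also explains why the tile width has to appear in the inverse. A sketch of the algebraic inverse of the IsoToScreen() shown above (same parameter meanings assumed):

        // screenX = widthHalf  * ( isoX + isoY) + xOffset
        // screenY = heightHalf * (-isoX + isoY) + yOffset
        // => a = (screenX - xOffset) / widthHalf  =  isoX + isoY
        //    b = (screenY - yOffset) / heightHalf = -isoX + isoY
        public Point ScreenToIso(int screenX, int screenY,
                                 int widthHalf, int heightHalf,
                                 int xOffset, int yOffset)
        {
            float a = (screenX - xOffset) / (float)widthHalf;
            float b = (screenY - yOffset) / (float)heightHalf;
            int isoX = (int)Math.Floor((a - b) / 2f);
            int isoY = (int)Math.Floor((a + b) / 2f);
            return new Point(isoX, isoY);
        }

    Any remaining offset then comes from where the tile sprite's origin sits relative to the drawn diamond (the transparent corners of the tile image), which is exactly where the colour-map idea, or a fixed half-tile offset, comes in.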

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray: Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj; Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 dir = p2 - p1; dir.Normalize(); Ray ray = Ray(p1, dir); Then get the Y-intercept by using algebra: float t = -ray.Position.Y / ray.Direction.Y; Vector3 p = ray.Position + t * ray.Direction; The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the project position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly abusing the fact that floating point calculations are done at 80-bit precision on x86, however before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can do to improve this? EDIT: A little snooping around in .NET Reflector on SlimDX.dll: public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection) { Vector3 coordinate = new Vector3(); Matrix result = new Matrix(); Matrix.Invert(ref worldViewProjection, out result); coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0); coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0); coordinate.Z = (vector.Z - minZ) / (maxZ - minZ); TransformCoordinate(ref coordinate, ref result, out coordinate); return coordinate; } // ... public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result) { Vector3 vector; Vector4 vector2 = new Vector4 { X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41, Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42, Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43 }; float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44)); vector2.W = num; vector.X = vector2.X * num; vector.Y = vector2.Y * num; vector.Z = vector2.Z * num; result = vector; } ...which seems to be a pretty standard method of unprojecting a point from a projection matrix, however this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
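
    One small change that may reduce the jitter without touching SlimDX's Unproject: skip the Normalize() and intersect the plane y = 0 with the un-normalised segment p1 to p2, carrying the few arithmetic steps out in double precision. This is only a sketch of that idea; Unproject itself still works in single precision, so it does not remove every source of error.

        // Intersect the segment between the two unprojected points with the plane y = 0.
        Vector3 MouseOnGroundPlane(Vector3 p1, Vector3 p2)
        {
            double dy = (double)p2.Y - (double)p1.Y;
            double t  = -(double)p1.Y / dy;              // parameter where the segment crosses y = 0
            double x  = (double)p1.X + t * ((double)p2.X - (double)p1.X);
            double z  = (double)p1.Z + t * ((double)p2.Z - (double)p1.Z);
            return new Vector3((float)x, 0f, (float)z);
        }

    Whether this fully fixes the jumpiness depends on where the precision is actually being lost, but it removes the normalisation step and the single-precision ray arithmetic as suspects.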

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes I'm basing my work on this tutorial, and therefore am also using "IBOs" I create my buffers before rendering begins My buffers include vertex and texture data I'm using OpenGL ES 1.1 Fearing some strange effect of the current matrix GL state at the time of buffer creation I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block which (as expected) had no effect I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh that: is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be colinear, giving the child rotation but not translation relative to the parent point. has a normal that is aligned with a 'target'. Classically this target is the enemy that is being engaged, but it might be some other vector e.g. 'the wind' (for sails and flags (and smoke, which is a particle system but the same principle applies)) or 'upwards' (e.g. so bodies of riders bend properly when riding a horse up an incline etc). that the anchor and target alignments have maximum and minimum and a speed coeff. there is game logic for multiple turrets and on a model and deciding which engages which enemy. 'primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there. So to illustrate, a classic tank would be made from three meshes; a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a forth flag mesh on top of the turret that is aligned with 'wind' where wind is a function the engine solves that merges environment's wind angle with angle the vehicle is travelling in an velocity, or something fancy. This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can be modelled by zero-vertex connecting meshes perhaps? This is where I think the approach I outlined begins to feel inelegant, yet perhaps its still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?
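
    On the runtime side, most of this boils down to an ordinary transform hierarchy, regardless of which file format carries the data. An illustrative XNA-style sketch (the class and field names are made up for the example, not part of any existing format):

        // A child mesh hangs off a parent mesh at an anchor and may rotate about one axis.
        class Attachment
        {
            public Matrix  LocalAnchor;   // anchor position/orientation inside the parent mesh
            public Vector3 LocalAxis;     // the single allowed rotation axis, e.g. Vector3.Up
            public float   Angle;         // current rotation, clamped elsewhere to min/max

            public Matrix WorldMatrix(Matrix parentWorld)
            {
                // XNA composes row-vector transforms left to right: local spin, then anchor, then parent.
                return Matrix.CreateFromAxisAngle(LocalAxis, Angle) * LocalAnchor * parentWorld;
            }
        }

    A tank then becomes a small chain: the hull's world matrix feeds the turret's WorldMatrix(), whose result feeds the barrel's, and the per-frame game logic only has to steer each Angle toward its target within the speed and min/max limits described above.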

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, and I can't use the scene graph's ability to sort transparent nodes. The system node should be properly sorted, though. Particle positions are computed on the shader from initial velocity, acceleration and time; in order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?

    Read the article

  • Rotate player based on joystick

    - by pengume
    Hey everyone I have this game that i am making in android and I have a touch screen joystick that moves the player around based on the joysticks position. I cant figure out how to also get the player to rotate at the same angle of the joystick. so when the joystick is to the left the players bitmap is rotated to the left as well. Maybe someone here has some sample code I could look at here is the joysticks class that I am using. `public class GameControls implements OnTouchListener { public float initx = DroidzActivity.screenWidth - 45; //255; // 320 og 425 public float inity = DroidzActivity.screenHeight - 45;//425; // 480 og 267 public Point _touchingPoint = new Point( DroidzActivity.screenWidth - 45, DroidzActivity.screenHeight - 45); public Point _pointerPosition = new Point(DroidzActivity.screenWidth - 100, DroidzActivity.screenHeight - 100); // ogx 220 ogy 150 private Boolean _dragging = false; private boolean attackMode = false; @Override public boolean onTouch(View v, MotionEvent event) { update(event); return true; } private MotionEvent lastEvent; public boolean ControlDragged; private static double angle; public void update(MotionEvent event) { if (event == null && lastEvent == null) { return; } else if (event == null && lastEvent != null) { event = lastEvent; } else { lastEvent = event; } // drag drop if (event.getAction() == MotionEvent.ACTION_DOWN) { if ((int) event.getX() > 0 && (int) event.getX() < 50 && (int) event.getY() > DroidzActivity.screenHeight - 160 && (int) event.getY() < DroidzActivity.screenHeight - 0) { setAttackMode(true); } else { _dragging = true; } } else if (event.getAction() == MotionEvent.ACTION_UP) { if(isAttackMode()){ setAttackMode(false); } _dragging = false; } if (_dragging) { ControlDragged = true; // get the pos _touchingPoint.x = (int) event.getX(); _touchingPoint.y = (int) event.getY(); // Log.d("GameControls", "x = " + _touchingPoint.x + " y = " //+ _touchingPoint.y); // bound to a box if (_touchingPoint.x < DroidzActivity.screenWidth - 75) { // og 400 _touchingPoint.x = DroidzActivity.screenWidth - 75; } if (_touchingPoint.x > DroidzActivity.screenWidth - 15) {// og 450 _touchingPoint.x = DroidzActivity.screenWidth - 15; } if (_touchingPoint.y < DroidzActivity.screenHeight - 75) {// og 240 _touchingPoint.y = DroidzActivity.screenHeight - 75; } if (_touchingPoint.y > DroidzActivity.screenHeight - 15) {// og 290 _touchingPoint.y = DroidzActivity.screenHeight - 15; } // get the angle setAngle(Math.atan2(_touchingPoint.y - inity, _touchingPoint.x - initx) / (Math.PI / 180)); // Move the ninja in proportion to how far // the joystick is dragged from its center _pointerPosition.y += Math.sin(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70); // og 180 70 _pointerPosition.x += Math.cos(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70); // make the pointer go thru if (_pointerPosition.x > DroidzActivity.screenWidth) { _pointerPosition.x = 0; } if (_pointerPosition.x < 0) { _pointerPosition.x = DroidzActivity.screenWidth; } if (_pointerPosition.y > DroidzActivity.screenHeight) { _pointerPosition.y = 0; } if (_pointerPosition.y < 0) { _pointerPosition.y = DroidzActivity.screenHeight; } } else if (!_dragging) { ControlDragged = false; // Snap back to center when the joystick is released _touchingPoint.x = (int) initx; _touchingPoint.y = (int) inity; // shaft.alpha = 0; } } public void setAttackMode(boolean attackMode) { this.attackMode = attackMode; } public boolean isAttackMode() { return attackMode; } public void setAngle(double angle) { 
this.angle = angle; } public static double getAngle() { return angle; } }` I should also note that the player has animations based on when he is moving or attacking.
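
    Since the joystick code already computes an angle with Math.atan2, the same value can drive the sprite's facing. A generic sketch (written C#-style for brevity; the question itself is Android/Java, where the drawing side would be something like rotating the Canvas or a Matrix around the bitmap's centre before drawing):

        // Turn the joystick offset into a facing angle and remember it for the draw call.
        void UpdatePlayerFacing(float stickX, float stickY, float stickCenterX, float stickCenterY)
        {
            float dx = stickX - stickCenterX;
            float dy = stickY - stickCenterY;
            if (dx * dx + dy * dy < 0.0001f)
                return;                                   // stick released: keep the last facing
            playerAngleDegrees = (float)(Math.Atan2(dy, dx) * 180.0 / Math.PI);
        }

    On Android specifically, Canvas.rotate(playerAngleDegrees, spriteCenterX, spriteCenterY) inside a save()/restore() pair, followed by the normal drawBitmap call, is one straightforward way to apply the stored angle, and it leaves the walking/attacking animation frames themselves unchanged.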

    Read the article

  • Ideas for time-keeping in a webbased RPG?

    - by ashy_32bit
    I'm assigned a task of doing the preliminary research stuff for a web-based MMO RPG. Now my buggiest problem here is "web based" vs "MMO RPG". I did some research about time keeping systems and I'm totally confused as how exactly something as real-time as an MMO-RPG can work on some pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping but can it work in an MMO setting ? EDIT: Take a battle for example, player A (human) wants to attack Player B (also human) in the open. How does it work when when player A issues the "attack" command on player B ? how do I inform player B that he is being attacked ? and then how exactly the battle goes on between the two in an HTTP based communication channel? To my knowledge this is impossible unless you resort to another technology (HTML is 1-way, that is you can just ask server and get response, server can't update you unless being asked to. this is very well-known and simply explained). So I though maybe I can somehow change the whole timekeeping model from real-time to a more non-real-time model (towards a turn based RPG for example) and somehow work around the whole problem of "interactivity". EDIT2: It is not that I don't wanna use any server side technologies. For sure it is not gonna work client-side-only even for the most trivial of the multi-player games, let alone an RPG. So sure there would be a (probably complex) server side component to it (the so called Game Engine I suppose). The problem is not the technology that implements the logic (game mechanics) bits but the communication technology and how it limits the game mechanics abilities (like how real-time or turn based it is gonna be). HTTP is a request-response protocol meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). HTTP server can not inform you if anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional tech (totally different animals) like Flash, WebSock, HTML5 etc etc. So maybe the question is: Is it possible to implement a MMORPG using only HTML5/PHP and no periodic page refreshes? if so what would be rules to make it an MMO-RPG? Can't explain it any clearer. Sorry :D

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?

    Read the article

  • Best way to load rigid bodies from file

    - by Mel
    I'm trying to switch to bullet for physics simulation. Lemme just say first that I am so pleased with bullet's accuracy and performance. After messing around it for a bit, I'm now trying to load rigid bodies from files. Most of my models are in blender and with some searching, I was able to export them in .bullet format. However, loading the files into bullet doesn't look like an easy task. I've come across this page that points me to a sample application that loads bullet files. But then it goes and says that this loader is just a starting point. Is there any open source library out there that will allow me to load rigid bodies from a file? I don't really wanna spend that much time trying to create my own loader.

    Read the article

  • Need material for character anatomy in a 2D game. Spartan-like, see picture

    - by Edwin Soho
    I'm creating the art for a 2D-based iOS game. I know some basic anatomy, as you can see from the picture, but I have no idea how to draw the frames for animating the character walking, attacking with his sword and protecting himself with his shield. Is there any anatomy reference for 2D games out there, a book or anything else? For your information, I did try to find one, but all the material I found was very amateur and incomplete. The picture was my attempt at creating an example of the character walking, which I'm not happy with. Please help, thanks. Update: Since I am in a hurry, I decided I would copy the anatomy from other 2D games; it is not that clean, but at least I can get started. The question is still open.

    Read the article

  • Directx and Open Libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example: we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as DirectInput, DXGI, Direct2D and DirectWrite. Does anyone have any lists or comparisons between DirectX components and their open counterparts?

    Read the article

  • Ways to program glitch-style effects

    - by okkk
    Most tutorials for generating glitch art have to do with some form of manipulation of the compression of files. Should my goal instead be to replicate the look of these glitches in shaders, or is it somehow possible to authentically generate the compression artifacts in real time? Example: the effect I'm particularly interested in is referred to as datamoshing. It does "things" using the P-frames of a video (frames that I think store just the change in pixels). I feel like I need a better understanding of both graphics programming and data compression.
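
    As a hedged illustration of the "manipulate the compressed file" family rather than a real-time technique: one classic way to get authentic compression artifacts is to re-encode an image, flip a few bytes in the compressed stream past the header, and decode the damaged result. The sketch below uses C# with System.Drawing purely as an offline experiment; decoders often reject damaged data outright, hence the fallback, and the 600-byte header skip is a rough guess, not a format-exact offset.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        static Bitmap GlitchJpeg(Bitmap source, int corruptions, Random rng)
        {
            using (var encoded = new MemoryStream())
            {
                source.Save(encoded, ImageFormat.Jpeg);          // introduce JPEG block structure
                byte[] data = encoded.ToArray();
                int start = Math.Min(600, data.Length / 2);      // crude attempt to spare the header
                for (int i = 0; i < corruptions; i++)
                {
                    int index = rng.Next(start, data.Length);
                    data[index] ^= (byte)(1 << rng.Next(8));     // flip one bit of compressed data
                }
                try
                {
                    using (var damaged = new MemoryStream(data))
                    using (var decoded = new Bitmap(damaged))
                    {
                        return new Bitmap(decoded);              // copy so the stream can be freed
                    }
                }
                catch (ArgumentException)
                {
                    return new Bitmap(source);                   // decoder refused: return original
                }
            }
        }

    Datamoshing proper works on video P-frames rather than still images, so a real-time shader version usually fakes it instead: keep the previous frame around and re-draw blocks of it displaced by motion vectors, only occasionally letting a clean frame through.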

    Read the article
