Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Mobile Multiplayer games and coping with high latency

    - by spaceOwl
    I'm currently researching a design for an online (realtime) mobile multiplayer game. As such, I'm taking into account that latency (lag) is going to be high (perhaps higher than on PC/consoles). I'd like to know if there are ways to overcome this or minimize the issues caused by high latency. The model I'll be using is peer-to-peer (using Photon Cloud to broadcast messages to all other players). How do I deal with a scenario where a message about a local object's state at time t will only reach other players at *t + HUGE_LAG*?
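
    One common way to hide that gap (not from the question itself, just a sketch of the usual dead-reckoning idea) is to send position plus velocity and a timestamp, extrapolate remote objects forward by the measured lag, and ease toward each correction instead of snapping. A minimal Python sketch, with all names hypothetical and clocks assumed roughly synced:

        import time

        class RemoteObject:
            def __init__(self):
                self.pos = (0.0, 0.0)        # last reported position
                self.vel = (0.0, 0.0)        # last reported velocity
                self.sent_at = time.time()   # sender's timestamp for that state

            def on_state_message(self, pos, vel, sent_at):
                # Store the authoritative state; rendering extrapolates from it.
                self.pos, self.vel, self.sent_at = pos, vel, sent_at

            def predicted_position(self, now):
                # Dead reckoning: advance the last known state by the elapsed lag.
                lag = max(0.0, now - self.sent_at)
                return (self.pos[0] + self.vel[0] * lag,
                        self.pos[1] + self.vel[1] * lag)

        obj = RemoteObject()
        obj.on_state_message(pos=(10.0, 0.0), vel=(3.0, 0.0), sent_at=time.time() - 0.25)
        print(obj.predicted_position(time.time()))   # ~0.25 s of extrapolation applied

    In practice the rendered position is usually blended toward the predicted one over a few frames so corrections don't pop.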

    Read the article

  • Am I missing something in these considerations about a leaderboard's database schema?

    - by misiMe
    I just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database schema, and I've thought about a few possibilities (I didn't go into syntax detail because my question is just about the logic of it):
    1. Leaderboard(ID, Name, Score) - ID: integer, autoincrement, primary key; Name: string; Score: integer. I thought of asking for the name just the first time; if you modify it in the future, that is just an update to the name associated with your ID. With this kind of idea the table may grow fast, because if you choose a different name for every score, it adds a new entry.
    2. Leaderboard(PhoneId, Name, Score) - here PhoneId, the unique identifier of the phone, is the primary key. A con of this choice is that if you want to play on your friend's phone, you can't put a different name on the score.
    3. Leaderboard(Name, Score) - here Name is the primary key. With that, if you enter a name that already exists, you will be prompted to choose another one.
    Do you agree with these considerations? What would you do? Am I missing something?
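
    For what it's worth, here is a tiny sketch of option 1 as an actual table (SQLite used purely for illustration; MySQL syntax differs slightly, and the table and column names are my own):

        import sqlite3

        conn = sqlite3.connect("leaderboard.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS leaderboard (
            id    INTEGER PRIMARY KEY AUTOINCREMENT,  -- option 1: surrogate key
            name  TEXT    NOT NULL,
            score INTEGER NOT NULL
        );
        """)

        def submit_score(name, score):
            # Every submission is a new row, so the table grows with each score --
            # the trade-off described in the question.
            conn.execute("INSERT INTO leaderboard (name, score) VALUES (?, ?)",
                         (name, score))
            conn.commit()

        def top_ten():
            return conn.execute(
                "SELECT name, score FROM leaderboard "
                "ORDER BY score DESC LIMIT 10").fetchall()

        submit_score("misiMe", 1200)
        print(top_ten())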

    Read the article

  • Unity falling body pendulum behaviour

    - by user3447980
    I wonder if someone could provide some guidance. I'm attempting to create pendulum-like behaviour in 2D space in Unity without using a hinge joint. Essentially I want a falling body to act as though it were restrained at a fixed radius from a point, while being subject to gravity, friction, etc. I've tried many modifications of this code and have come up with some cool 'strange-attractor'-like behaviour, but I cannot for the life of me create a realistic pendulum action. This is what I have so far:

        startingposition = transform.position;                                      //Get start position
        newposition = startingposition + velocity;                                  //add old velocity
        newposition.y -= gravity * Time.deltaTime;                                  //add gravity
        newposition = pivot + Vector2.ClampMagnitude(newposition - pivot, radius);  //clamp body at radius???
        velocity = newposition - startingposition;                                  //Get new velocity
        transform.Translate (velocity * Time.deltaTime, Space.World);               //apply to transform

    So I'm working out the new position based on the old velocity plus gravity, then constraining it to a fixed distance from a point, which is the part of the code I cannot get correct. Is this a logical way to go about it?
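
    The constrain-then-rederive-velocity idea is essentially position-based (Verlet) dynamics; a minimal Python sketch of that scheme, with made-up numbers, for comparison:

        import math

        pivot = (0.0, 0.0)
        radius = 1.0
        gravity = -9.81
        dt = 1.0 / 60.0

        pos = (radius, 0.0)        # current position
        prev = (radius, 0.0)       # position one step ago (implicit velocity)

        def step(pos, prev):
            # Verlet integration: new = pos + (pos - prev) + a * dt^2
            nx = pos[0] + (pos[0] - prev[0])
            ny = pos[1] + (pos[1] - prev[1]) + gravity * dt * dt
            # Constraint: project the point back onto the circle around the pivot.
            dx, dy = nx - pivot[0], ny - pivot[1]
            dist = math.hypot(dx, dy) or 1e-9
            nx = pivot[0] + dx / dist * radius
            ny = pivot[1] + dy / dist * radius
            return (nx, ny), pos    # the old position becomes the new "prev"

        for _ in range(10):
            pos, prev = step(pos, prev)
            print(pos)

    One thing worth double-checking against the snippet above is that the computed displacement is only applied once per frame, with gravity and the displacement scaled consistently by the timestep.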

    Read the article

  • How do I apply 2 rotations about different points to a single primitive using OpenGL

    - by Fenoec
    I'm working on a 2D top-down shooter game that has a rotation feature like Realm Of The Mad God such that if you press e the camera rotates around the character in a clockwise direction and q rotates the camera around the character in a counterclockwise direction. I have this working with my floors and walls by translating to the character, doing the screen rotation, and drawing everything with the character as the origin. The problem arises when I shoot projectiles which need to both rotate around the character and rotate around themselves (shooting uses the mouse cursor so I can shoot at any angle). For example, if the screen is not rotated and I'm shooting rectangular projectiles, I want them to face in the direction I'm shooting (rotation around themselves). However if I only do this rotation, when I then rotate the screen the projectiles will always shoot at the same position because my cursor position does not change. Therefore I need to also either rotate the projectiles around the character or rotate the mouse cursor position to get the correct position (which would then totally screw up all of the collision detection). Likewise if I only do the screen rotation on projectiles, the rectangles will always be facing the same way and they would only look correct if I were shooting straight up or straight down. So my question is, how can I perform 2 rotations on a primitive around 2 different points? The only way I can think of is to translate to the character and do the screen rotation, then somehow calculate the translation required to move back to the middle of the projectile (seeing as how my axes are now rotated) and do its rotation. Or am I thinking about this in the wrong way and there is a different solution to accomplishing this effect?
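
    Conceptually, a rotation about a point p is translate(-p), then rotate, then translate(p), and two of those pivoted rotations can simply be multiplied together. A small numpy sketch of the composition (illustration only, not tied to any particular OpenGL code; the positions and angles are example values):

        import numpy as np

        def rotation_about(point, angle):
            # 3x3 homogeneous 2D matrix: rotate by `angle` (radians) around `point`.
            c, s = np.cos(angle), np.sin(angle)
            px, py = point
            t_in  = np.array([[1, 0, -px], [0, 1, -py], [0, 0, 1]], dtype=float)
            rot   = np.array([[c, -s, 0], [s,  c, 0], [0, 0, 1]], dtype=float)
            t_out = np.array([[1, 0,  px], [0, 1,  py], [0, 0, 1]], dtype=float)
            return t_out @ rot @ t_in

        # Example: screen rotated 90 degrees around the character, and the
        # projectile spun 30 degrees around its own centre.
        character = (400.0, 300.0)
        projectile_centre = (450.0, 300.0)
        M = rotation_about(character, np.pi / 2) @ rotation_about(projectile_centre, np.pi / 6)

        corner = np.array([455.0, 305.0, 1.0])   # a projectile vertex in world space
        print(M @ corner)                        # where it ends up after both rotations

    In fixed-function OpenGL the same composition is just the two translate/rotate/translate groups issued in sequence on the modelview stack before drawing the projectile.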

    Read the article

  • A*: how do I make a natural-looking path?

    - by user11177
    I've been reading this: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html But there are some things I don't understand. For example, the article says to use something like this for pathfinding with diagonal movement:

        function heuristic(node) =
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

    I don't know how to set D to get a natural-looking path like in the article. I set D to the lowest cost between adjacent squares, like it said, and I don't know what they meant by the part about the heuristic being 4*D; that does not seem to change anything. This is my heuristic function and move function:

        def heuristic(self, node, goal):
            D = 10
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

        def move_cost(self, current, node):
            cross = abs(current.x - node.x) == 1 and abs(current.y - node.y) == 1
            return 19 if cross else 10

    Result: [screenshot]
    The smooth-sailing path we want to happen: [screenshot]
    The rest of my code: http://pastebin.com/TL2cEkeX
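
    For reference, the same article also gives a diagonal-distance form that uses both the straight cost D and the diagonal cost D2, which with the move costs above would be 10 and 19. A sketch of that version in the same style as the code above:

        def heuristic(self, node, goal):
            # Diagonal ("octile-style") distance from Amit's heuristics page:
            # straight steps cost D, diagonal steps cost D2.
            D, D2 = 10, 19            # matches move_cost() in the question
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * (dx + dy) + (D2 - 2 * D) * min(dx, dy)

    The article's tie-breaking section (scaling the heuristic up by a tiny factor) is also worth a look for making equally short paths look straighter.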

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf The process for doing distance-field rendering is clear to me, but 'how it works' is not. It looks like the distance-field pixels created around the original pixels affect the 2D texture sampling interpolation, but I can't understand that interpolation process. I've read that distance-field rendering is done with nearest-neighbour interpolation. If that is true, shouldn't distance-field rendering produce a non-interpolated result? In my mind it should look like retro-style pixel art. What am I misunderstanding in this process? So far, I see no difference from an alpha test: both of them throw away all pixels that are not inside. How do the extra distance-field pixels affect rendering under nearest-neighbour interpolation?
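
    The Valve paper's trick relies on the sampler interpolating the stored distances (bilinear filtering), and only thresholding afterwards. A small Python sketch of that order of operations, with a made-up 1D "distance field", just to show why the edge lands between texels:

        # A 1D distance field: values > 0.5 are inside the glyph, < 0.5 outside.
        texels = [0.1, 0.3, 0.7, 0.9]

        def sample_bilinear(field, u):
            # Linear interpolation between the two nearest texels (1D case of bilinear).
            x = u * (len(field) - 1)
            i = int(x)
            t = x - i
            j = min(i + 1, len(field) - 1)
            return field[i] * (1 - t) + field[j] * t

        def coverage(u):
            # Threshold *after* interpolation: the 0.5 crossing can fall anywhere
            # between texels, which is why the edge stays sharp under magnification.
            return 1.0 if sample_bilinear(texels, u) > 0.5 else 0.0

        for k in range(9):
            u = k / 8
            print(round(u, 3), coverage(u))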

    Read the article

  • Calculate vector direction

    - by Starkers
    Is the direction angle always measured from the positive x axis? Does a vector in the +,+ quadrant always have a direction between 0 and 90, in -,+ between 90 and 180, in -,- between 180 and 270, and in +,- between 270 and 360? Also, how should we calculate the direction using tan? Would that mean nested if statements to find out which quadrant we're in, and then applying the appropriate "work-arounds"? E.g. if we were in the -,+ quadrant (like in the diagram), would we find that the angle from the positive x axis is 90 + tan^-1(y/x), the 90 + being used only because we're in the -,+ quadrant? That's just a quick guess and may be off; I just want to know whether we use nested if statements to get the angle from the positive x axis. Finally, should we express the angle in degrees or radians?
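
    For what it's worth, the usual way to avoid the quadrant case analysis is the two-argument arctangent, which most languages expose as atan2; a small sketch:

        import math

        def direction_degrees(x, y):
            # atan2 handles all four quadrants (and x == 0) by itself,
            # returning an angle in (-180, 180] measured from the +x axis.
            angle = math.degrees(math.atan2(y, x))
            return angle % 360.0          # remap to [0, 360) if that range is preferred

        print(direction_degrees(1, 1))    # 45.0
        print(direction_degrees(-1, 1))   # 135.0
        print(direction_degrees(-1, -1))  # 225.0
        print(direction_degrees(1, -1))   # 315.0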

    Read the article

  • SFML: Monster following player on a straight line

    - by user3504658
    I've searched for this and found a few topics; usually they use a normalize function and simple vector subtraction, which is OK, but how should I do it in SFML? Instead of using:

        Movement = p.position() - m.position();

    (p is the player and m is the monster), I used something like this to move on a straight line:

        sf::Vector2f Tail(0,0);
        if((mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y) >= (mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x)){
            //sf::Vector2f Tail(0,0);
            Tail.x = mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x;
        }
        else if((mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y) <= (mPlayer.getPosition().x - mMonster.GetInstance().getPosition().x)){
            //sf::Vector2f Tail(0,0);
            Tail.y = mPlayer.getPosition().y - mMonster.GetInstance().getPosition().y;
        }
        if(!MonsterCollosion())
            mMonster.Move(Tail * (TimePerFrame.asSeconds() * 1/2 ) );

    It works OK if the height equals the width of the game window, although I don't think it looks great when the monster moves: it starts fast and then gets slower. What do you advise me to do?
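
    A sketch of the normalize-then-scale approach the other topics describe (plain Python here rather than SFML, purely to show the maths; the speed constant is an arbitrary example):

        import math

        SPEED = 60.0   # pixels per second, arbitrary example value

        def chase_step(monster_pos, player_pos, dt):
            dx = player_pos[0] - monster_pos[0]
            dy = player_pos[1] - monster_pos[1]
            dist = math.hypot(dx, dy)
            if dist < 1e-6:
                return monster_pos            # already on top of the player
            # Normalizing removes the "fast when far, slow when near" effect:
            # the step length depends only on SPEED and dt, not on the distance.
            return (monster_pos[0] + dx / dist * SPEED * dt,
                    monster_pos[1] + dy / dist * SPEED * dt)

        print(chase_step((0.0, 0.0), (100.0, 50.0), 1.0 / 60.0))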

    Read the article

  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering. double currTime; double prevTime = System.nanoTime() / NANO_TO_SEC; double FPSTIMER = System.nanoTime(); double maxTimeDiff = 100.0 / 1000.0; double delta = 1.0 / 60.0; int processes = 0, frames = 0; while(true){ currTime = System.nanoTime() / NANO_TO_SEC; if(currTime - prevTime > maxTimeDiff) prevTime = currTime; if(currTime >= prevTime){ process(); processes++; prevTime += delta; if(currTime < prevTime){ render(); frames++; } } else{ try{ Thread.sleep((long) (1000 * (prevTime - currTime))); } catch(Exception e){} } if(System.nanoTime() - FPSTIMER > 1000000000.0){ System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms"); processes = frames = 0; FPSTIMER += 1000000000.0; } } But for this game loop, I get really minor stuttering where the movement does not look smooth. long prevTime = System.currentTimeMillis(); long prevRenderTime = 0; long currRenderTime = 0; long delta = 0; long msPerTick = 1000 / 60; int frames = 0; int ticks = 0; double FPSTIMER = System.currentTimeMillis(); while (true){ long currTime = System.currentTimeMillis(); delta += (currTime - prevTime) / msPerTick; prevTime = currTime; while (delta >= 1){ ticks++; process(); delta -= 1; } prevRenderTime = System.currentTimeMillis(); render(); frames++; currRenderTime = System.currentTimeMillis(); try{ Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime))); } catch(Exception e){} if(System.currentTimeMillis() - FPSTIMER > 1000.0){ System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms"); ticks = frames = 0; FPSTIMER += 1000.0; } Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the fps for the second game loop, the stuttering goes away. It doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code with just my thread sleeping code added in.
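
    Not an explanation of the specific difference between the two loops, but the pattern usually recommended for smooth movement with a fixed update rate is to run updates off an accumulator and render with an interpolation factor between the last two simulated states. A compact Python sketch of that loop shape (all numbers illustrative):

        import time

        DT = 1.0 / 60.0                 # fixed simulation step
        prev_x = curr_x = 0.0           # the state we interpolate between
        accumulator = 0.0
        last_time = time.perf_counter()

        def update(x):
            return x + 120.0 * DT       # example: move 120 units per second

        def render(x):
            pass                        # draw the object at the interpolated x

        end_time = last_time + 2.0      # run the demo loop for ~2 seconds
        while time.perf_counter() < end_time:
            now = time.perf_counter()
            accumulator += now - last_time
            last_time = now

            while accumulator >= DT:    # zero or more fixed-size updates per frame
                prev_x = curr_x
                curr_x = update(curr_x)
                accumulator -= DT

            alpha = accumulator / DT    # how far we are into the next update
            render(prev_x * (1 - alpha) + curr_x * alpha)

        print(round(curr_x))            # ~240 after two simulated seconds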

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render 2 types of primitives- - points ( a Point Cloud ) - triangles ( a Mesh ) I am rendering points simply without any index arrays and they are getting rendered fine. To render the meshes I am using indexed drawing with the face list array having the indices of the vertices to be rendered as Triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers. But the indexed drawing command do not draw anything. The code is as follows- Main Display Function: void display() { simple->enable(); simple->bindUniform("MV",modelview); simple->bindUniform("P", projection); // rendering Point Cloud glBindVertexArray(vao); // Vertex buffer Point Cloud glBindBuffer(GL_ARRAY_BUFFER,vertexbuffer); glEnableVertexAttribArray(0); glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0); // Color Buffer point Cloud glBindBuffer(GL_ARRAY_BUFFER,colorbuffer); glEnableVertexAttribArray(1); glVertexAttribPointer(1,3,GL_FLOAT,GL_FALSE,0,0); // Render Colored Point Cloud //glDrawArrays(GL_POINTS,0,model->vertexCount); glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); // ---------------- END---------------------// //// Floor Rendering glBindBuffer(GL_ARRAY_BUFFER,fl); glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0); glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,0,(void *)48); glDrawArrays(GL_QUADS,0,4); glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); // -----------------END---------------------// //Rendering the Meshes //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING //////////////////// glBindVertexArray(vid); for(int i=0;i<NUM_MESHES;i++) { glBindBuffer(GL_ARRAY_BUFFER,mVertex[i]); glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0); glVertexAttribPointer(1,3,GL_FLOAT,GL_FALSE,0,(void *)(meshes[i]->vertexCount*sizeof(glm::vec3))); //glDrawArrays(GL_TRIANGLES,0,meshes[i]->vertexCount); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,mFace[i]); //cout<<gluErrorString(glGetError()); glDrawElements(GL_TRIANGLES,meshes[i]->faceCount*3,GL_FLOAT,(void *)0); glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); } glUseProgram(0); glutSwapBuffers(); glutPostRedisplay(); } Point Cloud Buffer Allocation Initialization: void initGLPointCloud() { glGenBuffers(1,&vertexbuffer); glGenBuffers(1,&colorbuffer); glGenBuffers(1,&fl); //Populates the position buffer glBindBuffer(GL_ARRAY_BUFFER,vertexbuffer); glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof (glm::vec3), &model->positions[0], GL_STATIC_DRAW); //Populates the color buffer glBindBuffer(GL_ARRAY_BUFFER, colorbuffer); glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof (glm::vec3), &model->colors[0], GL_STATIC_DRAW); model->FreeMemory(); // To free the not needed memory, as the data has been already // copied on graphic card, and wont be used again. 
glBindBuffer(GL_ARRAY_BUFFER,0); } Meshes Buffer Initialization: void initGLMeshes(int i) { glBindBuffer(GL_ARRAY_BUFFER,mVertex[i]); glBufferData(GL_ARRAY_BUFFER,meshes[i]->vertexCount*sizeof(glm::vec3)*2,NULL,GL_STATIC_DRAW); glBufferSubData(GL_ARRAY_BUFFER,0,meshes[i]->vertexCount*sizeof(glm::vec3),&meshes[i]->positions[0]); glBufferSubData(GL_ARRAY_BUFFER,meshes[i]->vertexCount*sizeof(glm::vec3),meshes[i]->vertexCount*sizeof(glm::vec3),&meshes[i]->colors[0]); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,mFace[i]); glBufferData(GL_ELEMENT_ARRAY_BUFFER,meshes[i]->faceCount*sizeof(glm::vec3), &meshes[i]->faces[0],GL_STATIC_DRAW); meshes[i]->FreeMemory(); //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0); } Initialize the Rendering, load and create shader and calls the mesh and PCD initializers. void initRender() { simple= new GLSLShader("shaders/simple.vert","shaders/simple.frag"); //Point Cloud //Sets up VAO glGenVertexArrays(1, &vao); glBindVertexArray(vao); initGLPointCloud(); //floorData glBindBuffer(GL_ARRAY_BUFFER, fl); glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER,0); glBindVertexArray(0); //Meshes for(int i=0;i<NUM_MESHES;i++) { if(i==0) // SET up the new vertex array state for indexed Drawing { glGenVertexArrays(1, &vid); glBindVertexArray(vid); glGenBuffers(NUM_MESHES,mVertex); glGenBuffers(NUM_MESHES,mColor); glGenBuffers(NUM_MESHES,mFace); } initGLMeshes(i); } glEnable(GL_DEPTH_TEST); } Any help would be much appreciated, I have been breaking my head on this problem since 3 days, and still it is unsolved.

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could switch to wireframe or something like that, but as far as I know that is not possible.

    Read the article

  • Initial direction for interception between two vehicles?

    - by Larolaro
    I'm working with a bit of projectile prediction for my AI and I'm looking for some ideas, any input? If a blue vehicle is moving in a direction at a constant speed of X m/s and a stationary orange vehicle has a rocket that travels Y m/s, which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest time in the future? Thanks for reading!
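
    The standard closed form sets |target_pos + target_vel*t - shooter_pos| = rocket_speed*t and solves the resulting quadratic for the smallest positive t. A Python sketch (the function name and test values are mine):

        import math

        def intercept_direction(shooter, target, target_vel, rocket_speed):
            # Relative position of the target.
            rx, ry = target[0] - shooter[0], target[1] - shooter[1]
            vx, vy = target_vel
            # |r + v*t| = s*t  ->  (v.v - s^2) t^2 + 2 (r.v) t + r.r = 0
            a = vx * vx + vy * vy - rocket_speed ** 2
            b = 2 * (rx * vx + ry * vy)
            c = rx * rx + ry * ry
            if abs(a) < 1e-9:                      # rocket and target speeds equal
                t = -c / b if b else None
            else:
                disc = b * b - 4 * a * c
                if disc < 0:
                    return None                    # rocket too slow to ever hit
                roots = [(-b - math.sqrt(disc)) / (2 * a),
                         (-b + math.sqrt(disc)) / (2 * a)]
                positive = [t for t in roots if t > 0]
                t = min(positive) if positive else None
            if t is None or t <= 0:
                return None
            # Aim at where the target will be at time t.
            aim = (rx + vx * t, ry + vy * t)
            length = math.hypot(*aim)
            return (aim[0] / length, aim[1] / length)

        print(intercept_direction((0, 0), (100, 0), (0, 10), 50.0))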

    Read the article

  • Circle physics and collision using vectors

    - by Joe Hearty
    This is a problem I've been having: when I create a set number of filled circles at random locations on a JPanel and apply gravity (a negative change in the y), the circles collide with each other. I want them to detect the collision and push apart in the opposite direction using vectors, but I don't know how to apply that to my scenario. Could someone help?

        public void drawballs(Graphics g){
            g.setColor (Color.white);
            //displays circles
            for(int i = 0; i<xlocationofcircles.length-1; i++){
                g.fillOval( (int) xlocationofcircles[i], (int) (ylocationofcircles[i]) ,16 ,16 );
                ylocationofcircles[i]+=.2;      //gravity
                if(ylocationofcircles[i] > 550) //stops gravity at bottom of screen
                    ylocationofcircles[i]-=.2;
                //Check distance between circles (i think..)
                float distance =(xlocationofcircles[i+1]-xlocationofcircles[i]) + (ylocationofcircles[i+1]-xlocationofcircles[i]);
                if( Math.sqrt(distance) <16) ...
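
    A sketch of the usual vector treatment, kept separate from the drawing code (plain Python, 16-pixel-wide circles as in the snippet; everything here is illustrative):

        import math

        RADIUS = 8.0     # the ovals above are 16 px wide

        def resolve_collision(x, y, i, j):
            # Vector from circle i to circle j.
            dx = x[j] - x[i]
            dy = y[j] - y[i]
            dist = math.hypot(dx, dy)
            overlap = 2 * RADIUS - dist
            if dist == 0 or overlap <= 0:
                return                       # not touching (or exactly coincident)
            # Unit direction of the push; move each circle half the overlap
            # in opposite directions along it.
            nx, ny = dx / dist, dy / dist
            x[i] -= nx * overlap / 2;  y[i] -= ny * overlap / 2
            x[j] += nx * overlap / 2;  y[j] += ny * overlap / 2

        xs = [100.0, 110.0]
        ys = [200.0, 204.0]
        resolve_collision(xs, ys, 0, 1)
        print(xs, ys)

    Note that the distance needs dx*dx + dy*dy (or hypot), and that every pair of circles usually has to be checked, not just neighbouring indices.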

    Read the article

  • How to alter the image pixels of a wildlife bird?

    - by NoobScratcher
    Hello. I was hoping someone knew how to move pixels, or change the color and position of actual image pixels, and could explain and show the code to do so. I know how to write pixels to a surface or screen surface:

        unsigned int *ptr = static_cast <unsigned int *> (screen->pixels);
        int offset = y * (screen->pitch / sizeof(unsigned int));
        ptr[offset + x] = color;

    But I don't know how to alter or manipulate the pixels of a PNG image. My questions are: how do I get the values and locations of the pixels, and what do I have to write to make that happen? Then, how do I actually change the values or locations of those pixels? Any ideas, tips or suggestions are also welcome!

        int main(int argc , char *argv[])
        {
            SDL_Surface *Screen = SDL_SetVideoMode(640,480,32,SDL_SWSURFACE);
            SDL_Surface *Image;
            Image = IMG_Load("image.png");

            bool done = false;
            SDL_Event event;

            while(!done)
            {
                SDL_FillRect(Screen,NULL,(0,0,0));
                SDL_BlitSurface(Image,NULL,Screen,NULL);

                while(SDL_PollEvent(&event))
                {
                    switch(event.type)
                    {
                        case SDL_QUIT:
                            return 0;
                            break;
                    }
                }
                SDL_Flip(Screen);
            }
            return 0;
        }
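
    The same offset arithmetic applies to a loaded image's own pixel buffer (after locking it, in SDL's case). A library-free Python sketch of the read/modify/write idea on a flat 32-bit buffer, just to show the indexing; the ARGB channel layout is an assumption:

        WIDTH, HEIGHT = 4, 3
        # Flat buffer of WIDTH*HEIGHT pixels, each stored as a single 32-bit int.
        pixels = [0x00000000] * (WIDTH * HEIGHT)

        def get_pixel(x, y):
            return pixels[y * WIDTH + x]          # same offset formula as in the question

        def set_pixel(x, y, color):
            pixels[y * WIDTH + x] = color

        # Read a pixel, brighten its red channel, and "move" it one pixel right.
        c = get_pixel(1, 1)
        r = min(0xFF, ((c >> 16) & 0xFF) + 40)    # assumes ARGB channel layout
        set_pixel(1, 1, 0x00000000)               # clear the old location
        set_pixel(2, 1, (c & 0xFF00FFFF) | (r << 16))
        print(hex(get_pixel(2, 1)))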

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue: my app leaks memory on a device only, not in the simulator. It leaks if I schedule the update method anywhere, on any scene. It leaks even though the update method is empty; there's nothing inside it except an NSLog. How can that be? I have even scheduled update on the very first scene, where it seems there's nothing to leak, and scheduled another empty one, and it's leaking (or not leaking but allocating something); the result is the same - the amount of memory consumed keeps increasing and my app soon crashes. I can detect the leak using Instruments (Memory - Activity Monitor) or with the help of the following function:

        void report_memory(void) {
            struct task_basic_info info;
            mach_msg_type_number_t size = sizeof(info);
            kern_return_t kerr = task_info(mach_task_self(),
                                           TASK_BASIC_INFO,
                                           (task_info_t)&info,
                                           &size);
            if( kerr == KERN_SUCCESS ) {
                NSLog(@"Memory in use (in bytes): %u", info.resident_size);
            } else {
                NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
            }
        }

    Can anyone explain to me what's going on?

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from a Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance like for example the Physics Gun) however I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe and I've compared outputs from this with the example code. I think where my problem lies is what I do to the Quaternion that is returned from the LGArcBall. Currently I am taking the returned Quaternion converting it to a rotation matrix. Getting the product of the last rotation (set when the object is first clicked) and then returning that into a Rotator and setting that to the objects rotation. If you could point me in the right direction that would be great, my code can be found below. class LGArcBall extends Object; var Quat StartRotation; var Vector StartVector; var float AdjustWidth, AdjustHeight, Epsilon; function SetBounds(float NewWidth, float NewHeight) { AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f); AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f); } function StartDrag(Vector2D startPoint, Quat rotation) { StartVector = MapToSphere(startPoint); } function Quat Update(Vector2D currentPoint) { local Vector currentVector, perp; local Quat newRot; //Map the new point to the sphere currentVector = MapToSphere(currentPoint); //Compute the vector perpendicular to the start and current perp = startVector cross currentVector; //Make sure our length is larger than Epsilon if (VSize(perp) > Epsilon) { //Return the perpendicular vector as the transform newRot.X = perp.X; newRot.Y = perp.Y; newRot.Z = perp.Z; //In the quaternion values, w is cosine (theta / 2), where //theta is the rotation angle newRot.W = startVector dot currentVector; } else { //The two vectors coincide, so return an identity transform newRot.X = 0.0f; newRot.Y = 0.0f; newRot.Z = 0.0f; newRot.W = 0.0f; } return newRot; } function Vector MapToSphere(Vector2D point) { local float x, y, length, norm; local Vector result; //Transform the mouse coords to [-1..1] //and inverse the Y coord x = (point.X * AdjustWidth) - 1.0f; y = 1.0f - (point.Y * AdjustHeight); length = (x * x) + (y * y); //If the point is mapped outside of the sphere //( length > radius squared) if (length > 1.0f) { norm = 1.0f / Sqrt(length); //Return the "normalized" vector, a point on the sphere result.X = x * norm; result.Y = y * norm; result.Z = 0.0f; } else //It's inside of the sphere { //Return a vector to the point mapped inside the sphere //sqrt(radius squared - length) result.X = x; result.Y = y; result.Z = Sqrt(1.0f - length); } return result; } DefaultProperties { Epsilon = 0.000001f } I'm then attempting to rotate that object when the mouse is dragged, with the following update code in my PlayerController. //Get Mouse Position MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X; MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y; newQuat = ArcBall.Update(MousePosition); rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat)); rotMatrix = rotMatrix * LastRot; LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating); LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • How to procedurally grow an artistic 2D tree in real time (L-system?)

    - by lalan
    Recently I programmed an L-system module, and it got me interested further. I am a Plants vs Zombies junkie as well, and I really liked the concept of the Tree of Wisdom. I would love to create similar procedural art just for fun and to learn more. Question: how should I approach the process of creating an artistic tree (2D, perhaps with a fixed camera/perspective) dynamically? Ideally I would like to start with a plant (only a stem with a leaf) and grow it dynamically using some influence (input/user action) over its structure. These influences may result in different types of branching, curves in the branches, its spread, the location of fruits, the color of flowers, etc. I want it to be really full of life/spirit. :) Plants vs Zombies: Tree of Wisdom - it would be great to dynamically grow a similar tree, but with a lot more variation and animation happening. My background: student/programmer; I have used a few game engines (Ogre3D, cocos2d, Unity) but haven't really programmed directly with OpenGL, and I'm trying to fix that :). I am ready to spend considerable time. Please let me know about the APIs, and how would an expert like you take on this problem? Why 2D? I think it's easier to solve the problem considering only two dimensions. Artistic inspirations: only the tree, with fruits and leaves, without the shrubs at the bottom; the large tree (visible branches, green leaves, flowers, fruits, etc.) on the left, behind the monkey; PixelJunk's Eden (art style inspiration); a procedurally generated apple tree using fractals. Please let me know if the question was easy to understand; I can elaborate further. I hope a discussion of the various approaches will be helpful for everyone. You guys are awesome.
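
    Since the question starts from an L-system module, here is a tiny sketch of a stochastic rewrite step, where the 'influence' simply biases which production is picked (the rules, symbols and probabilities are invented for illustration; rendering the string with a turtle is a separate step):

        import random

        # F = grow segment, [ ] = push/pop a branch, + / - = turn.
        RULES = {
            "F": [
                (0.5, "F[+F]F[-F]F"),   # bushy branching
                (0.3, "F[+F]F"),        # lighter branching
                (0.2, "FF"),            # just elongate
            ],
        }

        def rewrite(axiom, steps, bias=0.0, rng=random.Random(42)):
            # `bias` nudges the choice toward the first (bushier) rule,
            # standing in for user input / growth influence.
            s = axiom
            for _ in range(steps):
                out = []
                for ch in s:
                    options = RULES.get(ch)
                    if not options:
                        out.append(ch)
                        continue
                    r = max(0.0, rng.random() - bias)
                    acc = 0.0
                    for p, repl in options:
                        acc += p
                        if r <= acc:
                            out.append(repl)
                            break
                s = "".join(out)
            return s

        print(rewrite("F", 2, bias=0.1))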

    Read the article

  • Javascript A* path finding

    - by Veyha
    I am trying to learn A* pathfinding. I am using this library: https://github.com/qiao/PathFinding.js But there is one thing I don't understand how to do. To find a path from player.x/player.y (player.x and player.y are both 0) to 10/10, I use this code:

        var path = finder.findPath(player.x, player.y, 10, 10, grid);

    This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this:

        path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]
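
    The usual pattern is to keep an index into that array and move the player toward path[index] each frame, advancing the index when the waypoint is reached. A small Python sketch of the idea (the speed and threshold are arbitrary):

        import math

        path = [[0, 0], [1, 0], [1, 1], [2, 1]]   # output of the pathfinder
        player = {"x": 0.0, "y": 0.0}
        index = 0
        SPEED = 3.0        # tiles per second
        EPSILON = 0.05     # "close enough" to snap to a waypoint

        def follow_path(dt):
            global index
            if index >= len(path):
                return                              # path finished
            tx, ty = path[index]
            dx, dy = tx - player["x"], ty - player["y"]
            dist = math.hypot(dx, dy)
            if dist <= EPSILON:
                player["x"], player["y"] = float(tx), float(ty)
                index += 1                          # head for the next waypoint
                return
            step = min(dist, SPEED * dt)
            player["x"] += dx / dist * step
            player["y"] += dy / dist * step

        for _ in range(200):                        # simulate ~3.3 seconds at 60 fps
            follow_path(1.0 / 60.0)
        print(player, "reached waypoint index", index)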

    Read the article

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement this? So far, I have investigated:
    - Drawing a textured quadrilateral: quads look odd as they are composed of triangles, and the distortion looks odd. The issue I'm facing has been discussed here and here as well.
    - Setting a transformation on the device: I need help getting this implemented.
    - Pixel shaders: I was not able to implement the desired effect.
    Any help would be much appreciated.

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get around this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }
            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point of the spline
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates and I can give a rough estimate of a position and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object can be closer to the object. The other problem is figuring out the proportion between the two points it lies between, i.e. if there is a point a at coordinate (0,0,0) and a point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be '0.5' (as it is equally distant from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
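
    The proportion the question asks about is exactly the result of projecting the object onto the segment a-b (dot product over squared length). A small Python sketch that also picks the best pair by scanning the cached samples (brute force, illustration only; sample points are made up):

        def project_on_segment(p, a, b):
            # Returns t in [0, 1]: how far along a->b the closest point to p lies.
            ab = [b[i] - a[i] for i in range(3)]
            ap = [p[i] - a[i] for i in range(3)]
            denom = sum(c * c for c in ab) or 1e-12
            t = sum(ap[i] * ab[i] for i in range(3)) / denom
            return max(0.0, min(1.0, t))

        def closest_segment(p, samples):
            # samples: the ordered list of cached spline positions.
            best = (float("inf"), 0, 0.0)
            for i in range(len(samples) - 1):
                a, b = samples[i], samples[i + 1]
                t = project_on_segment(p, a, b)
                closest = [a[k] + (b[k] - a[k]) * t for k in range(3)]
                d2 = sum((p[k] - closest[k]) ** 2 for k in range(3))
                if d2 < best[0]:
                    best = (d2, i, t)
            return best[1], best[2]      # segment index and proportion along it

        samples = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
        print(closest_segment((0.5, 0.3, 0), samples))   # -> (0, 0.5) for this example

    The interpolated Alpha and Forward then come from lerping the two cached entries of that segment by the returned proportion.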

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every 1 second. But I can't get the code right. I tried enabling the collider in the Update function and putting a yield to make it update every second or so. But it's not working (it gives me an error: Update() cannot be a coroutine.) How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if(!trigger){
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if(trigger){
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
        }
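
    The timer-system idea from the question is the usual fix: accumulate the frame's delta time in a normal per-frame update and toggle once it passes the wait time, with no yielding. A Python sketch of that pattern (engine-agnostic; the 1-second interval matches waitTime above):

        WAIT_TIME = 1.0

        class ColliderToggler:
            def __init__(self):
                self.enabled = False
                self.timer = 0.0

            def update(self, dt):
                # Called once per frame with the frame's delta time.
                self.timer += dt
                if self.timer >= WAIT_TIME:
                    self.timer -= WAIT_TIME     # keep the remainder so timing stays stable
                    self.enabled = not self.enabled

        t = ColliderToggler()
        for frame in range(181):                # ~3 seconds at 60 fps
            t.update(1.0 / 60.0)
        print(t.enabled)                        # toggled three times -> True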

    Read the article

  • Managing tile maps in a 2D array to be painted on an HTML5 canvas

    - by weka
    So, I'm making a HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height | 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now. How tiles are loaded and painted on map The map is being painted by tiles (32x32) using the Image() piece. The image files are loaded through a simple for loop and put into an array called tiles[] to be PAINTED on using drawImage(). First, we load the tiles... and here's how it's being done: // SET UP THE & DRAW THE MAP TILES tiles = []; var loadedImagesCount = 0; for (x = 0; x <= NUM_OF_TILES; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "js/tiles/t" + x + ".png"; imageObj.onload = function () { console.log("Added tile ... " + loadedImagesCount); loadedImagesCount++; if (loadedImagesCount == NUM_OF_TILES) { // Onces all tiles are loaded ... // We paint the map for (y = 0; y <= 15; y++) { for (x = 0; x <= 10; x++) { theX = x * 32; theY = y * 32; context.drawImage(tiles[5], theY, theX, 32, 32); } } } }; tiles.push(imageObj); } Naturally, when a player starts a game it loads the map they last left off. But for here, it an all-grass map. Right now, the maps use 2D arrays. Here's an example map. [[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1], [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1], [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]]; I get different maps using a simple if structure. Once the 2d array above is return, the corresponding number in each array will be painted according to Image() stored inside tile[]. Then drawImage() will occur and paint according to the x and y and times it by 32 to paint on the correct x-y coordinate. How multiple map switching occurs With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and bottomID. currentID: The current ID of the map you are on. leftID: What ID of currentID to load when you exit on the left of current map. rightID: What ID of currentID to load when you exit on the right of current map. downID: What ID of currentID to load when you exit on the bottom of current map. upID: What ID of currentID to load when you exit on the top of current map. Something to note: If either leftID, rightID, upID, or bottomID are NOT specific, that means they are a 0. That means they cannot leave that side of the map. It is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited... for example if they exited on the bottom, bottomID will the number of the map to load and thus be painted on the map. Here's a representational .GIF to help you better visualize: As you can see, sooner or later, with many maps I will be dealing with many IDs. And that can possibly get a little confusing and hectic. The obvious pros is that it load 176 tiles at a time, refresh a small 512x352 canvas, and handles one map at time. The con is that the MAP ids, when dealing with many maps, may get confusing at times. 
My question Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map. The map-size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize: Sorry if you cannot understand my English. Please ask anything you have trouble understanding. Hopefully, I made it clear. Thanks.

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly- when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space- the resolution of the client area of the window I'm rendering into). Any suggestions? auto&& size = GetDimensions(); D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 }; D3DCALL(device->SetViewport(&vp)); static const int slices = 10; std::vector<Object*> result; for(int i = 0; i < slices; i++) { D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = { D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices), D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices) }; D3DXMATRIXA16 Identity; D3DXMatrixIdentity(&Identity); D3DXVec3UnprojectArray( WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), &vp, &Projection, &View, &Identity, 8 ); Math::AABB Frustrum; auto world_begin = std::begin(WorldSpaceFrustrumPoints); auto world_end = std::end(WorldSpaceFrustrumPoints); auto world_initial = WorldSpaceFrustrumPoints[0]; Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x; Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y; Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z; Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x; Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y; Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z; auto slices_result = ObjectTree.collision(Frustrum); result.insert(result.end(), slices_result.begin(), slices_result.end()); } return result;
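
    A different (and usually tighter) formulation than unprojecting corner boxes is to extract the six frustum planes from the view-projection matrix and test each object's AABB against them. A numpy sketch of the standard plane-extraction idea; the row/column and sign conventions depend on the maths library (row-vector D3DX versus column-vector style), so treat them as an assumption to verify:

        import numpy as np

        def frustum_planes(view_proj):
            # Gribb/Hartmann-style extraction: combine the 4th row with the others.
            m = np.asarray(view_proj, dtype=float)
            planes = [
                m[3] + m[0],   # left
                m[3] - m[0],   # right
                m[3] + m[1],   # bottom
                m[3] - m[1],   # top
                m[3] + m[2],   # near
                m[3] - m[2],   # far
            ]
            # Normalize so plane distances are in world units.
            return [p / np.linalg.norm(p[:3]) for p in planes]

        def aabb_visible(planes, box_min, box_max):
            for p in planes:
                # Pick the AABB corner furthest along the plane normal ("positive vertex").
                px = box_max[0] if p[0] >= 0 else box_min[0]
                py = box_max[1] if p[1] >= 0 else box_min[1]
                pz = box_max[2] if p[2] >= 0 else box_min[2]
                if p[0] * px + p[1] * py + p[2] * pz + p[3] < 0:
                    return False               # completely outside this plane
            return True

        vp = np.identity(4)      # placeholder view-projection, for illustration only
        planes = frustum_planes(vp)
        print(aabb_visible(planes, (-0.1, -0.1, -0.1), (0.1, 0.1, 0.1)))   # True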

    Read the article

  • Tips for building an AI for a 2D racing game

    - by declique
    I have a school project to build an AI for a 2D racing game in which it will compete with several other AIs (no collision). We are given a black and white bitmap image of the racing track, and we are allowed to choose basic stats for our car (handling, acceleration, max speed and brakes) after we receive the map. The AI connects to the game's server and sends it, several times a second, numbers for the current acceleration and steering. The language I chose is C++, by the way. The questions are: what is the best strategy or algorithm (since I want to try and win)? I currently have in mind some ideas found on the net and one or two of my own, but before I start to code I would like to know that my approach is one of the best. What good books are there on the matter? What sites should I refer to?

    Read the article

  • Adding 'swerve' to a direction

    - by Skoder
    Hey. I'm not much of a maths expert, so this is probably quite straightforward. I was playing a Flash soccer game where you take free kicks. You provide Power, Swerve and Direction. I'm reading up on vectors and such so I can use the direction and power information to shoot the ball with the correct velocity. What I don't understand is how the 'Swerve' information is used. What formula connects the Swerve information with the Direction and Power? (This is all in 2D.) Thanks for any advice.
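
    One common model (an assumption on my part, not something taken from the game in question) treats swerve as a constant sideways acceleration perpendicular to the current velocity, so the ball's path curves while power still sets the initial speed. A Python sketch with invented numbers:

        import math

        def simulate_kick(power, direction_deg, swerve, steps=60, dt=1.0 / 60.0):
            # power  -> initial speed, direction_deg -> initial heading,
            # swerve -> sideways acceleration (its sign picks left or right curl).
            angle = math.radians(direction_deg)
            vx, vy = math.cos(angle) * power, math.sin(angle) * power
            x, y = 0.0, 0.0
            points = []
            for _ in range(steps):
                speed = math.hypot(vx, vy) or 1e-9
                # Unit vector perpendicular to the velocity (velocity rotated 90 degrees).
                px, py = -vy / speed, vx / speed
                vx += px * swerve * dt
                vy += py * swerve * dt
                x += vx * dt
                y += vy * dt
                points.append((x, y))
            return points

        path = simulate_kick(power=20.0, direction_deg=45.0, swerve=8.0)
        print(path[-1])    # end point after one second of flight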

    Read the article
