Search Results

Search found 17924 results on 717 pages for 'game center'.


  • C# Collision test of a ship and asteroid, angle confusion

    - by Cherry
    We are trying to do collision detection between the ship and an asteroid. If it succeeds, it should detect the collision N turns before impact. However, the code gets confused between angles such as 350 and 15 and it is not really working: sometimes the ship moves, sometimes it does not move at all, and it does not shoot at the right time either. How can I make the collision detection work, and how do I solve the angle confusion problem?

        // Get velocities of asteroid
        Console.WriteLine("lol");

        // IF equation is between -2 and -3
        if (equation1a <= -2)
        {
            // Calculate no. turns till asteroid hits
            float turns_till_hit = dx / vx;

            // Calculate angle of asteroid
            float asteroid_angle_rad = (float)Math.Atan(Math.Abs(dy / dx));
            float asteroid_angle_deg = (float)(asteroid_angle_rad * 180 / Math.PI);
            float asteroid_angle = 0;

            // Calculate angle if asteroid is in certain positions
            if (asteroid.Y > ship.Y && asteroid.X > ship.X) { asteroid_angle = asteroid_angle_deg; }
            else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { asteroid_angle = (360 - asteroid_angle_deg); }
            else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 + asteroid_angle_deg); }
            else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 - asteroid_angle_deg); }

            // IF turns till asteroid hits are less than 35
            if (turns_till_hit < 50)
            {
                float angle_between = 0;

                // Calculate angle between if asteroid is in certain positions
                if (asteroid.Y > ship.Y && asteroid.X > ship.X) { angle_between = ship_angle - asteroid_angle; }
                else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { angle_between = (360 - Math.Abs(ship_angle - asteroid_angle)); }
                else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; }
                else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; }

                // If angle less than 0, add 360
                if (angle_between < 0)
                {
                    //angle_between %= 360;
                    angle_between = Math.Abs(angle_between);
                }

                // Calculate no. of turns to face asteroid
                float turns_to_face = angle_between / 25;
                if (turns_to_face < turns_till_hit)
                {
                    float ship_angle_left = ShipAngle(ship_angle, "leftKey", 1);
                    float ship_angle_right = ShipAngle(ship_angle, "rightKey", 1);
                    float angle_between_left = Math.Abs(ship_angle_left - asteroid_angle);
                    float angle_between_right = Math.Abs(ship_angle_right - asteroid_angle);
                    if (angle_between_left < angle_between_right) { leftKey = true; }
                    else if (angle_between_right < angle_between_left) { rightKey = true; }
                }

                if (angle_between > 0 && angle_between < 25) { spaceKey = true; }
            }
        }
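    A likely source of the 350-versus-15 confusion is comparing raw angles directly: numerically they are 335 degrees apart even though the ship only needs to turn 25 degrees. A minimal sketch of a wrap-aware comparison (AngleDifference is a hypothetical helper, not part of the poster's code):

        // Normalize the signed difference between two angles into [-180, 180),
        // so 350 and 15 compare as 25 degrees apart rather than 335.
        static float AngleDifference(float from, float to)
        {
            float diff = (to - from) % 360f;   // C# '%' keeps the sign of the dividend
            if (diff < -180f) diff += 360f;
            if (diff >= 180f) diff -= 360f;
            return diff;                       // sign tells you which way to turn
        }

    Using Math.Atan2(dy, dx) instead of Math.Atan(Math.Abs(dy / dx)) followed by quadrant if-chains would also remove several places where a sign can go wrong, though that is a separate change.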

    Read the article

  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many situations a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First I thought about sorting objects by depth; this simply fails because objects are not simple shapes, they may have curves and may loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but that might also fail (though I'm not sure how I would implement it); there is a rare case that again causes a problem, in which two triangles pass through each other, and again no one can tell which one is nearer. The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned, but now we get another problem. Since objects might be transparent, more than one object can be visible in a single pixel, so for which object should I store the pixel depth? I then thought I could store only the depth of the front-most object and use that to determine how to blend later draw calls at that pixel. But again there was a problem: think of two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane (note that I was going to merge every two planes until there is only one color left for that pixel). Obviously I can't use sorting methods either, for the same reasons I explained above. Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and composite the final output. But this time I don't know how to implement that algorithm.
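    For reference, the most common baseline (not a full answer to the cases above) is to draw opaque geometry first with depth writes on, then sort only the transparent objects back-to-front by distance to the camera and draw them with depth writes off. A minimal sketch of that per-object sort, with hypothetical types:

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class TransparentItem
        {
            public Vector3 Center;   // hypothetical per-object pivot used only for sorting
        }

        static class BlendOrder
        {
            // Sort transparent objects back-to-front relative to the camera.
            // This per-object approximation is exactly what breaks down for the
            // interpenetrating cases described above, which is why per-pixel
            // techniques (depth peeling, per-pixel linked lists) exist.
            public static void SortBackToFront(List<TransparentItem> items, Vector3 cameraPos)
            {
                items.Sort((a, b) =>
                    Vector3.DistanceSquared(cameraPos, b.Center)
                        .CompareTo(Vector3.DistanceSquared(cameraPos, a.Center)));
            }
        }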

    Read the article

  • Could someone explain why my world position reconstructed from depth is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
            (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));
        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane and CameraPlanes.y is the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture has format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D Update: screenshot of the world position: http://imgur.com/sSlHd So you can see it looks nearly correct, but as the camera moves the colours (positions) change, which they shouldn't. I can get this to work if I write the following into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x

    However, this kills the hardware z-buffer optimizations.
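    One way to narrow this down is to check the linearization on the CPU. A minimal C# sketch (assuming a standard OpenGL perspective projection; this is not the poster's code) of converting a [0,1] depth-buffer sample into a near-to-far interpolation factor:

        // Convert a [0,1] depth-buffer sample back to linear view-space distance,
        // then to a [0,1] factor between the near and far planes - the value one
        // would feed into mix(nearWorldPos, farWorldPos, t).
        static float DepthToLerpFactor(float depthSample, float near, float far)
        {
            float zNdc = depthSample * 2.0f - 1.0f;                               // [0,1] -> [-1,1]
            float viewZ = (2.0f * near * far) / (far + near - zNdc * (far - near));
            return (viewZ - near) / (far - near);                                 // 0 at near, 1 at far
        }

    If the factor produced this way differs from what the shader computes, the shader's linearization (rather than the frustum interpolation) is the first thing to look at.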

    Read the article

  • Well-tested libraries for player ratings?

    - by Lucky
    It's common in games to implement some sort of numerical ranking system -- the ELO system is usually used in chess. I could implement this system naively using Wikipedia's descriptions, but I suspect that this would open up a whole box of problems that have already been solved: rating inflation, etc. -- for instance, the ELO system has a K constant that's 'fudged' according to rating, duration, pairings, statistics, ... What are some libraries (I'm looking at Python, but anything is okay) that implement rating systems? It also doesn't have to be ELO.
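    For context, the core Elo update itself is only a few lines; what a library adds is the tuning around it (provisional ratings, inflation control, per-situation K values). A minimal sketch of the basic update, shown here in C#:

        using System;

        static class Elo
        {
            // Basic Elo update: expected score from the logistic curve, then a
            // K-weighted adjustment. scoreA is 1 for a win, 0.5 for a draw, 0 for a loss.
            public static (double newA, double newB) Update(double ratingA, double ratingB,
                                                            double scoreA, double k = 32)
            {
                double expectedA = 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
                double expectedB = 1.0 - expectedA;
                double scoreB = 1.0 - scoreA;
                return (ratingA + k * (scoreA - expectedA),
                        ratingB + k * (scoreB - expectedB));
            }
        }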

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send a player's xyz information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, most of the time the cheater won't receive tactical information. Why not do this?

    Read the article

  • C# XNA Make rendered screen a Texture2D

    - by redcodefinal
    I am working on a cool little city generator which makes cities in the isometric perspective. However, a problem arose: if the grid size was over a certain limit, it would have awful lag. I found the main problem to be in the draw method, so I took the precautionary step of rendering only items that were on screen. This helped with the lag, but not by much. The idea I have is to render the frame once, take a snapshot, and then display that as a Texture2D on screen. That way I don't have to render 1,000,000 objects every frame, since they don't change anyway. TL;DR - I want to take a snapshot of an already rendered frame, turn it into a Texture2D, and render that to the screen instead of all the objects. Any help appreciated.
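    A minimal sketch of that approach in XNA 4.0 (the method name RebuildCityCache is made up, and the snippet assumes it lives inside the usual Game subclass with a spriteBatch field): render the city once into a RenderTarget2D, which is itself a Texture2D, and draw that texture every frame until the city actually changes.

        RenderTarget2D cityCache;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            cityCache = new RenderTarget2D(GraphicsDevice,
                GraphicsDevice.PresentationParameters.BackBufferWidth,
                GraphicsDevice.PresentationParameters.BackBufferHeight);
        }

        void RebuildCityCache()
        {
            GraphicsDevice.SetRenderTarget(cityCache);   // draw into the texture
            GraphicsDevice.Clear(Color.Transparent);
            spriteBatch.Begin();
            // ... draw every city tile here, exactly as the old Draw method did ...
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);        // back to the backbuffer
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            spriteBatch.Draw(cityCache, Vector2.Zero, Color.White);   // one quad instead of a million tiles
            spriteBatch.End();
            base.Draw(gameTime);
        }

    RebuildCityCache would only be called when the city is generated or edited, not every frame.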

    Read the article

  • How do I prevent a KActor from changing the orientation of its Z-Axis?

    - by Almo
    So I have an object that inherits from KActor which I would like to behave as a dynamic physics object, but I want its Z-axis to remain upright, and very stiffly so. I've tried the bStayUpright flag that triggers the "Stay Upright Spring". The problem is that it's a spring, and the object in question oscillates into position when I want it to remain oriented properly without wobbling. In my screenshot, the yellow block has fallen onto the gray box, and it is currently pivoting about the contact point as it tries to right itself. Should I be tweaking the StayUprightMaxTorque and StayUprightTorqueFactor parameters, or should I be using a Constraint of some sort?

    Read the article

  • Creating Rectangle-based buttons with OnClick events

    - by Djentleman
    As the title implies, I want a Button class with an OnClick event handler. It should fire off connected events when it is clicked. This is as far as I've made it:

        public class Button
        {
            public event EventHandler OnClick;

            public Rectangle Rec { get; set; }
            public string Text { get; set; }

            public Button(Rectangle rec, string text)
            {
                this.Rec = rec;
                this.Text = text;
            }
        }

    I have no clue what I'm doing with regard to events. I know how to use them, but creating them myself is another matter entirely. I've also made buttons that work on a case-by-case basis without using events. So basically, I want to be able to attach methods to the OnClick EventHandler that will fire when the Button is clicked (i.e., when the mouse intersects Rec and the left mouse button is pressed).
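    A minimal sketch of the missing piece (an Update method added to the Button class; XNA's MouseState is assumed): poll the mouse each frame and raise OnClick when the left button transitions to pressed while the cursor is inside Rec.

        public void Update(MouseState current, MouseState previous)
        {
            bool justClicked = current.LeftButton == ButtonState.Pressed
                            && previous.LeftButton == ButtonState.Released;

            if (justClicked && Rec.Contains(current.X, current.Y))
            {
                // Standard event-raising pattern: anything attached with += gets called.
                if (OnClick != null)
                    OnClick(this, EventArgs.Empty);
            }
        }

    Handlers then attach in the usual way, e.g. button.OnClick += (sender, e) => { /* respond to the click */ };, and the game's Update passes in Mouse.GetState() along with the previous frame's state.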

    Read the article

  • A* pathfinding endless loop

    - by PoeHaH
    I have implemented the A* algorithm. Sometimes it works; sometimes it doesn't and goes into an endless loop. After days of debugging and googling, I hope you can come to the rescue. This is my code for the algorithm:

        public ArrayList<Coordinate> findClosestPathTo(Coordinate start, Coordinate goal) {
            ArrayList<Coordinate> closed = new ArrayList<Coordinate>();
            ArrayList<Coordinate> open = new ArrayList<Coordinate>();
            ArrayList<Coordinate> travelpath = new ArrayList<Coordinate>();
            open.add(start);
            while (open.size() > 0) {
                Coordinate current = searchCoordinateWithLowestF(open);
                if (current.equals(goal)) {
                    return travelpath;
                }
                travelpath.add(current);
                open.remove(current);
                closed.add(current);
                ArrayList<Coordinate> neighbors = current.calculateCoordAdjacencies(true, rowbound, colbound);
                for (Coordinate n : neighbors) {
                    if (closed.contains(n) || map.isWalkeable(n)) {
                        continue;
                    }
                    int gScore = current.getGvalue() + 1;
                    boolean gScoreIsBest = false;
                    if (!open.contains(n)) {
                        gScoreIsBest = true;
                        n.setHvalue(manhattanHeuristic(n, goal));
                        open.add(n);
                    } else {
                        if (gScore < n.getGvalue()) {
                            gScoreIsBest = true;
                        }
                    }
                    if (gScoreIsBest) {
                        n.setGvalue(gScore);
                        n.setFvalue(n.getGvalue() + n.getHvalue());
                    }
                }
            }
            return null;
        }

    What I have found out is that it always fails whenever there's an obstacle in the path; if I run it on 'open terrain', it seems to work. It seems to be affected by this part: || map.isWalkeable(n) -- though the isWalkeable function itself seems to work fine. If additional code is needed, I will provide it. Your help is greatly appreciated, thanks :)
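    For comparison, here is a minimal C# sketch of the standard A* loop on a 4-connected grid (the isWalkable delegate and grid bounds are assumptions, not the poster's API). Two things differ from the code above: neighbours are skipped when they are not walkable, and the path is rebuilt from parent links once the goal is reached instead of appending every expanded node.

        using System;
        using System.Collections.Generic;

        static class AStarSketch
        {
            public static List<(int x, int y)> FindPath(
                (int x, int y) start, (int x, int y) goal,
                int width, int height, Func<int, int, bool> isWalkable)
            {
                var open = new List<(int x, int y)> { start };
                var closed = new HashSet<(int x, int y)>();
                var parent = new Dictionary<(int x, int y), (int x, int y)>();
                var g = new Dictionary<(int x, int y), int> { [start] = 0 };

                int H((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);

                while (open.Count > 0)
                {
                    // Pick the open node with the lowest f = g + h (linear scan for clarity).
                    var current = open[0];
                    foreach (var node in open)
                        if (g[node] + H(node) < g[current] + H(current)) current = node;

                    if (current == goal)
                    {
                        // Walk the parent links back to the start to build the path.
                        var path = new List<(int x, int y)> { current };
                        while (parent.TryGetValue(current, out var p)) { current = p; path.Add(current); }
                        path.Reverse();
                        return path;
                    }

                    open.Remove(current);
                    closed.Add(current);

                    foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                    {
                        var n = (x: current.x + dx, y: current.y + dy);
                        if (n.x < 0 || n.y < 0 || n.x >= width || n.y >= height) continue;
                        if (!isWalkable(n.x, n.y) || closed.Contains(n)) continue;

                        int tentativeG = g[current] + 1;
                        if (!g.TryGetValue(n, out int oldG) || tentativeG < oldG)
                        {
                            g[n] = tentativeG;
                            parent[n] = current;
                            if (!open.Contains(n)) open.Add(n);
                        }
                    }
                }
                return null;   // no path exists
            }
        }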

    Read the article

  • Any way to edit Warcraft MDX or MDL Animated models?

    - by Aralox
    I have been searching for a while for a way to get an animated MDL or MDX model into any 3D animation software (such as Blender), but so far I have not had any success. I found a few methods for getting textured static MDX or MDL models into Blender/Milkshape/Hexagon, but no one seems to have written an importer that deals with the MDL/MDX model's keyframe animation. On that note, if anyone knows of a way of importing a keyframe-animated 3DS model into Blender, a lot of people and I would appreciate it if you could let us know. Thanks for any help! :) PS: For anyone curious about static MDL or MDX import into Blender, see here: http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/WarCraft_MDL

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map in Android. I load the corresponding bitmap for each tile (just once) and then create the tiles. I have designed a class to do this:

        public class VertexQuad {
            private float[] mCoordArr;
            private float[] mColArr;
            private float[] mTexCoordArr;
            private int mTextureName;
            private static short mCounter = 0;
            private short mIndex;

    As you can see, each tile has its x,y location, a color array, texture coordinates, and a texture name. Now I want to render all my created tiles. To reduce the OpenGL API calls (I read somewhere that state changes are costly, so I want to keep them to a minimum), I first want to hand ALL the coordinate arrays, color arrays, and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts all tiles with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.glDrawElements() with the corresponding index buffer to draw all tiles with the associated texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle costs a lot of time. I just ran a test where I wanted to put 400 coordinates into a FloatBuffer, which took about 200 ms. My questions are: Is there a better way of handling the coordinate and color structures? How is this done correctly? This is obviously not the optimal way. ;) Thanks in advance, regards Markus

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap.

    Standard behavior tree: each execution tick the tree is traversed from the root in depth-first order, so the execution order is implicitly expressed by the tree structure. In the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT: during the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones each update. That first traversal implicitly produces a depth-first ordered queue in the scheduler. Non-leaf nodes stay suspended most of the time; when a leaf node terminates (either with success or fail status), its parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler. Without parallel nodes there will be at most one task running in the scheduler, and the tasks in the queue (excluding dynamic-priority implementations) will always be ordered depth-first (is this right?).

    Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure, though): the result of the traversal should be independent of which implementation strategy is used, and the traversal result must be deterministic. I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        -->Sequence_1
        ---->leaf_A
        ---->leaf_B
        -->leaf_C

    Considering a FIFO scheduler policy, before leaf_A terminates the tasks in the scheduler are: P1(suspended), S1(suspended), leaf_A(running), leaf_C(running). When leaf_A terminates, leaf_B is scheduled at the end of the queue, so the queue becomes: P1(suspended), S1(suspended), leaf_C(running), leaf_B(running). In this case leaf_B is executed after leaf_C on every update, whereas with a non-event-driven traversal from the root, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame-independent movement because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel". But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a more detailed explanation of how frame-independent movement works. Thank you.
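    The two statements are the same rule applied to different frame lengths: distance moved this frame = speed x (seconds this frame took). The tutorial's division by 1000 is just converting SDL's millisecond timer into seconds, not a fixed 1/1000 s step. A minimal sketch of the loop (C# shown here; the C++/SDL version measures the delta with SDL's timer instead of a Stopwatch):

        using System;
        using System.Diagnostics;

        class MovementDemo
        {
            static void Main()
            {
                const float speed = 200f;   // pixels per second
                float x = 0f;

                var timer = Stopwatch.StartNew();
                while (x < 800f)   // stand-in for the real game-loop condition
                {
                    float dt = (float)timer.Elapsed.TotalSeconds;   // seconds since the last frame
                    timer.Restart();

                    // At 200 fps dt is about 1/200 s, so this moves 1 px; at 50 fps it moves 4 px.
                    x += speed * dt;
                    // ... render the dot at x, poll input ...
                }
                Console.WriteLine(x);
            }
        }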

    Read the article

  • Random Position between ranges.

    - by blakey87
    Does anyone have a good algorithm for generating a random y position for spawning a block, one that takes into account a minimum and maximum height so the player can jump onto the block? Blocks will be spawned continually, so the player must always be able to jump onto the next block, bearing in mind that the minimum position would be the ground and the maximum would be the player's jump height, limited by the ceiling.
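    A minimal sketch of one way to do it (C#, assuming screen coordinates where y grows downward, so "higher" means a smaller y; all parameter names are made up): clamp the reachable band between the previous block and the ceiling, then pick uniformly inside it.

        using System;

        static class BlockSpawner
        {
            // previousY: y of the last block; maxJump: how far above it the player can reach;
            // groundY: lowest landable y; ceilingY: highest allowed y (ceilingY < groundY here).
            public static float NextBlockY(Random rng, float previousY, float maxJump,
                                           float groundY, float ceilingY)
            {
                float highestReachable = Math.Max(ceilingY, previousY - maxJump);
                float lowest = groundY;
                return highestReachable + (float)rng.NextDouble() * (lowest - highestReachable);
            }
        }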

    Read the article

  • Why are trees shining in the background?

    - by Kinected
    Currently I am creating a forest scene in the dark, and the trees are shining far away, but when I get close they are fine. I have the shaders set to "Nature/Tree Soft Occlusion [bark/leaves]", but the trees still render strangely at a distance; up close they are fine. I tried placing the trees in a folder named "Ambient-Occlusion" as suggested here, but no luck. Fog is also turned off. Thanks in advance.

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I cannot get a basic idea of how to get started. I have various options, such as learning OpenGL ES, UDK, or Unity3D, but I want users to create models (e.g. architecture) in my app and then render them when they are finished modeling. I do not know whether UDK or Unity3D would let me design models and then render them in the same app, with various effects, on the iPhone/Android. (Note: if you find this question unclear, please ask; I may have skipped some vital information.)

    Read the article

  • How can I make Maya export a mesh as double-sided?

    - by bobobobo
    I'm exporting from Maya 2009 to OBJ. The mesh I'm exporting has "Double Sided" checked in its Render Stats, but when the polygon is exported, only a single side is actually exported. What really needs to happen is that for each double-sided polygon, two polygons are exported, facing in opposite directions. I can do this manually, but is there a way to make the OBJ exporter do it for me?

    Read the article

  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand tutorial 16 at http://ogldev.atspace.co.uk, which requires the image processing library ImageMagick. But when I run the tutorial, I encounter the following error:

        freeglut: failed to change scree settings
        Error loading textures 'test.png': no decode delegates for this image format
        'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552

    I searched Google and found that my ImageMagick library might not have a PNG delegate, but when I check ImageMagick's configuration, PNG appears in its delegate list. Command line: convert -configure Result:

        LIB_VERSION 0x687
        DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib

    Could you explain this error to me? Thanks so much!

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites, but I've not found an answer; the only ones I did find did not work. My question is: how do I get UV coords out of Blender (2.63)? Currently I'm writing my own custom file exporter and so far have managed to export vertices and their normals. Is there a way to export the UV coords as well? N.B. I'm currently trying to figure it out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking.

    The framebuffer objects: FBO 1 has a color attachment and a depth attachment; FBO 2 has a color attachment.

    The render passes:
    1. Render the g-buffer: normals and depth (used by the outline and DoF blur shaders); output to FBO no. 1.
    2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO no. 2 (all of which can render via a single fragment shader -- I think).
    3. (optional) DoF-blur the scene; output to the default framebuffer, OR ELSE render FBO 2 directly to the default framebuffer.
    4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article

  • About floating point precision and why we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains what happens behind the scenes and offers the obvious alternative: fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are fixed-point numbers so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
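    A quick illustration of the problem the article describes, for anyone who hasn't run into it: with 32-bit floats the spacing between representable values grows with magnitude, so small movements far from the origin simply disappear.

        using System;

        class FloatPrecisionDemo
        {
            static void Main()
            {
                float bigPosition = 10_000_000f;          // e.g. 10,000 km from the origin at metre units
                float moved = bigPosition + 0.1f;         // try to move 10 cm
                Console.WriteLine(moved == bigPosition);  // True: the step is smaller than the float spacing here
                Console.WriteLine(16_777_216f + 1f);      // 2^24 + 1 rounds back down to 2^24
            }
        }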

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, though I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-handed system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great; really, any thoughts would be appreciated.

    Read the article

  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    OK, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA, which I can do fine. It is a simple round circle which I made in Photoshop. The problem is that the edges show up rough when I draw it on the screen, even at the exact original size; the anti-aliasing is missing. I'm sure I'm missing something very simple:

        GraphicsDevice.Clear(Color.Black);

        // TODO: Add your drawing code here
        spriteBatch.Begin();
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    I couldn't post a picture because I'm a first-time poster, but my smooth PNG circle has rough edges. I found that if I use:

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied);

    I get a smooth image when it is drawn at the same size as the original PNG, but if I scale that image up or down, the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without getting the rough edges?
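    Two things worth checking (a sketch, not a guaranteed fix): pass an explicit linear-filtering sampler state to Begin so scaled draws are interpolated, and enable mipmap generation on the texture in the content processor so downscaling has pre-filtered levels to sample from.

        spriteBatch.Begin(SpriteSortMode.Deferred,
                          BlendState.NonPremultiplied,
                          SamplerState.LinearClamp,   // bilinear filtering when the sprite is scaled
                          null, null);
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    If the rough edges persist when scaling down a long way, drawing from a larger source image (or one with mipmaps) is usually what smooths them.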

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few faces pointing in the wrong direction (for example, a 3D roof where a few triangles face the inside of the building). I want to repair those automatically. We can make several assumptions about these models: they are completely closed, without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model; from the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?

    Read the article

  • JavaScript A* path finding

    - by Veyha
    I am trying to learn A* pathfinding. I am using this library: https://github.com/qiao/PathFinding.js But there is one thing I don't understand how to do. To find a path from player.x/player.y (both 0) to 10/10, I use this code:

        var path = finder.findPath(player.x, player.y, 10, 10, grid);

    This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this:

        path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]
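    One common way to consume the array (sketched here in C# with a made-up Player type, but the idea ports directly to JavaScript): treat each [x, y] pair as a waypoint, move toward the current waypoint a little each frame, and advance the index on arrival.

        using System;

        class Player { public float X, Y; }

        static class PathFollower
        {
            // Returns the updated waypoint index; call once per frame with the
            // distance the player may cover this frame (speed * dt).
            public static int Follow(Player player, int[][] path, int index, float step)
            {
                if (index >= path.Length) return index;   // path finished
                float dx = path[index][0] - player.X;
                float dy = path[index][1] - player.Y;
                float dist = (float)Math.Sqrt(dx * dx + dy * dy);
                if (dist <= step)
                {
                    player.X = path[index][0];
                    player.Y = path[index][1];
                    return index + 1;                      // snap to the waypoint, move on to the next
                }
                player.X += dx / dist * step;
                player.Y += dy / dist * step;
                return index;
            }
        }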

    Read the article
