Search Results

Search found 1313 results on 53 pages for 'distance'.


  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that each point can be placed at different intervals along the spline, two points in front of or behind the object could both be closer to the object. The other problem is figuring out the proportion between the two points it lies between, i.e. if point a is at coordinate (0,0,0) and point b is at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be '0.5' (as it is an equal distance from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
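    One standard technique for the proportion question (my suggestion, not from the post) is to project the object's position onto the segment between the two cached points; the perpendicular offset then drops out on its own. A minimal sketch using System.Numerics:

        using System;
        using System.Numerics;

        static class SplineMath
        {
            // Projects p onto the segment ab and returns the proportion t in [0,1]
            // along the segment (0 = at a, 1 = at b), ignoring perpendicular offset.
            public static float ProportionAlongSegment(Vector3 a, Vector3 b, Vector3 p)
            {
                Vector3 ab = b - a;
                float t = Vector3.Dot(p - a, ab) / Vector3.Dot(ab, ab);
                return Math.Clamp(t, 0f, 1f);
            }
        }

    For a = (0,0,0) and b = (1,0,0) this returns 0.5 both for (0.5,0,0) and for (0.5,3,0), since the y offset is perpendicular to the segment. For the "which two points" question, a common shortcut is to start the search from the segment found on the previous frame, since the character moves continuously along the spline.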


  • Disappearing instances of VertexPositionColor using MonoGame

    - by Rosko
    I am a complete beginner in graphics development with XNA/MonoGame. I started my own project using MonoGame 3.0 for WinRT. I have this inexplicable issue where some of the vertices disappear while doing some updates on them. Basically, it is a game with balls that collide with the walls and with each other, and in certain conditions they explode. When they explode they disappear. Here is a video demonstrating the issue. I used wireframes so that it is easier to see how vertices go missing. The perfectly exploding balls are the ones that result from user input by mouse clicking. I draw user primitives with triangle strips like this:

        graphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, circleVertices, 0, primitiveCount);

    All of the primitives are in the z-plane (z = 0), so I thought it was culling in action. I tried setting the culling mode to none but it did not help. Here is the code responsible for the explosion:

        private void Explode(GameTime gameTime, ref List<Circle> circles)
        {
            if (this.isExploding)
            {
                for (int i = 0; i < this.circleVertices.Length; i++)
                {
                    if (this.circleVertices[i] != this.circleCenter)
                    {
                        if (Vector3.Distance(this.circleVertices[i].Position, this.circleCenter.Position) < this.explosionRadius * precisionCoefficient)
                        {
                            var explosionVector = this.circleVertices[i].Position - this.circleCenter.Position;
                            explosionVector.Normalize();
                            explosionVector *= explosionSpeed;
                            circleVertices[i].Position += explosionVector * (float)gameTime.ElapsedGameTime.TotalSeconds;
                        }
                        else
                        {
                            circles.Remove(this);
                        }
                    }
                }
            }
        }

    I'd be really grateful if anyone has suggestions about how to fix this issue.


  • Microsoft WPC 12 – Predictions

    - by D'Arcy Lussier
    Let me start by saying I have absolutely no inside knowledge, neither through the MVP program nor any other means, that is fuelling what I'm about to write. This is entirely conjecture, fuelled by speculation and too much Sapporo beer at a fantastic Japanese restaurant tonight. Still, I present to you… D'Arcy's Worldwide Partner Conference 2012 Predictions!!!

    So what can we expect to be announced at this year's WPC? Much more than last year, I'm hoping! Last year was sort of encouraging the troops to carry on with the Windows 7 messaging even with Windows 8 looming in the distance. It also showed Microsoft's slant towards Private Cloud in addition to Azure. This year, we're going to see a shift to a battle cry – Windows 8 is Coming, Windows 8 is Coming! I expect we're going to hear an RTM date for Windows 8 from Steve Ballmer tomorrow, in addition to dates surrounding Windows Server 2012. We'll also hear some announcement around Windows Phone 8, but I'm not really sure what – that whole piece is still quite muddy; are we going to actually *see* Windows Phone 8 devices this week? That would be great, but I imagine those types of announcements might be left for Build. Speaking of Build, I'm expecting an announcement of a date for a Build conference this fall, probably late October.

    If any announcements are going to be made around Office 15, the schedule isn't hinting at it. In fact, other than Office 365 there's not much mention of Office in the conference sessions – either a red herring, or a sign that Microsoft has another announcement coming later. The tagline of the conference is "A New Era. Together." It's obvious Microsoft wants to leverage WPC to rally their partners to carry the Windows 8 banner into the field of battle this fall when it ships.

    D


  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores (traffic lights). For each semaphore I generate a red light time and a green light time, represented with the syntax [R:T1, G:T2], for example:

             119                 185                 250
        A ------- B: [R:6, G:4] ------ C: [R:5, G:5] ------ D

    I want to calculate a car's travel time from A to D. Now I do this with this pseudo code:

        function get_travel_time(semaphore_configurations) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += (cost / movement_speed);  // movement_speed = 50px per second

                light_times = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times);  // e.g. [R,R,R,G,G,G,G], where [R:3, G:4]
                lights_sum = light_times.green_time + light_times.red_time;  // lights cycle time

                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_time;
                }
            }
            return time;
        }

    So for the distance 119 between A and B the travel time is 119/50 = 2.38s (the exactly measured time is between 2.5s and 2.6s), and then we add time if we arrive at a red light at B. Whether we arrived at a red light is calculated with these lines:

        lights_cycle = get_lights_cycle(light_times);
        lights_sum = light_times.green_time + light_times.red_time;
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_time;
        }

    This pseudo code doesn't calculate exactly the same times as measured, but the calculations are very close to them. Any idea how I would calculate this?
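    One thing worth noting (my reading, not from the post): the cycle above is indexed by distance (cost % lights_sum) rather than by arrival time, and arriving on red rarely costs the full red duration, only the remainder of it. A minimal sketch that works from the accumulated arrival time instead, assuming each cycle starts red at t = 0 (names are illustrative):

        // Compute the wait at a light from the arrival time, assuming the
        // cycle starts red at t = 0 (add a per-light phase offset if needed).
        static double WaitAtLight(double arrivalTime, double redTime, double greenTime)
        {
            double cycle = redTime + greenTime;
            double phase = arrivalTime % cycle;              // where in the cycle we arrive
            return phase < redTime ? redTime - phase : 0.0;  // wait out the rest of the red
        }

        // Usage, accumulated along the path:
        //   time += cost / movementSpeed;
        //   time += WaitAtLight(time, red, green);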


  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED or BLACK based on a certain criterion.

    (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed. (Note: although this figure shows a tree forming, it may well happen that the subgraph contains loops.)

    (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result.

    Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks:

    - It would be nice for the triangulation algorithm to take into account a certain vertex priority when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first; I can implement such a routine iteratively, but it's O(n^2)).
    - It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between the nodes that were "closer" to each other given the start topology.

    Nevertheless, disregarding the remarks, is there an already known scenario similar to this one where a triangulation is built upon a partially given set of triangles/edges?


  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view is presenting things? I always end up giving the model a position that holds x and y, but invariably these values end up being in units of pixels, and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to? This idea was suggested:

        Don't think of it in units of pixels; think of them as arbitrary distance units that just happen to map to pixels at a 1:1 ratio. Oh, the resolution is half of what it was? We now take the x/y coordinates at 50% value for screen display, and your spell's casting range is still 300 units long, which is now 150 pixels.

    But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating point is unsafe. I think allowing decimal places would eventually cause really weird bugs in my game.

    *It'd let me write the model once and write different views depending on the device.
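    The mapping the suggestion describes usually lives entirely in the view, so the model never sees a pixel. A minimal sketch (the names here are mine, not from the post):

        // The model stores positions in world units only.
        struct WorldPosition { public float X, Y; }

        // The view owns the world-to-pixel conversion.
        class View
        {
            private readonly float pixelsPerUnit; // e.g. 1.0f at full resolution, 0.5f at half

            public View(float pixelsPerUnit) { this.pixelsPerUnit = pixelsPerUnit; }

            public (int px, int py) ToScreen(WorldPosition p)
            {
                // Round once, at the last possible moment, so fractional pixel
                // positions never leak back into the model.
                return ((int)MathF.Round(p.X * pixelsPerUnit),
                        (int)MathF.Round(p.Y * pixelsPerUnit));
            }
        }

    Rounding only at display time is what keeps the decimal-places worry out of the model: game logic runs in exact world units, and any fraction exists only for the one frame being drawn.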


  • Dynamic Jump spot

    - by Pasquale Sada
    I have an initial velocity V(Vx,Vy,Vz) and a spot where the character stands still at S(Sx,Sy,Sz). What I'm trying to achieve is a jump onto a spot E(Ex,Ey,Ez) that you have clicked on (only a lower or higher spot, because I have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that can make him jump precisely onto a spot, but you need to declare an angle: the problem arises when the selected spot is straight above his head. It's pretty lame that the char hangs there and can't reach a thing that is 1 cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position;  // get target direction
        float h = dir.y;                            // get height difference
        dir.y = 0;                                  // retain only the horizontal direction
        float dist = dir.magnitude;                 // get horizontal distance
        float a = angle * Mathf.Deg2Rad;            // convert angle to radians
        dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                   // correct for small height differences
        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;
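    For what it's worth, the formula degenerates as the horizontal distance goes to zero (the tangent terms and the divide blow up). One possible fix, offered as an assumption rather than a verified solution, is to special-case near-vertical jumps, where reaching a height h only needs the standard ballistic speed v = sqrt(2gh):

        // Sketch of a special case for (near-)vertical jumps; names mirror the
        // snippet above, but this branch is my addition, not the poster's code.
        if (dist < 0.01f) // target is (almost) straight above or below
        {
            if (h <= 0f) return Vector3.zero;  // at or below the feet: no jump needed
            float vUp = Mathf.Sqrt(2f * Physics.gravity.magnitude * h);
            return vUp * Vector3.up;           // straight up, just fast enough to reach h
        }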


  • What's the best way to move cars along roads

    - by David Thielen
    I am implementing a car movement game (sort of like Locomotion), so 60 times a second I have to advance the movement of each car. The problem is I have to look ahead to see if there is a slower car, stop sign, or red light ahead, and then slow down appropriately. I also want the cars to take time to go from stopped to full speed, and again to slow down. I'm not implementing full-blown physics, just a tick-by-tick speed up/slow down, as that provides most of the realism needed to match what people expect to see. The best I've come up with is to walk out the full distance the car would travel if it was slowing to a stop and see if anywhere along that path it needs to slow down or stop, and then move it forward appropriately. I am moving the cars 60 times a second, so I need this to be fast, and walking out that whole path each tick strikes me as processor intensive. What's the best way to do this?
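    The look-ahead distance doesn't have to be walked out step by step: with a constant deceleration a it's just the closed-form stopping distance v²/(2a). A minimal sketch (names are mine, not from the post):

        // Only scan as far as the current stopping distance, assuming constant
        // deceleration; this bounds the look-ahead without walking the path.
        static float StoppingDistance(float speed, float deceleration)
        {
            return speed * speed / (2f * deceleration); // v^2 / (2a)
        }

        // Each tick: check for obstacles within StoppingDistance(car.Speed, car.Decel);
        // if one is found, reduce speed this tick, otherwise accelerate toward the limit.

    A slow car ahead then only needs a distance check against the next car in the same lane, which is cheap if cars are kept in per-lane order.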


  • Implementing AS 2.0 with FSM?

    - by Up2u
    I have seen many references on AI and FSMs, like http://www.richardlord.net/blog/fini...n-actionscript, and sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project that also has an AI. The AI has 3 states, which I call: check distance, chase target, and hit the target. The game I'm creating is an FPS played with the mouse. I have already created the AI (successfully), but I want to convert it to the FSM approach. I created a function CheckDistanceState(), in which I lock onto the target using an array sorted by nearest distance; locking on triggers the function ChaseState(), and inside ChaseState() I call the Hit() function to destroy the enemy. I call these 3 functions in AI_cursor.onEnterFrame (an FPS game that only has a cursor on the stage). Is there any chance to implement an FSM in my code without creating classes? From what I've read before, creating a class means creating external code outside of the frame (I'm used to coding in frames), and I still don't understand it. Sorry if my explanation isn't clear.
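    An FSM doesn't require one class per state; the smallest useful form is a state variable plus a switch that runs every frame. A sketch in C# purely for illustration (in AS 2.0 the same shape works with a string state field and the body living in onEnterFrame):

        // Minimal FSM sketch: one enum and one switch, no per-state classes.
        enum AiState { CheckDistance, Chase, Hit }

        class EnemyAi
        {
            private AiState state = AiState.CheckDistance;

            // Call once per frame (the AS2 equivalent would live in onEnterFrame).
            public void Update()
            {
                switch (state)
                {
                    case AiState.CheckDistance:
                        // find the nearest target; once locked on, transition:
                        state = AiState.Chase;
                        break;
                    case AiState.Chase:
                        // move toward the target; once in range, transition:
                        state = AiState.Hit;
                        break;
                    case AiState.Hit:
                        // apply damage, then go back to scanning:
                        state = AiState.CheckDistance;
                        break;
                }
            }
        }

    The per-state-class version buys you pluggable states and per-state data, but for three states the switch form is a perfectly legitimate FSM.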


  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices:

    The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            | a 0 0 0 |
        W = | 0 a 0 0 |
            | 0 0 a 0 |
            | 0 0 0 1 |

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

            | 1 0 0 0  |
        C = | 0 1 0 0  |
            | 0 0 1 10 |
            | 0 0 0 1  |

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

            | 1 0 0   0 |
        P = | 0 1 0   0 |
            | 0 0 1   0 |
            | 0 0 1/d 0 |

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form:

            | x |
        V = | y |
            | z |
            | 1 |

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
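    Tracing a single vertex through that chain (my arithmetic, using the matrices exactly as written above): for V = (x, y, z, 1), W gives (14.1x, 14.1y, 14.1z, 1); C then gives (14.1x, 14.1y, 14.1z + 10, 1); and P leaves x, y, z unchanged while setting w = (14.1z + 10)/d = 14.1z + 10. After the divide, the screen coordinates are x' = 14.1x / (14.1z + 10) and y' = 14.1y / (14.1z + 10). One thing worth checking against this trace: any vertex with 14.1z + 10 <= 0 (i.e. at or behind the eye) flips sign on the divide, and a renderer without near-plane clipping can produce exactly this kind of garbled output.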


  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places regarding setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

        var transParentOp = new BlendStateDescription
        {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseDestinationAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };

    I've made up a sample that displays 3 meshes A, B and C, where each overlaps another. They are drawn sequentially, A to C, ordered by distance from the camera, where A is nearest and C is furthest. So the expected output is that A is see-through and shows part of B and C, and B is see-through and shows part of C. But what I get is that none of them are see-through in that order; if I move C closer to the camera, then it becomes semi-transparent and shows through A and B, and B, if moved closer to the camera, shows A but not C. Sort of reversed. So it seems that I need to draw them in reverse order, where the furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
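    For what it's worth, standard alpha blending is inherently order-dependent: the blend equation reads whatever color is already in the framebuffer, so transparent geometry is conventionally drawn back-to-front each frame. A minimal sketch of that sort (illustrative names, not the poster's types):

        // Draw opaque geometry first, then transparent meshes sorted far-to-near.
        var sorted = transparentMeshes
            .OrderByDescending(m => Vector3.DistanceSquared(m.Position, cameraPosition))
            .ToList();
        foreach (var mesh in sorted)
            DrawMesh(mesh);

    Order-independent alternatives (depth peeling, per-pixel linked lists) do exist, but they cost considerably more than a per-frame sort of a handful of meshes.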


  • Yelp, Google's API for restaurants help

    - by chris
    OK, I have looked into this, and I'm not sure if anyone else has experience with it. I'm having tremendous difficulties with Yelp's and Google's APIs. To help explain what I am trying to do, here is the concept of the website: we would pull restaurants based on the user's distance, and then randomize them based on the quality of the restaurant, using feedback from review websites (Yelp, Google, Urbanspoon, Zagat, OpenTable, Kudzu, Yahoo - it doesn't have to be from all of them) and feedback from our users (on the results page for the random restaurant, users can select good recommendation/bad recommendation). There's a lot we could calculate for our formula. Your results will depend on whether you're at home or at work. If you're at home you will have more time to drive out to the city to grab some dinner or lunch. If you're at work we would have to recommend restaurants nearby, as lunch is typically 30 minutes to an hour: a 30-minute lunch would most likely require take-out or quick service, while on an hour lunch break you could dine in at a local fine-dining restaurant. So in a nutshell: a user comes to the website, selects whether they're at home or at work, clicks submit, and we will have a random restaurant selected for them. If they don't like it they can click retry and a new restaurant will show. The issue I am having is using the APIs to gather all the restaurants in the US. I know it can be done, because there are similar websites/apps that pull the restaurants closest to you, such as Ness and Alfred (I believe there are two more, but I can't remember the names). Does anyone know if this can be accomplished?


  • Particle trajectory smoothing: where to do the simulation?

    - by nkint
    I have a particle system in which particles move toward a target, and new targets are received via the network. The list of new targets consists of noisy coordinates of a moving target stored on the server, which I want to smooth in the client. For the smoothing and the particles I wrote a simple particle engine with a standard Euler integration model. So, my pseudo code is something like this:

        # pseudo code
        class Particle:
            def update():
                # do Euler motion model integration:
                # if the distance to the target is more than a limit,
                # add a new force to the acceleration, seeking the target,
                # then add the acceleration to velocity and velocity to position
                positionHistory.push_back(position)
                if history.length > historySize:
                    history.pop_front()

        class ParticleEngine:
            particleById = dict()  # an associative array where the keys are the ids
                                   # and particle instances are stored as values

            # called each time a new tcp packet is received and parsed
            def setNetTarget(int id, Vec2D new_target):
                particleById[id].setNewTarget(new_target)

            # called each new frame
            def draw():
                for p in particleById.values:
                    p.update()
                    beginVertex(LINE_STRIP)
                    for v in p.positionHistory:
                        vertex(v.x, v.y)
                    endVertex()

    The new targets that arrive are noisy, but setting suitable acceleration/velocity parameters gives the particles smoothed trajectories. However, if a particle's trajectory is a circle, after a while the particle position converges to the center (a normal behaviour of the Euler integration model). So I decided to change the simulation and use some other interpolation (spline?) or smoothing method (Kalman filter?) between the targets. Something like:

        switch (INTERPOLATION_MODEL):
            case EULER_MOTION: ...
            case HERMITE_INTERPOLATION: ...
            case SPLINE_INTERPOLATION: ...
            case KALMAN_FILTER_SMOOTHING: ...

    Now my question: where should I write the motion simulation / trajectory interpolation? In the Particle, so I would have Particle subclasses like ParticleEuler, ParticleSpline, ParticleKalman, etc.? Or in the particle engine?
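    A third option besides subclassing the particle or hard-coding a switch in the engine is the strategy pattern: the particle owns one pluggable "motion model" object, so models can be swapped per particle at runtime. A sketch in C# for illustration (the shape translates directly to the poster's language):

        // Strategy sketch: the motion model is a pluggable object the particle owns.
        interface IMotionModel
        {
            // Advances one step; updates velocity in place and returns the new position.
            Vector2 Step(Vector2 position, ref Vector2 velocity, Vector2 target, float dt);
        }

        class EulerSeek : IMotionModel
        {
            public Vector2 Step(Vector2 position, ref Vector2 velocity, Vector2 target, float dt)
            {
                Vector2 accel = target - position;  // naive seek force toward the target
                velocity += accel * dt;
                return position + velocity * dt;
            }
        }

        class Particle
        {
            public IMotionModel Model = new EulerSeek(); // swap in spline/Kalman variants
            // update() delegates to Model.Step(...) instead of integrating inline
        }

    This keeps the engine ignorant of the model and avoids one subclass per integration scheme; a Kalman variant would simply be another IMotionModel carrying its own filter state.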


  • Why aren't my other two constant buffers being updated in the shader?

    - by Paul Ske
    I posted previously about two of my dynamic constant buffers not updating in the shader. The tessellation buffer isn't working, because I have to manually update the tessellation factor inside the hull shader. I believe the camera position isn't updating either, because when I perform distance adaptation the far edges are more tessellated than what's truly in front of the camera. I have all the buffers set to dynamic. Inside the render loop I have them set as:

        ID3D11Buffer *multiBuffers[3];
        devcon->VSSetConstantBuffers(0, 3, multiBuffers);
        ...
        devcon->DSSetConstantBuffers(0, 3, multiBuffers);

    I only got that from a DirectX sample. Inside the shader file I have the three cbuffer structs:

        cbuffer ConstantBuffer
        {
            float4x4 WorldMatrix;
            float4x4 viewMatrix;
            float4x4 projectionMatrix;
            float4x4 modelWorldMatrix; // the rotation matrix
            float3 lightvec;           // the light's vector
            float4 lightcol;           // the light's color
            float4 ambientcol;         // the ambient light's color
            bool isSelected;
        }

        cbuffer cameraBuffer
        {
            float3 cameraDirection;
            float padding;
        }

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding2;
        }

    Am I missing something, or does anyone know why my buffers wouldn't update in the shader file?


  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures being used to splat the cell. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches the other like cells and blending happens in the right order too. But even with this batching approach, it isn't uncommon, when looking out across an area of open terrain, to have a batch count between 1200-1700 depending upon how frequently textures differ or how much the texture blends differ between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and allow rendering to be extremely performance-friendly even under DirectX 9.0c? We considered using texture atlases, since we're targeting DirectX 9.0c & older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor-quality textures from a distance. How have others batched together terrain geometry such that one can splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
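    The grouping step described above is essentially keying cells by their texture set; a sketch of that bookkeeping, with invented names (TerrainCell, TextureIds) just to make the idea concrete:

        // Group visible cells by an order-independent key of their four splat
        // textures, so each dictionary bucket can be submitted as one batch.
        var batches = new Dictionary<string, List<TerrainCell>>();
        foreach (var cell in visibleCells) // after frustum culling
        {
            // Sort the four texture ids so A,B,C,D and D,C,B,A share a bucket
            // (blend maps are re-ordered to compensate, as described above).
            var ids = cell.TextureIds.OrderBy(id => id).ToArray();
            string key = string.Join(",", ids);
            if (!batches.TryGetValue(key, out var list))
                batches[key] = list = new List<TerrainCell>();
            list.Add(cell);
        }
        // One draw call (or a few) per batches entry instead of one per cell.

    Beyond that grouping, the usual levers on this class of hardware are fewer, larger cells (more vertices per draw) and merging the four splat weights into a single blend texture so the texture-state changes track the key, not the cell.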


  • More efficient in-range checking

    - by Mob
    I am going to use a specific example in my question, but overall it is pretty general. I use Java and libgdx. I have a ship that moves through space. In space there is debris that the ship can pull in with a tractor beam and harvest. Debris is stored in a list, and each object contains its own x and y values, so currently there is no way to find a piece of debris's location without first looking at the debris object. Now, at any given time there can be a huge number (1000+) of pieces of debris in space, and I figure that calculating the distance between the ship and every single piece of debris and comparing it to the maximum tractor beam length is rather inefficient. I have thought of dividing space into sectors, and having each sector contain a list of every object in it. This way I could check only nearby sectors. However, this essentially doubles the memory for the list. (I would reference the same objects, so it wouldn't double overall memory use. I am not a CS major, but I doubt this would be hugely significant.) This also means that any time an object moves it has to calculate which sector it is in, again not a huge problem. I also don't know if I can use some sort of 2D map that uses x and y values as keys, but since I am using float locations this sounds like more trouble than it's worth. I am kind of new to programming games, and I imagined there would be some elegant solution to this issue.
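    The sector idea is usually called a spatial hash or uniform grid, and float coordinates are not a problem: the cell key is just the floor of position/cellSize. A sketch in C# for illustration (the poster is on Java, where the shape is nearly identical; Debris is assumed to expose X and Y):

        // Uniform grid / spatial hash sketch. Cell size >= tractor beam range works well.
        class SpatialHash
        {
            private readonly float cellSize;
            private readonly Dictionary<(int, int), List<Debris>> grid = new();

            public SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private (int, int) CellOf(float x, float y) =>
                ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));

            public void Insert(Debris d)
            {
                var key = CellOf(d.X, d.Y);
                if (!grid.TryGetValue(key, out var list))
                    grid[key] = list = new List<Debris>();
                list.Add(d);
            }

            // With cellSize >= beam range, only the ship's cell and its 8 neighbours
            // can contain debris in range, so ~9 short lists are scanned per query.
            public IEnumerable<Debris> NearbyCandidates(float x, float y)
            {
                var (cx, cy) = CellOf(x, y);
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++)
                        if (grid.TryGetValue((cx + dx, cy + dy), out var list))
                            foreach (var d in list) yield return d;
            }
        }

    Moving objects re-insert themselves only when their cell key changes, and the final in-range test can compare squared distances (dx*dx + dy*dy <= range*range) to avoid a square root per object.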


  • How to calculate the intersection time and place of multiple moving arcs

    - by user20733
    I have rocks orbiting moons, moons orbiting planets, planets orbiting suns, and suns orbiting black holes, and the current system could have many, many layers of orbits. The position of any object is a function of time, relative to the object it orbits. (So far so good.) Now, given two objects (A, B), a start time, and a speed, I want to work out when and where to go. I can work out where A and B are at a given time, so I just need:

    1. The direction to travel in from A to B (remember, B is moving, and not in a straight line).
    2. The time to get to B, travelling in a straight line.

    Travel must be in a straight line with the shortest possible distance. As an extension to this question: how will I know if it's better to wait? E.g., is it faster to stay on object A and wait for an hour, when the objects may be closer, than to set off from A to B at the start? Cheers - it hurt my brain.
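    A common way to solve "when does a straight-line traveller at speed v meet a moving target" is fixed-point iteration on the travel time: guess t, compute how long it actually takes to reach B's position at time t, and repeat. A sketch (my formulation; positionOfB is an assumed function that evaluates the whole orbit chain at a given time):

        // Fixed-point intercept sketch: find t such that |B(t) - A| == v * t.
        static float InterceptTime(Vector3 a, Func<float, Vector3> positionOfB,
                                   float speed, int iterations = 32)
        {
            float t = Vector3.Distance(a, positionOfB(0f)) / speed; // initial guess
            for (int i = 0; i < iterations; i++)
                t = Vector3.Distance(a, positionOfB(t)) / speed;    // refine the guess
            return t; // direction to travel: Vector3.Normalize(positionOfB(t) - a)
        }

    This converges when the target moves slower than the traveller; a bisection on f(t) = |B(t) - A| - v*t is a more robust fallback. For the "is it better to wait" extension, evaluate departureDelay + InterceptTime(...) over a range of departure times and take the minimum.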


  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should become:

        Football - r
        Football - g
        Football - H
        Football - Q
        Football - HH

    At this point, it still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc., and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
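    The expansion step (before any fuzzy matching) is mechanical enough to sketch: split off the suffix field, then split it on '-', '/' and ','. C# here for illustration; the same regex works in Python, and this covers the example shown (looser spacing like "Football -r" would need a more tolerant separator pattern):

        // Sketch: expand "FootBall - r-g/H,Q,HH" into one entry per suffix,
        // then fuzzy-match each expanded entry against the master list.
        static IEnumerable<string> ExpandVariants(string raw)
        {
            var parts = raw.Split(new[] { " - " }, 2, StringSplitOptions.None);
            if (parts.Length == 1) { yield return raw.Trim(); yield break; }
            string baseName = parts[0].Trim();
            foreach (var suffix in Regex.Split(parts[1], @"[-/,]"))
                if (suffix.Trim().Length > 0)
                    yield return baseName + " - " + suffix.Trim();
        }

    Doing the expansion first also helps the fuzzy matcher, since each expanded entry then differs from its master record only in the base-name spelling, not in the suffix structure.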


  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2. The latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);
            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine; the camera reaches its desired direction. But the speed is not constant over the course of the movement - it slows down. How do I achieve a rotation with constant speed?
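    One detail worth noting (my observation, not from the post): Matrix.CreateFromAxisAngle expects a normalized axis, and Vector3.Cross(Direction, _desiredDirection) shrinks toward zero length as the two directions align, which scales the effective rotation down and produces exactly this slowdown. A hedged sketch of the fix, slotted into the body of _lookingAt():

        // Sketch: normalize the axis so each step is a true 1-degree rotation.
        _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
        if (_rotationAxis.LengthSquared() > 1e-12f) // cross is ~zero when (anti)parallel
        {
            _rotationAxis.Normalize();
            _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
            Direction = Vector3.TransformNormal(Direction, _rotation);
        }

    With a unit axis, every update rotates by exactly the same angle, so the sweep runs at constant angular speed until the termination check fires.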


  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly based on multiplayer, but which should also provide a really interesting and self-sufficient solo game - a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can also think of the networking as being like an RTS's. It's a PC game, targeting Windows, MacOSX and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over the performance. So far I always considered that just making the game work as two applications, client & server, even in solo mode, was OK. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros:

    - The game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is;
    - Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am not a specialist, so this might be wrong).

    Cons:

    - I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking and passing messages within (one) process's memory.
    - Maybe debugging would be more difficult? I don't have experience in this case, but so far I assume that Visual Studio allows me to debug multiple processes, so it shouldn't be really different. Also, remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed that should encourage me to just continue with client-server-only game sessions?


  • XNA - Transforming children

    - by user1806687
    So, I have a Model stored in MyModel that is made from three meshes. If you loop through MyModel.Meshes, the first two are children of the third one. I was just wondering if anyone could tell me where the problem is in my code. This method is called whenever I want to programmatically change the position of the whole model:

        public void ChangePosition(Vector3 newPos)
        {
            Position = newPos;
            MyModel.Root.Transform = Matrix.CreateScale(VectorMathHelper.VectorMath(CurrentSize, DefaultSize, '/')) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Up, MathHelper.ToRadians(Rotation.Y)) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Right, MathHelper.ToRadians(Rotation.X)) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Forward, MathHelper.ToRadians(Rotation.Z)) *
                Matrix.CreateTranslation(Position);

            Matrix[] transforms = new Matrix[MyModel.Bones.Count];
            MyModel.CopyAbsoluteBoneTransformsTo(transforms);
            int count = transforms.Length - 1;
            foreach (ModelMesh mesh in MyModel.Meshes)
            {
                mesh.ParentBone.Transform = transforms[count];
                count--;
            }
        }

    This is the draw method:

        foreach (ModelMesh mesh in MyModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.View = camera.view;
                effect.Projection = camera.projection;
                effect.World = mesh.ParentBone.Transform;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }

    The thing is, when I call ChangePosition() the first time, everything works perfectly, but as I call it again and again, the first two (child) meshes start to move away from the parent mesh. Another thing I wanted to ask: if I change the scale/rotation/position of a child mesh and then call CopyAbsoluteBoneTransforms(), will the child meshes be positioned properly (at the proper distance), or would achieving that require more math/methods? Thanks in advance
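    A likely source of the drift, offered as a guess rather than a verified fix: ChangePosition writes absolute transforms back into mesh.ParentBone.Transform, so each call feeds the previous result into the next CopyAbsoluteBoneTransformsTo. The pattern in the stock XNA model samples avoids mutating the bones at all and combines the world matrix at draw time:

        // Sketch: compute absolute transforms into a scratch array each frame and
        // never write them back into the model's bones, so nothing accumulates.
        // worldMatrix here is the scale * rotation * translation built elsewhere.
        Matrix[] transforms = new Matrix[MyModel.Bones.Count];
        MyModel.CopyAbsoluteBoneTransformsTo(transforms);
        foreach (ModelMesh mesh in MyModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.View = camera.view;
                effect.Projection = camera.projection;
                effect.World = transforms[mesh.ParentBone.Index] * worldMatrix;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }

    Note that transforms[mesh.ParentBone.Index] also replaces the reversed count-down indexing, which only lines up bones with meshes by coincidence.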


  • How to optimize a box2d simulation in action game?

    - by nathan
    I'm working on an action game and I use Box2D for physics. The game uses a tiled map. I have different types of body:

    - Static ones used for tiles
    - Dynamic ones for the player and enemies

    Actually, I tested my game with ~150 bodies and I get a constant 60fps on my computer, but not on my mobile (Android). The FPS drops as the number of bodies increases. After profiling the Android application, I saw that World.step took around 8ms of CPU time to execute. Here are a few things to note:

    - Not all of the world is visible on screen; I use a scrolling system
    - Enemies are constantly moving toward the player, so there is always a force applied to their bodies
    - Enemies need to collide with each other
    - Enemies collide with tiles

    I also know that I can activate/deactivate bodies or put them to sleep/awake. Considering that only some of the enemies are displayed on screen at a time, are there any optimizations I can make to reduce the execution time of the Box2D simulation? I found a guy trying an optimization based on the distance of enemies from the player (link). But it seems like he just deactivates far bodies (in my case, I could deactivate bodies that are not visible). However, my enemies need to move even when they are not visible on screen, and applying forces will not work on inactive bodies. Should I play with sleeping bodies here? Also, enemies are composed of two fixtures and are constantly colliding with each other and with tiles, but I really never need to be notified about that. Is there anything I can do to optimize this kind of scenario? Finally, am I wrong to try to run the simulation at 60FPS on mobile, and should I try to make it run at 30FPS instead?


  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went 2 months without an upgrade, I switched to the Google Chrome browser yesterday. Everything is okay except for some strange behavior on some pages, and crashes after loading "chrome://" configuration pages. The best-known website with the strange behavior is YouTube; here is a picture of what I see: When I open the user menu in the top right corner, it breaks that way, and even after closing the menu some parts of the menu stay displayed. You may say it's a YouTube problem, but no: I have this problem on at least three other websites. Here it is on Imgur: The problem doesn't always affect the whole page; sometimes it starts from the middle of the screen. The interesting part is that it always happens at the same distance from the right border. When I inspect the DOM elements with the Developer tools, the overlay that shows the element's position is rendered as it should be. What is more, if there is an anchor in the broken area, it works after clicking on it. Selecting text in a broken page is impossible. I hope there is enough information to give me advice, thanks in advance. :)

    EDIT: Here is what the browser reported in "chrome://gpu-internals/":

        Graphics Feature Status
        Canvas: Software only, hardware acceleration unavailable
        Compositing: Hardware accelerated
        3D CSS: Hardware accelerated
        CSS Animation: Software animated
        WebGL: Hardware accelerated
        WebGL multisampling: Hardware accelerated

        Problems Detected
        Accelerated CSS animation has been disabled at the command line.
        Accelerated 2d canvas is unstable in Linux at the moment.

    Ubuntu 12.04 | Gnome-shell 3.4.1 | ATI Radeon 4550 | Screen resolution 1024x768 | Chrome version 20.0.1132.57 (Official Build 145807)


  • Creating Ideal Customers with Modern Marketing

    - by Richard Lefebvre
    “Without that real-time perspective, it's just not possible to stay in step with what your customers want and need.” — Customer-Obsessed Marketing Is Your Next Competitive Edge Every business talks about focusing on the customer. But few actually deliver. Why? Because digital marketing technology can’t tell a compelling story. It lacks engaging dialogue with no connection beyond the transaction. It’s lost in translation because marketers don’t speak code. And it’s confusing to the customer because marketing and IT can’t connect process and data. Take a look at your digital marketing picture. From a distance it may look fine. But look up close. It’s fragmented and the dots are not connected. You need much higher resolution. Step back and see the big picture. Zoom in on the individual customer. But you’ll need Modern Marketing technology engineered with enterprise grade data management and proven cloud performance. Explore the people, processes, and technology of the Oracle Marketing Cloud. Create a culture of customer obsession. Simplify marketing across all channels to turn casual prospects into passionate advocates. Engage ideal customers with a meaningful experience. Personalize your brand narrative for each customer in every chapter of your story to increase engagement and revenue. Read the full article and watch the videos here


  • How do I retain previously drawn graphics?

    - by Cromanium
    I've created a simple program that draws lines from a fixed point to a random point each frame. I wanted to keep each line on the screen; however, the screen always seems to be cleared each time it draws on the spriteBatch, even without GraphicsDevice.Clear(color) being called. What seems to be the problem?

        protected override void Draw(GameTime gameTime)
        {
            spriteBatch.Begin();
            DrawLine(spriteBatch);
            spriteBatch.End();
            base.Draw(gameTime);
        }

        private void DrawLine(SpriteBatch spriteBatch)
        {
            Random r = new Random();
            Vector2 a = new Vector2(50, 100);
            Vector2 b = new Vector2(r.Next(0, 640), r.Next(0, 480));
            Texture2D filler = new Texture2D(GraphicsDevice, 1, 1, false, SurfaceFormat.Color);
            filler.SetData(new[] { Color.Black });
            float length = Vector2.Distance(a, b);
            float angle = (float)Math.Atan2(b.Y - a.Y, b.X - a.X);
            spriteBatch.Draw(filler, a, null, Color.Black, angle, Vector2.Zero, new Vector2(length, 10.0f), SpriteEffects.None, 0f);
        }

    What am I doing wrong?
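    For context (general XNA behaviour, not specific to this code): the back buffer is not guaranteed to persist between frames, so anything not redrawn every Draw call can vanish. The usual way to accumulate drawing is an off-screen RenderTarget2D created with PreserveContents; a sketch:

        // Sketch: accumulate lines into a persistent render target, then present it.
        // Create once (e.g. in LoadContent); PreserveContents keeps old pixels alive:
        //   canvas = new RenderTarget2D(GraphicsDevice, 640, 480, false,
        //       SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
        protected override void Draw(GameTime gameTime)
        {
            // 1) draw only the new line onto the persistent canvas
            GraphicsDevice.SetRenderTarget(canvas);
            spriteBatch.Begin();
            DrawLine(spriteBatch);
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);

            // 2) present the whole canvas to the back buffer each frame
            GraphicsDevice.Clear(Color.White);
            spriteBatch.Begin();
            spriteBatch.Draw(canvas, Vector2.Zero, Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }

    Two separate issues in the original DrawLine are also worth fixing: the 1x1 Texture2D is allocated every frame (create it once and reuse it), and new Random() seeded each call at 60fps can repeat sequences, so keep one Random instance as a field.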

