Search Results

Search found 16473 results on 659 pages for 'game logic'.

Page 265/659

  • Modular Open MMO RPG

    - by Chris Valentine
    Has there been an MMORPG-type attempt at some kind of open universe where you could host a server on your own if you wish, and it would merely be added to the collective of possible places to travel within the MMO? Two types come to mind: a DnD Neverwinter Nights type place, or something like EVE Online, where there is a "universe", each hosted space is a planet or solar system or galaxy, players can travel between them using the same characters/ships/portal system, and each new server is then just a new adventure or place to go. I would also assume there would be dedicated/replicated servers that housed the characters/inventory themselves, so that the environment was decentralized and always expandable. Not sure that's clear, but have there been any such attempts or works in progress? Thanks

    Read the article

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map. In my vertex shader I am trying to offset each vertex along the Y axis based upon the colour of the heightmap, with white vertices being higher than black ones. //Vertex Shader Code #version 330 uniform mat4 modelMatrix; uniform mat4 viewMatrix; uniform mat4 projectionMatrix; uniform sampler2D heightmap; layout (location=0) in vec4 vertexPos; layout (location=1) in vec4 vertexColour; layout (location=3) in vec2 vertexTextureCoord; layout (location=4) in float offset; out vec4 fragCol; out vec4 fragPos; out vec2 fragTex; void main() { // Retrieve the current pixel's colour vec4 hmColour = texture(heightmap,vertexTextureCoord); // Offset the y position by the value of current texel's colour value ? vec4 offset = vec4(vertexPos.x , vertexPos.y + hmColour.r, vertexPos.z , 1.0); // Final Position gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset; // Data sent to Fragment Shader. fragCol = vertexColour; fragPos = vertexPos; fragTex = vertexTextureCoord; } However, the code I have produced only creates a flat grid, with none of the vertices any higher than the others. This is the C++ code that generates the grid and the texture coordinates, which I believe to be correct as the texture is mapped to the grid, hence the white blob in the middle. The grid-lines are generated in the fragment shader, sorry for any confusion. I have tried multiplying the r value of hmColour by 1000, but unfortunately that had no effect. The only other problem I can think of is that the texture coordinate data is incorrect? for (int z = 0; z < MAP_Z ; z++) { for(int x = 0; x < MAP_X ; x++) { //Generate Vertex Buffer vertexData[iVertex++] = float (x) * MAP_X; vertexData[iVertex++] = 0; vertexData[iVertex++] = -(float) (z) * MAP_Z; //Colour Buffer NOT NEEDED colourData[iColour++] = 255.0f; // R colourData[iColour++] = 1.0f; // G colourData[iColour++] = 0.0f; // B //Texture Buffer textureData[iTexture++] = (float ) x * (1.0f / MAP_X); textureData[iTexture++] = (float ) z * (1.0f / MAP_Z); } } The heightmap texture I am trying to use appears like so (without grid-lines). This is the corresponding fragment shader: // Fragment Shader Code #version 330 uniform sampler2D hmTexture; layout (location=0) out vec4 fragColour; in vec2 fragTex; in vec4 pos; void main(void) { vec2 line = fragTex * 32; // Without Gridlines fragColour = texture(hmTexture,fragTex); // With grid lines // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0), // smoothstep(0.05,fract(line.y), 0.99) * smoothstep(0.05,fract(line.x),0.99)); }
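
    For reference, a minimal sketch (not the asker's code; the program and texture handles are hypothetical) of binding the heightmap on the C++ side so the vertex-shader fetch has something to sample; an unbound sampler commonly reads as zero, which leaves every vertex at y = 0:

        // Bind the heightmap to texture unit 0 and point the "heightmap" sampler at it.
        glUseProgram(shaderProgram);                                        // hypothetical program handle
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, heightmapTexture);                     // hypothetical texture id
        glUniform1i(glGetUniformLocation(shaderProgram, "heightmap"), 0);   // sampler -> unit 0
        // Optionally expose a displacement scale so a 0..1 texel value stays visible
        // against a grid that spans many world units (assumed uniform name):
        // glUniform1f(glGetUniformLocation(shaderProgram, "heightScale"), 50.0f);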

    Read the article

  • Objective-c Cocos2d moving a sprite

    - by marcg11
    I hope someone knows how to do the following with cocos2d: I want a sprite to move, but not in a single straight line as produced by [cocosGuy runAction: [CCMoveTo actionWithDuration:1 position:location]]; What I want is for the sprite to follow some kind of movement that I pre-establish. For example, at some point I want the sprite to move up and then down, but in a curve. Do I have to do this with Flash, like this document says? http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:animation Does "animation" on this page mean moving sprites, or what? Thanks

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview I'm currently using a method which, it has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm also now looking into the possibility of using another method which is based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position... This is how I'm obtaining an interpolation value: public void onDrawFrame(GL10 gl) { // Set/re-set loop back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) { SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick += skipTicks; timeCorrection += (1000d / ticksPerSecond) % 1; nextGameTick += timeCorrection; timeCorrection %= 1; loops++; tics++; } interpolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(interpolation); } I am then applying it like so (in my rendering call): render(float interpolation) { spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX; spritePreviousX = spriteScreenX; // update and store this for next time } Results This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work and I honestly can't find any decent resources which explain this in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward) - and yet, this (interpolation) is moving the sprite back, so how can this produce smooth results? Any advice on this would be very much appreciated. Edit I've implemented the code from OriginalDaemon's answer like so: @Override public void onDrawFrame(GL10 gl) { newTime = System.currentTimeMillis()*0.001; frameTime = newTime - currentTime; if ( frameTime > (dt*25)) frameTime = (dt*25); currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { SceneManager.getInstance().getCurrentScene().updateLogic(); previousState = currentState; t += dt; accumulator -= dt; } interpolation = (float) (accumulator / dt); render(); } Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop) - however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect it to (the interpolated position does appear to be in between the previous and current positions) - however, the sprites are most definitely choppy when the render() skipping happens.
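
    For reference, a minimal sketch (in C++ for brevity; names are illustrative, not the asker's) of the usual previous/current-state blend from the fixed-timestep pattern. The key points are that the previous position is snapshotted before each simulation tick, and that the interpolated value is only used for drawing, never written back into the simulation state:

        struct Sprite {
            float previousX = 0.0f;   // position at the last completed update tick
            float currentX  = 0.0f;   // position at the most recent update tick
        };

        // Fixed-timestep update: snapshot, then advance the simulation.
        void update(Sprite& s, float dx) {
            s.previousX = s.currentX; // snapshot BEFORE moving
            s.currentX += dx;         // advance one tick
        }

        // alpha in [0,1]: how far render time sits between the last two ticks.
        float renderX(const Sprite& s, float alpha) {
            // Use this value to draw; do not store it back into the sprite.
            return s.previousX + (s.currentX - s.previousX) * alpha;
        }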

    Read the article

  • Performance of pixel shaders vs. SpriteBatch: XNA

    - by ashes999
    Precondition: I read this question/answer about using shaders, or spritebatch, to render and mark a sprite. I need to do something like that. I also have a 2D lighting PoC which I need to write. The way it will work will basically be something like: Draw all the sprites Draw lighting gradients to create a lighting texture Multiply/add the lighting texture to achieve different effects (I use multiple passes of add/multiply the lighting texture to achieve different effects.) My question is really about a generalization: can I say with certainty that pixel shaders are always faster than adding/multiplying textures to the SpriteBatch? Or that adding/multiplying is always faster? Or if it's not generalizable, how do I decide which approach to take, given that I can probably code either of them? (If it matters, I'm using MonoGame 3.0 beta for Windows games)

    Read the article

  • Raycasting tutorial / vector math question

    - by mattboy
    I'm checking out this nice raycasting tutorial at http://lodev.org/cgtutor/raycasting.html and have a probably very simple math question. In the DDA algorithm I'm having trouble understanding the calculation of the deltaDistX and deltaDistY variables, which are the distances that the ray has to travel from one x-side to the next x-side, or from one y-side to the next y-side, in the square grid that makes up the world map (see the screenshot below). In the tutorial they are calculated as follows, but without much explanation: //length of ray from one x or y-side to next x or y-side double deltaDistX = sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX)); double deltaDistY = sqrt(1 + (rayDirX * rayDirX) / (rayDirY * rayDirY)); rayDirY and rayDirX are the components of the direction of a ray that has been cast. How do you get these formulas? It looks like the Pythagorean theorem is part of it, but somehow there's division involved here. Can anyone clue me in as to what mathematical knowledge I'm missing here, or "prove" the formula by showing how it's derived?
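
    For reference, a short derivation sketch using the tutorial's variable names: along the ray, stepping one whole unit in x changes y by rayDirY/rayDirX (similar triangles on the direction vector), so the Pythagorean theorem gives the length of that step, and symmetrically for a unit step in y:

        deltaDistX = \sqrt{1^2 + \left(\frac{rayDirY}{rayDirX}\right)^2} = \sqrt{1 + \frac{rayDirY^2}{rayDirX^2}}
        deltaDistY = \sqrt{1^2 + \left(\frac{rayDirX}{rayDirY}\right)^2} = \sqrt{1 + \frac{rayDirX^2}{rayDirY^2}}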

    Read the article

  • GLSL Atmospheric Scattering Issue

    - by mtf1200
    I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuation" variable as when this is removed the sphere is show (albeit without scattering). Here is the vertex shader. Thanks for the time! uniform mat4 g_WorldViewProjectionMatrix; uniform mat4 g_WorldMatrix; uniform vec3 m_v3CameraPos; // The camera's current position uniform vec3 m_v3LightPos; // The direction vector to the light source uniform vec3 m_v3InvWavelength; // 1 / pow(wavelength, 4) for the red, green, and blue channels uniform float m_fCameraHeight; // The camera's current height uniform float m_fCameraHeight2; // fCameraHeight^2 uniform float m_fOuterRadius; // The outer (atmosphere) radius uniform float m_fOuterRadius2; // fOuterRadius^2 uniform float m_fInnerRadius; // The inner (planetary) radius uniform float m_fInnerRadius2; // fInnerRadius^2 uniform float m_fKrESun; // Kr * ESun uniform float m_fKmESun; // Km * ESun uniform float m_fKr4PI; // Kr * 4 * PI uniform float m_fKm4PI; // Km * 4 * PI uniform float m_fScale; // 1 / (fOuterRadius - fInnerRadius) uniform float m_fScaleDepth; // The scale depth (i.e. the altitude at which the atmosphere's average density is found) uniform float m_fScaleOverScaleDepth; // fScale / fScaleDepth attribute vec4 inPosition; vec3 v3ELightPos = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0)); vec3 v3ECameraPos= vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0)); const int nSamples = 2; const float fSamples = 2.0; varying vec4 color; float scale(float fCos) { float x = 1.0 - fCos; return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25)))); } void main(void) { gl_Position = g_WorldViewProjectionMatrix * inPosition; // Get the ray from the camera to the vertex and its length (which is the far point of the ray passing through the atmosphere) vec3 v3Pos = vec3(g_WorldMatrix * inPosition); vec3 v3Ray = v3Pos - v3ECameraPos; float fFar = length(v3Ray); v3Ray /= fFar; // Calculate the closest intersection of the ray with the outer atmosphere (which is the near point of the ray passing through the atmosphere) float B = 2.0 * dot(m_v3CameraPos, v3Ray); float C = m_fCameraHeight2 - m_fOuterRadius2; float fDet = max(0.0, B*B - 4.0 * C); float fNear = 0.5 * (-B - sqrt(fDet)); // Calculate the ray's starting position, then calculate its scattering offset vec3 v3Start = m_v3CameraPos + v3Ray * fNear; fFar -= fNear; float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth); float fCameraAngle = dot(-v3Ray, v3Pos) / fFar; float fLightAngle = dot(v3ELightPos, v3Pos) / fFar; float fCameraScale = scale(fCameraAngle); float fLightScale = scale(fLightAngle); float fCameraOffset = fDepth*fCameraScale; float fTemp = (fLightScale + fCameraScale); // Initialize the scattering loop variables float fSampleLength = fFar / fSamples; float fScaledLength = fSampleLength * m_fScale; vec3 v3SampleRay = v3Ray * fSampleLength; vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5; // Now loop through the sample rays vec3 v3FrontColor = vec3(0.0, 0.0, 0.0); vec3 v3Attenuate; for(int i=0; i<nSamples; i++) { float fHeight = length(v3SamplePoint); float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight)); float fScatter = fDepth*fTemp - fCameraOffset; v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI)); v3FrontColor 
+= v3Attenuate * (fDepth * fScaledLength); v3SamplePoint += v3SampleRay; } vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun); vec3 secondary = v3Attenuate; color = vec4((first + vec3(0.25,0.25,0.25) * secondary), 1.0); // ^^ that color is passed to the frag shader and is used as the gl_FragColor } Here is also an image of the problem.

    Read the article

  • Glenn Fiedler's fixed timestep with fake threads

    - by kaoD
    I've implemented Glenn Fiedler's Fix Your Timestep! quite a few times in single-threaded games. Now I'm facing a different situation: I'm trying to do this in JavaScript. I know JS is single-threaded, but I plan on using requestAnimationFrame for the rendering part. This leaves me with two independent fake threads: simulation and rendering (I suppose requestAnimationFrame isn't really threaded, is it? I don't think so, it would BREAK JS.) Timing in these threads is independent too: dt for simulation and render is not the same. If I'm not mistaken, simulation should be up to Fiedler's while loop end. After the while loop, accumulator < dt so I'm left with some unspent time (dt) in the simulation thread. The problem comes in the draw/interpolation phase: const double alpha = accumulator / dt; State state = currentState*alpha + previousState * ( 1.0 - alpha ); render( state ); In my render callback, I have the current timestamp to which I can subtract the last-simulated-in-physics-timestamp to have a dt for the current frame. Should I just forget about this dt and draw using the physics thread's dt? It seems weird, since, well, I want to interpolate for the unspent time between simulation and render too, right? Of course, I want simulation and rendering to be completely independent, but I can't get around the fact that in Glenn's implementation the renderer produces time and the simulation consumes it in discrete dt sized chunks. A similar question was asked in Semi Fixed-timestep ported to javascript but the question doesn't really get to the point, and answers there point to removing physics from the render thread (which is what I'm trying to do) or just keeping physics in the render callback too (which is what I'm trying to avoid.)

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern (here) or used it at one time or another either directly or indirectly.  A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface but "decorated" with additional functionality.   As a really simplistic example, consider the System.IO.BufferedStream, it itself is a descendent of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:   1: Stream buffStream = new BufferedStream(rawStream); Now, let's take a look at a custom-code example.  Let's say that we have a class in our data access layer that retrieves a list of products from a database:  1: // a class that handles our CRUD operations for products 2: public class ProductDao 3: { 4: ... 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: var results = new List<Product>(); 10:  11: // must create the connection 12: using (var con = _factory.CreateConnection()) 13: { 14: con.ConnectionString = _productsConnectionString; 15: con.Open(); 16:  17: // create the command 18: using (var cmd = _factory.CreateCommand()) 19: { 20: cmd.Connection = con; 21: cmd.CommandText = _getAllProductsStoredProc; 22: cmd.CommandType = CommandType.StoredProcedure; 23:  24: // get a reader and pass back all results 25: using (var reader = cmd.ExecuteReader()) 26: { 27: while(reader.Read()) 28: { 29: results.Add(new Product 30: { 31: Name = reader["product_name"].ToString(), 32: ... 33: }); 34: } 35: } 36: } 37: }            38:  39: return results; 40: } 41: } Yes, you could use EF or any myriad other choices for this sort of thing, but the germaine point is that you have some operation that takes a non-trivial amount of time.  What if, during the production day I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code.  Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or other logging flavor of choice: 1:     // a class that handles our CRUD operations for products 2:     public class ProductDao 3:     { 4:         private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao)); 5:         ... 6:         7:         // a method that would retrieve all available products 8:         public IEnumerable<Product> GetAvailableProducts() 9:         { 10:             var results = new List<Product>(); 11:             var timer = Stopwatch.StartNew(); 12:             13:             // must create the connection 14:             using (var con = _factory.CreateConnection()) 15:             { 16:                 con.ConnectionString = _productsConnectionString; 17:                 18:                 // and all that other DB code... 19:                 ... 20:             } 21:             22:             timer.Stop(); 23:             24:             if (timer.ElapsedMilliseconds > 5000) 25:             { 26:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 27:                     timer.ElapsedMillseconds); 28:             } 29:             30:             return results; 31:         } 32:     } In my eye, this is very ugly.  It violates Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.  
This class (and in particular this method) has two reasons to change: If the method of retrieving products changes. If the method of logging changes. Well, we could “simplify” this using the Decorator Design Pattern (here).  If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAOs public interface and forwards to the wrapped instance.  So let's assume we break out the ProductDAO interface into IProductDAO using your refactoring tool of choice (Resharper is great for this). Now, ProductDao will implement IProductDao and get rid of all logging logic: 1:     public class ProductDao : IProductDao 2:     { 3:         // this reverts back to original version except for the interface added 4:     } 5:  And we create the base Decorator that also implements the interface and forwards all calls: 1:     public class ProductDaoDecorator : IProductDao 2:     { 3:         private readonly IProductDao _wrappedDao; 4:         5:         // constructor takes the dao to wrap 6:         public ProductDaoDecorator(IProductDao wrappedDao) 7:         { 8:             _wrappedDao = wrappedDao; 9:         } 10:         11:         ... 12:         13:         // and then all methods just forward their calls 14:         public IEnumerable<Product> GetAvailableProducts() 15:         { 16:             return _wrappedDao.GetAvailableProducts(); 17:         } 18:     } This defines our base decorator, then we can create decorators that add items of interest, and for any methods we don't decorate, we'll get the default behavior which just forwards the call to the wrapper in the base decorator: 1:     public class TimedThresholdProductDaoDecorator : ProductDaoDecorator 2:     { 3:         private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator)); 4:         5:         public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) : 6:             base(wrappedDao) 7:         { 8:         } 9:         10:         ... 11:         12:         public IEnumerable<Product> GetAvailableProducts() 13:         { 14:             var timer = Stopwatch.StartNew(); 15:             16:             var results = _wrapped.GetAvailableProducts(); 17:             18:             timer.Stop(); 19:             20:             if (timer.ElapsedMilliseconds > 5000) 21:             { 22:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 23:                     timer.ElapsedMillseconds); 24:             } 25:             26:             return results; 27:         } 28:     } Well, it's a bit better.  Now the logging is in its own class, and the database logic is in its own class.  But we've essentially multiplied the number of classes.  We now have 3 classes and one interface!  Now if you want to do that same logging decorating on all your DAOs, imagine the code bloat!  Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly.  But regardless all of these have the problem of tying the logging logic into the code itself. Enter the Interceptors.  Things like this to me are a perfect example of when it's good to write an Interceptor using your class library of choice.  Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's Dynamic Proxy (here) which is actually used by many projects including Moq. 
What DynamicProxy allows you to do is intercept calls into any object by wrapping it with a proxy on the fly that intercepts the method and allows you to add functionality.  Essentially, the code would now look like this using DynamicProxy: 1: // Note: I like hiding DynamicProxy behind the scenes so users 2: // don't have to explicitly add reference to Castle's libraries. 3: public static class TimeThresholdInterceptor 4: { 5: // Our logging handle 6: private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor)); 7:  8: // Handle to Castle's proxy generator 9: private static readonly ProxyGenerator _generator = new ProxyGenerator(); 10:  11: // generic form for those who prefer it 12: public static object Create<TInterface>(object target, TimeSpan threshold) 13: { 14: return Create(typeof(TInterface), target, threshold); 15: } 16:  17: // Form that uses type instead 18: public static object Create(Type interfaceType, object target, TimeSpan threshold) 19: { 20: return _generator.CreateInterfaceProxyWithTarget(interfaceType, target, 21: new TimedThreshold(threshold, level)); 22: } 23:  24: // The interceptor that is created to intercept the interface calls. 25: // Hidden as a private inner class so not exposing Castle libraries. 26: private class TimedThreshold : IInterceptor 27: { 28: // The threshold as a positive timespan that triggers a log message. 29: private readonly TimeSpan _threshold; 30:  31: // interceptor constructor 32: public TimedThreshold(TimeSpan threshold) 33: { 34: _threshold = threshold; 35: } 36:  37: // Intercept functor for each method invokation 38: public void Intercept(IInvocation invocation) 39: { 40: // time the method invocation 41: var timer = Stopwatch.StartNew(); 42:  43: // the Castle magic that tells the method to go ahead 44: invocation.Proceed(); 45:  46: timer.Stop(); 47:  48: // check if threshold is exceeded 49: if (timer.Elapsed > _threshold) 50: { 51: _log.WarnFormat("Long execution in {0} took {1} ms", 52: invocation.Method.Name, 53: timer.ElapsedMillseconds); 54: } 55: } 56: } 57: } Yes, it's a bit longer, but notice that: This class ONLY deals with logging long method calls, no DAO interface leftovers. This class can be used to time ANY class that has an interface or virtual methods. Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference.  As far as they are concerned, they're using my interceptor.  If I change to a new library if a better one comes along, they're insulated. Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest: 1: // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds 2: IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), 5000); Automatic decoration of all methods!  You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things.  These are just some of the interceptors we've dreamed up and use: Log parameters and returns of methods to XML for auditing. Block invocations to methods and return default value (stubbing). Throw exception if certain methods are called (good for blocking access to deprecated methods). Log entrance and exit of a method and the duration. Log a message if a method takes more than a given time threshold to execute. Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.  
Does it completely eliminate all need for the Decorator pattern?  No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work but was too slow as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with Kinect and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrix. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?

    Read the article

  • How to build a turn-based multiplayer "real time" server

    - by jmosesman
    I want to build a TCG for mobile devices that is multiplayer over the web (not local wifi or bluetooth). As a player plays cards I want the second player to see what is being played in "real time" (within a few seconds). Only one player can play at a time. Server requirements: 1) Continuously listens for input from Player 1 2) As it receives input from Player 1, sends the message to Player 2 I know some PHP, but it seems like unless I had a loop that continued until I broke it (seems like a bad idea) the script would just receive one input and quit. On the mobile side I know I can open sockets using various frameworks, but what language allows a "stream-like" behavior that continuously listens/sends messages on the server? Or if I'm missing something, what would be the best practice here?
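
    For illustration only, a bare-bones sketch of the "continuously listen and forward" loop a persistent game server runs, written here with C++ and POSIX sockets rather than PHP; the port number is made up and all error handling is omitted:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main() {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(9000);               // hypothetical port
            bind(srv, (sockaddr*)&addr, sizeof(addr));
            listen(srv, 2);

            int p1 = accept(srv, nullptr, nullptr);    // player 1 connects first in this sketch
            int p2 = accept(srv, nullptr, nullptr);    // then player 2

            char buf[1024];
            for (;;) {
                ssize_t n = recv(p1, buf, sizeof(buf), 0);  // blocks until player 1 plays a card
                if (n <= 0) break;                          // disconnect or error
                send(p2, buf, (size_t)n, 0);                // forward the move to player 2
            }
            close(p1); close(p2); close(srv);
            return 0;
        }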

    Read the article

  • Exporting .FBX model into XNA - unorthogonal bones

    - by Sweta Dwivedi
    I created a butterfly model in 3ds Max with some basic animation; however, when trying to export it to the .FBX format I get the following exception. Any idea how I can transform the wings to be orthogonal? One or more objects in the scene has local axes that are not perpendicular to each other (non-orthogonal). The FBX plug-in only supports orthogonal (or perpendicular) axes and will not correctly import or export any transformations that involve non-perpendicular local axes. This can create an inaccurate appearance with the affected objects: -Right.Wing I have attached a picture for reference.

    Read the article

  • Compiling Quake 3 in Snow Leopard

    - by Xap87
    First of all, I have Xcode 4 installed on Mac OS X Snow Leopard 10.6.8. I have downloaded the Quake 3 source code 1.32b release, but I can't open the Xcode project that is inside the /macosx folder since it is in the old .pbproj format and therefore it throws an "incompatible version" error. Has anyone been able to convert this to a newer Xcode format, or is there any other way to compile the source code on Mac OS X Snow Leopard? Thanks

    Read the article

  • Box2D Static-Dynamic body joint eliminates collisions

    - by andrewz
    I have a static body A, a dynamic body B, and a dynamic body C. A is filtered to not collide with anything; B and C collide with each other. I wish to create a joint between B and A. When I create a joint (e.g. revolute), B no longer collides with C - C passes through it as if it does not exist. What am I doing wrong? How can adding a joint prevent a body from colliding with another body it used to collide with? EDIT: I want to join B with A, and have B collide with C, but not have A collide with C. In realistic terms, I'm trying to create a revolute joint between a wheel (B) and a wall (A), and have a box (C) hit the wheel so that the wheel then rotates. EDIT: I create the simplest revolute joint I can with these parameters (C++): b2RevoluteJointDef def; def.Initialize(A, B, B -> GetWorldCenter()); world -> CreateJoint(&def);
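
    Not a diagnosis of the problem above, but for reference, a minimal sketch of how this setup is usually expressed with Box2D's category/mask filtering; the category values are made up, and a joint by itself only suppresses contacts between the two bodies it connects (and only while collideConnected is false), so it should not stop B and C from colliding:

        enum { CAT_WALL = 0x0001, CAT_WHEEL = 0x0002, CAT_BOX = 0x0004 };   // illustrative categories

        b2FixtureDef wallFix;                          // fixture on the wall A
        wallFix.filter.categoryBits = CAT_WALL;
        wallFix.filter.maskBits     = 0;               // A collides with nothing

        b2FixtureDef wheelFix;                         // fixture on the wheel B
        wheelFix.filter.categoryBits = CAT_WHEEL;
        wheelFix.filter.maskBits     = CAT_BOX;        // B collides with C only

        b2FixtureDef boxFix;                           // fixture on the box C
        boxFix.filter.categoryBits = CAT_BOX;
        boxFix.filter.maskBits     = CAT_WHEEL;        // C collides with B only

        b2RevoluteJointDef def;
        def.Initialize(A, B, B->GetWorldCenter());
        def.collideConnected = false;                  // default: only the A-B pair skips contacts
        world->CreateJoint(&def);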

    Read the article

  • OpenGL and atlas

    - by user30088
    I'm trying to draw elements from a texture atlas with OpenGL ES 2. Currently, I'm drawing my elements using something like this in the shader: uniform mat4 uCamera; uniform mat4 uModel; attribute vec4 aPosition; attribute vec4 aColor; attribute vec2 aTextCoord; uniform vec2 offset; uniform vec2 scale; varying lowp vec4 vColor; varying lowp vec2 vUV; void main() { vUV = offset + aTextCoord * scale; gl_Position = (uCamera * uModel) * aPosition; vColor = aColor; } For each element to draw, I send its offset and scale to the shader. The problem with this method: I can't rotate the element, but that's not a problem for now. I would like to know what is better for performance: sending uniforms like that for each element on every frame, or updating the quad geometry (UVs) for each element. Thanks!

    Read the article

  • How do I implement camera axis aligned billboards?

    - by user19787
    I am trying to make an axis-aligned billboard with Pyglet. I have looked at several tutorials, but they only show me how to get the up, right, and look vectors. So far this is what I have: target = cam.pos look = norm(target - billboard.pos) right = norm(Vector3(0,1,0) * look) up = look * right gluLookAt( look.x, look.y, look.z, self.pos.x, self.pos.y, self.pos.z, up.x, up.y, up.z ) This does nothing for me visibly. Any idea what I'm doing wrong?
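
    For reference, a sketch of one common construction of a camera-facing billboard basis, with c the camera position, p the billboard position, and (0,1,0) the world up vector (cross-product order assumes a right-handed system):

        \vec{l} = \frac{\vec{c} - \vec{p}}{\lVert \vec{c} - \vec{p} \rVert}, \qquad
        \vec{r} = \frac{(0,1,0) \times \vec{l}}{\lVert (0,1,0) \times \vec{l} \rVert}, \qquad
        \vec{u} = \vec{l} \times \vec{r}

    These three vectors would normally form the rotation part of the billboard's own model matrix; gluLookAt builds a viewing transform for the camera, which is a different thing.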

    Read the article

  • Visualize flowchart diagram with multiple end symbols

    - by platzhirsch
    I am looking for a standardized way to visualize the following hierarchical logic: the state of the thread is represented by the answers to a hierarchical set of questions. You can read this listing like a flowchart: you iterate over the questions, decide, go one step deeper, and so on. Therefore I thought the best way to visualize it was with a flowchart. The problem is that in this hierarchical set it is possible to end in more than one state, and that's totally valid. I have never seen a flowchart where you can end in more than one state. Is it still possible and I am just missing the right symbol to represent this logic, or are flowcharts not a good fit anyway? What other graphical representation could I use; is there something fitting in UML? A non-deterministic state machine seems not to be intuitive enough, transferring it into a deterministic state machine would result in too many states, and so on.

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this: cbuffer ConstantBuffer : register(b0) { matrix World; matrix View; matrix Projection; } struct VIn { float4 position: POSITION; matrix instance: INSTANCE; float4 color: COLOR; }; struct VOut { float4 position : SV_POSITION; float4 color : COLOR; }; VOut VShader(VIn input) { VOut output; output.position = mul(input.position, input.instance); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } The input layout looks like this: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; InputLayout = new InputLayout(device, signature, elements); The buffer initialization looks like this: public ModelDeviceData(Model model, Device device) { Model = model; var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices); var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, Model.Instances.Select(m => m.WorldMatrix).ToArray()); VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0); InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0); IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles); } The buffer creation helper method looks like this: public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct { var len = Utilities.SizeOf(items); var stream = new DataStream(len, true, true); foreach (var item in items) stream.Write(item); stream.Position = 0; var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None, 0); return buffer; } The line that instantiates the InputLayout object throws this exception: *HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.* Note that the data for each model instance is simply an instance of SharpDX.Matrix. 
EDIT Based on Tordin's answer, it seems like I have to modify my code like so: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; and in the shader: struct VIn { float4 position: POSITION; float4 instance0: INSTANCE0; float4 instance1: INSTANCE1; float4 instance2: INSTANCE2; float4 instance3: INSTANCE3; float4 color: COLOR; }; VOut VShader(VIn input) { VOut output; matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 }; output.position = mul(input.position, world); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } However, I still get an exception.

    Read the article

  • Geometry instancing in OpenGL ES 2.0

    - by seahorse
    I am planning to do geometry instancing in OpenGL ES 2.0. Basically I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0? I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass this same mat4 three times, once for each vertex of the same triangle (since the model-view remains constant across the vertices of the triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices * 16 floats * number of triangles worth of extra data). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as "per vertex" data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?
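
    For reference, a minimal sketch (illustrative names, not a definitive answer) of the straightforward fallback when per-instance attributes aren't available: keep one copy of the chair mesh on the GPU and issue one draw call per instance, passing the matrix as a uniform so it is sent once per draw rather than once per vertex. True per-instance attributes with an attribute divisor need an extension on top of core ES 2.0.

        GLint modelViewLoc = glGetUniformLocation(program, "u_modelView");  // assumed uniform name
        for (const Instance& inst : instances) {                            // 'Instance' is illustrative
            // 16 floats per chair per frame, independent of the vertex count
            glUniformMatrix4fv(modelViewLoc, 1, GL_FALSE, inst.modelView);
            glDrawElements(GL_TRIANGLES, chairIndexCount, GL_UNSIGNED_SHORT, 0);
        }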

    Read the article

  • Do I need path finding to make AI avoid obstacles?

    - by yannicuLar
    How do you know when a path-finding algorithm is really needed? There are contexts where you just want to improve AI navigation to avoid an object, like a spaceship that shouldn't crash into a planet, or a car that already knows where to steer but needs small corrections to avoid a road bump. As I've seen on similar posts, the obvious solution is to implement some path-finding algorithm, most likely A*, and let your AI-controlled object navigate along the path. Now, I have the necessary skills to implement a path-finding algorithm, and I'm not being lazy here, but I'm still a bit skeptical about whether this is really needed. I have the impression that path-finding is appropriate for navigating through a maze, or for picking a path when there are many alternatives. But in obstacle avoidance, when you do know the path but need to make slight corrections, is path finding really necessary? Even when the obstacles are sparse or small? I mean, in real life, when you're driving and notice a bump on the road, you just have to pick between steering a bit to the left (keeping the bump on your right side) or the other way around. You will not consider stopping, or going backwards. Path finding would be appropriate when you need to pick a route through the city, right? So, are there any other methods to help AI navigation, besides path-finding? And if there are, how do you know when path-finding is the appropriate algorithm? Thanks for any thoughts

    Read the article

  • Flex 4 + Apache Ant, Cannot Load FlashPunk Libraries

    - by SquareCrow
    I have been searching google, Apache Docs*, and FlashPunk forums looking for an answer to this: I cannot get Ant/Flex to find and compile the FlashPunk libraries. Here is my build.xml. [code] <!-- Fetch the JAR full of Flex tasks if it is not already in the source directory --> <copy file="${FLEX_HOME}/ant/lib/flexTasks.jar" todir="${SOURCE_PATH}"/> <!-- Add flextasks to the project --> <taskdef resource="flexTasks.tasks" classpath="${SOURCE_PATH}/flexTasks.jar"></taskdef> <!-- Release build Flash Player 10.1 --> <target name="build"> <!-- Build the FlashPunk library --> <echo message="building swc..." /> <compc output="FlashPunk.swc" keep-generated-actionscript="false" incremental="false" optimize="false" debug="true" use-network="false"> <include-sources dir="${FLASHPUNK_PATH}/net" includes="**/* flashpunk/utils/* flashpunk/masks/*" excludes="**/*.TTF **/*.png"/> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/> </compc> <echo message="building swf..." /> <mxmlc file="${SOURCE_PATH}/epOne.as" output="${OUTPUT_PATH}/epOne.swf" debug="false" incremental="false" strict="true" accessible="false" link-report="link_report.xml" static-link-runtime-shared-libraries="true"> <optimize>true</optimize> </mxmlc> </target> [/code] Results in many errors of the type "Definition net.flashpunk.masks:Grid could not be found" even though when I open the directories I can see the *.AS files right there. Sorry if this is very basic. I am piecing together knowledge of Ant from docs and tutorials. *I decided to use Ant because neither FlashDevelop for Windows nor Eclipse for Linux seemto work for me.

    Read the article

  • Exporting SWF With Transparent Background For Scaleform/UDK

    - by Alex Shepard
    After looking all over in the UDN and forums, I have yet to find a solution for this: I am currently using Flash CS3 and ActionScript 2.0 to build my Scaleform menus, and I can use them in the UDK. For various reasons I can't use the handy plugin Autodesk supplies to enable this export, so I publish my Flash documents to SWF the old-fashioned way and manually use the gfxexport.exe tool to get my .gfx file. I can then import into the UDK the normal way. My problem is that the Flash movies that I import will not alpha blend, even if the material is set to blend in the alpha channel of the target render texture. My project images are set up to export properly. My classpath for ActionScript 2.0 is set to the correct location. My HTML publish settings have window mode set to Transparent Windowless. Is it possible to export without the Scaleform Flash extension and still get the desired effects, and if so, how might I do so? Am I merely missing something in my project setup?

    Read the article

  • How can I store spell & items using a std::vector implementation?

    - by Vladimir Marenus
    I'm following along with a book from GameInstitute right now, and it's asking me to: Allow the player to buy and carry healing potions and potions of fireball. You can add an Item array (after you define the item class) to the Player class for storing them, or use a std::vector to store them. I think I would like to use the std::vector implementation, because that seems to confuse me less than making an item class, but I am unsure how to do so. I've heard from many people that vectors are great ways to store dynamic values (such as items, weapons, etc), but I've not seen it used.
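
    For reference, a minimal sketch of the std::vector approach (the Item fields and names here are illustrative, not from the book):

        #include <cstddef>
        #include <string>
        #include <vector>

        struct Item {
            std::string name;    // e.g. "Healing Potion" or "Potion of Fireball"
            int         power;   // amount healed or damage dealt
        };

        struct Player {
            std::vector<Item> items;                    // grows as potions are bought

            void buy(const Item& item) { items.push_back(item); }

            void useItem(std::size_t i) {
                if (i < items.size()) {
                    // apply items[i]'s effect here, then remove it from the inventory
                    items.erase(items.begin() + i);
                }
            }
        };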

    Read the article

  • openGL migration from SFML to glut, vertices arrays or display lists are not displayed

    - by user3714670
    Due to using quad buffered stereo 3D (which i have not included yet), i need to migrate my openGL program from a SFML window to a glut window. With SFML my vertices and display list were properly displayed, now with glut my window is blank white (or another color depending on the way i clear it). Here is the code to initialise the window : int type; int stereoMode = 0; if ( stereoMode == 0 ) type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH; else type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO; glutInitDisplayMode(type); int argc = 0; char *argv = ""; glewExperimental = GL_TRUE; glutInit(&argc, &argv); bool fullscreen = false; glutInitWindowSize(width,height); int win = glutCreateWindow(title.c_str()); glutSetWindow(win); assert(win != 0); if ( fullscreen ) { glutFullScreen(); width = glutGet(GLUT_SCREEN_WIDTH); height = glutGet(GLUT_SCREEN_HEIGHT); } GLenum err = glewInit(); if (GLEW_OK != err) { fprintf(stderr, "Error: %s\n", glewGetErrorString(err)); } glutDisplayFunc(loop_function); This is the only code i had to change for now, but here is the code i used with sfml and displayed my objects in the loop, if i change the value of glClearColor, the window's background does change color : glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glClearColor(255.0f, 255.0f, 255.0f, 0.0f); glLoadIdentity(); sf::Time elapsed_time = clock.getElapsedTime(); clock.restart(); camera->animate(elapsed_time.asMilliseconds()); camera->look(); for (auto i = objects->cbegin(); i != objects->cend(); ++i) (*i)->draw(camera); glutSwapBuffers(); Is there any other changes i should have done switching to glut ? that would be great if someone could enlighten me on the subject. In addition to that, i found out that adding too many objects (that were well handled before with SFML), openGL gives error 1285: out of memory. Maybe this is related. EDIT : Here is the code i use to draw each object, maybe it is the problem : GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular"); if(LightID ==-1) cout << "LightID not found ..." << endl; glm::vec3 lightPos = glm::vec3(0,150,150); glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z); glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z); glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z); // Get a handle for our "myTextureSampler" uniform GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler"); if(!TextureID) cout << "TextureID not found ..." << endl; glActiveTexture(GL_TEXTURE0); sf::Texture::bind(texture); glUniform1i(TextureID, 0); // 2nd attribute buffer : UVs GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color"); if(vertexUVID==-1) cout << "vertexUVID not found ..." << endl; glEnableVertexAttribArray(vertexUVID); glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer); glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0); GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal"); if(!vertexNormal_modelspaceID) cout << "vertexNormal_modelspaceID not found ..." 
<< endl; glEnableVertexAttribArray(vertexNormal_modelspaceID); glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer); glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0 ); GLint posAttrib; posAttrib = glGetAttribLocation(shaderProgram, "position"); if(!posAttrib) cout << "posAttrib not found ..." << endl; glEnableVertexAttribArray(posAttrib); glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer); glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer); glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0); GLuint error; while ((error = glGetError()) != GL_NO_ERROR) { cerr << "OpenGL error: " << error << endl; } disableShaders();
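
    Not a diagnosis, but for comparison, a minimal GLUT skeleton (illustrative) showing the plumbing SFML used to provide. Two common causes of a blank window after this kind of port are never entering glutMainLoop() and never posting redisplays from an idle callback, so if those calls aren't elsewhere in the program they are worth checking; note also that glClearColor clamps its arguments to [0,1], so 255.0f behaves the same as 1.0f:

        #include <GL/glew.h>
        #include <GL/glut.h>

        void loop_function() {                    // stand-in for the existing display callback
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene ...
            glutSwapBuffers();
        }

        void idle() {
            glutPostRedisplay();                  // ask GLUT to call the display func again
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);                // pass the real argc/argv rather than dummies
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(800, 600);
            glutCreateWindow("port test");
            glewExperimental = GL_TRUE;
            glewInit();                           // after the context exists
            glutDisplayFunc(loop_function);
            glutIdleFunc(idle);
            glClearColor(1.0f, 1.0f, 1.0f, 0.0f); // values are clamped to [0,1]
            glutMainLoop();                       // hands control to GLUT; never returns
            return 0;
        }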

    Read the article

  • Loading class instance from XML with Texture2D

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML to make my weapons easier to make and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with, but no matter what I changed it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; when I added it, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture for the weapon. So I created a static class to hold my texture values, and then in the Texture tag of my XML document I would set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me to. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My class to load the weapon XML: public class Weapon { public int weaponStrength; public int damageModifiers; public int speed; public int magicDefense; public string description; public string identifier; public Texture2D weaponTexture; } public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.

    Read the article
