Search Results


  • Image with FadeIn effect blinks when added to scene

    - by Ef Es
    I am trying to add an image to the scene, but it should be added invisible, fade in, and then be deleted when the effect finishes. My problem is that the images blink once when they are added to the scene, then they do the intended effect. My best guess is that when they are added they show on the scene for a split second before the animation starts. I thought of making them invisible for a split second before activating them, but I am not sure how to code it.

        const bool Sunbeams::add()
        {
            const CCSize kSceenSize = CCDirector::sharedDirector()->getWinSize();
            const int nRayType = random( m_kRays.size());
            const CCPoint kPosition( random( static_cast<int>( kSceenSize.width)), 0.0f);
            const float fDuration = random( m_fDurationVariance) + m_fDurationMin;

            CCSprite* pkLightBeam = CCSprite::spriteWithTexture( m_kRays[nRayType]);
            if ( !pkLightBeam)
            {
                msg::debug( "Sunbeams::add", "Failed to create sprite from ray '%d'!\n", m_kRays[nRayType]);
                return false;
            }

            pkLightBeam->setAnchorPoint( CCPointZero);
            pkLightBeam->setPosition( kPosition);
            m_kActiveBeams.push_back( pkLightBeam);
            CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam);

            CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration);
            CCActionInterval* pkActionBack = pkAction->reverse();
            pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0));
            return true;
        }
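
    A possible fix (a minimal sketch against the add() method above, assuming cocos2d-x's CCSprite opacity API): start the sprite fully transparent, so the frame rendered before the CCFadeIn action kicks in shows nothing, and let the fade raise the opacity from there.

        // Sketch: make the beam transparent before it is added to the scene,
        // so it cannot flash at full opacity on the first rendered frame.
        pkLightBeam->setOpacity( 0);   // 0 = fully transparent, 255 = opaque
        CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam);

        CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration);
        CCActionInterval* pkActionBack = pkAction->reverse();
        pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0));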

  • Migration from XNA to SharpDX

    - by Wouter
    My fear is that XNA has reached the end of the road. To keep up with the latest technology a shift to another game framework might be needed. We have many games in a large codebase, all based on XNA. My question is, how much work would it be to migrate to SharpDX and are there other possibilities? Our code base mainly uses basic 3D rendering and the SpriteBatch, no fancy shader stuff. Update: I should have mentioned we only use 2.5D, we have a simple engine that builds textured quads to render text and animated sprites. Also for sound we use XACT (what else..) with some effects.

  • Detecting long held keys on keyboard

    - by Robinson Joaquin
    I want to check for a keyboard key that is held down for a long time, because I am creating a clone of Breakout crossed with air hockey for two human players. Here are my concerns: Do I need a third-party library to detect held keys? Is multi-threading needed? I don't know anything about multi-threading and would rather not use it (I'm just a newbie). One more thing: what if the two players press their respective keys at the same time? How can I avoid errors, or worse, one player's key being prioritized over the other's? For example: Player 1 uses W for up and S for down, Player 2 uses O for up and L for down (say W and L are pressed at the same time). PS: I use GLUT for the visuals of the game.
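
    A common approach (a sketch, not taken from the question; the movePaddle functions are hypothetical) is to track key down/up transitions with GLUT's keyboard callbacks and poll the resulting state once per frame, so held keys for both players are handled independently and no threads are needed:

        /* Minimal sketch: a per-key state table kept up to date by GLUT callbacks. */
        /* Both players' keys are polled every frame, so simultaneous holds and     */
        /* presses are handled independently and no multi-threading is needed.      */
        static int g_keyDown[256];

        void onKeyDown(unsigned char key, int x, int y) { g_keyDown[key] = 1; }
        void onKeyUp(unsigned char key, int x, int y)   { g_keyDown[key] = 0; }

        void update(void)   /* called once per frame, e.g. from glutIdleFunc */
        {
            /* movePaddle*() are hypothetical game functions; handle upper/lower */
            /* case of the key characters as needed.                             */
            if (g_keyDown['w']) movePaddle1Up();
            if (g_keyDown['s']) movePaddle1Down();
            if (g_keyDown['o']) movePaddle2Up();
            if (g_keyDown['l']) movePaddle2Down();
        }

        /* Registration, typically in main():                        */
        /*   glutIgnoreKeyRepeat(1);     (we track holds ourselves)  */
        /*   glutKeyboardFunc(onKeyDown);                             */
        /*   glutKeyboardUpFunc(onKeyUp);                             */
        /*   glutIdleFunc(update);                                    */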

  • Django and Google App Engine Helper not finding the ipaddr module.

    - by Phil
    I'm trying to get Django running on GAE using this tutorial. When I run python manage.py runserver I get the stack trace below. I'm new to both Django and Python so I don't know what my next steps are (this is Ubuntu Jaunty, btw). It seems Django isn't finding the GAE module ipaddr, which comes with SDK 1.3.1. How do I get Django to find this module?

        /home/username/bin/google_appengine/google/appengine/api/datastore_file_stub.py:40: DeprecationWarning: the md5 module is deprecated; use hashlib instead
          import md5
        /home/username/bin/google_appengine/google/appengine/api/memcache/__init__.py:31: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
          import sha
        Traceback (most recent call last):
          File "manage.py", line 18, in <module>
            InstallAppengineHelperForDjango()
          File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 543, in InstallAppengineHelperForDjango
            InstallDjangoModuleReplacements()
          File "/home/username/Development/GAE/myapp/appengine_django/__init__.py", line 260, in InstallDjangoModuleReplacements
            import django.db
          File "/home/username/Development/GAE/myapp/django/db/__init__.py", line 57, in <module>
            'TIME_ZONE': settings.TIME_ZONE,
          File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 117, in __init__
            self._setup_stubs()
          File "/home/username/Development/GAE/myapp/appengine_django/db/base.py", line 128, in _setup_stubs
            from google.appengine.tools import dev_appserver_main
          File "/home/username/bin/google_appengine/google/appengine/tools/dev_appserver_main.py", line 82, in <module>
            from google.appengine.tools import appcfg
          File "/home/username/bin/google_appengine/google/appengine/tools/appcfg.py", line 53, in <module>
            from google.appengine.api import dosinfo
          File "/home/username/bin/google_appengine/google/appengine/api/dosinfo.py", line 25, in <module>
            import ipaddr
        ImportError: No module named ipaddr

  • Best practices for periodically saving game state to disk

    - by Ben Morris
    I'm working on an MMO. All of the player and environment data lives on a server and is kept in memory. There's a "world" object which keeps track of all of the maps, characters, etc. and their relations to each other. To avoid data loss in case of a crash, I've been periodically serializing the world to disk. The trouble is, this object can be quite large, so when the server starts writing, there's noticeable in-game slowdown for a few seconds, which I'd like to avoid. Any pointers on how to go about this in a more efficient way?
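
    One pattern that often helps here (a sketch only, not from the question; World, WorldSnapshot and the file names are hypothetical): take a quick in-memory snapshot of the state on the main thread, then serialize and write it on a worker thread so the game loop never blocks on disk I/O.

        // Sketch: snapshot quickly on the game thread, write in the background.
        #include <cstdio>
        #include <fstream>
        #include <string>
        #include <thread>

        // Hypothetical stand-ins for the game's real types.
        struct WorldSnapshot { std::string bytes; };
        struct World { WorldSnapshot MakeSnapshot() const { return {"...serialized state..."}; } };

        void SaveWorldAsync(const World& world)
        {
            // Fast part, on the game thread: copy only what must stay consistent.
            WorldSnapshot snapshot = world.MakeSnapshot();

            // Slow part, off the game thread: write to a temp file, then rename so
            // a crash mid-write never clobbers the last good save.
            std::thread([snap = std::move(snapshot)]() {
                std::ofstream out("world.sav.tmp", std::ios::binary);
                out.write(snap.bytes.data(), static_cast<std::streamsize>(snap.bytes.size()));
                out.close();
                std::rename("world.sav.tmp", "world.sav");
            }).detach();
        }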

  • Draw a never-ending line in XNA

    - by user2236165
    I am drawing a line in XNA which I want to never end. I also have a tool that moves forward in the X-direction and a camera which is centered on this tool. However, when I reach the end of the viewport the lines are not drawn anymore. Here are some pictures to illustrate my problem: at the start the line goes across the whole screen, but as my tool moves forward, we reach the end of the line. Here is the method which draws the lines:

        private void DrawEvenlySpacedSprites (Texture2D texture, Vector2 point1, Vector2 point2, float increment)
        {
            var distance = Vector2.Distance (point1, point2);   // the distance between two points
            var iterations = (int)(distance / increment);       // how many sprites will be drawn
            var normalizedIncrement = 1.0f / iterations;        // the Lerp method needs values between 0.0 and 1.0
            var amount = 0.0f;

            if (iterations == 0)
                iterations = 1;

            for (int i = 0; i < iterations; i++)
            {
                var drawPoint = Vector2.Lerp (point1, point2, amount);
                spriteBatch.Draw (texture, drawPoint, Color.White);
                amount += normalizedIncrement;
            }
        }

    Here is the Draw method in Game. The dots are my lines:

        protected override void Draw (GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Black);

            nyVector = nextVector (gammelVector);

            GraphicsDevice.SetRenderTarget (renderTarget);
            spriteBatch.Begin ();
            DrawEvenlySpacedSprites (dot, gammelVector, nyVector, 0.9F);
            spriteBatch.End ();
            GraphicsDevice.SetRenderTarget (null);

            spriteBatch.Begin (SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, null, camera.transform);
            spriteBatch.Draw (renderTarget, new Vector2 (), Color.White);
            spriteBatch.Draw (tool, new Vector2(toolPos.X - (tool.Width/2), toolPos.Y - (tool.Height/2)), Color.White);
            spriteBatch.End ();

            gammelVector = new Vector2 (nyVector.X, nyVector.Y);
            base.Draw (gameTime);
        }

    Here is the nextVector method. It just finds a new point for the line, with an X-coordinate 100 to 200 pixels further along and a random Y-coordinate between the old vector's Y-coordinate and the height of the viewport:

        Vector2 nextVector (Vector2 vector)
        {
            return new Vector2 (vector.X + r.Next(100, 200), r.Next ((int)(vector.Y - 100), viewport.Height));
        }

    Can anyone point me in the right direction here? I'm guessing it has to do with the viewport width, but I'm not quite sure how to solve it. Thank you for reading!
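
    One likely culprit worth checking: the screen-sized render target clips anything drawn past its width, so the accumulated line stops at the original viewport edge even though the camera keeps moving. A sketch of an alternative (the points list and the generation loop are new; toolPos, viewport, camera.transform, dot, tool and nextVector are taken from the question): keep the generated points in a list, extend it whenever the tool nears the last point, and draw every segment directly inside the camera-transformed batch instead of into a fixed-size render target.

        // Sketch: extend the line on demand and draw it in world space.
        // Requires: using System.Collections.Generic;
        List<Vector2> points = new List<Vector2> { Vector2.Zero };

        protected override void Update (GameTime gameTime)
        {
            // Generate more line while the last generated point is near the tool.
            while (points[points.Count - 1].X < toolPos.X + viewport.Width)
                points.Add (nextVector (points[points.Count - 1]));
            base.Update (gameTime);
        }

        protected override void Draw (GameTime gameTime)
        {
            GraphicsDevice.Clear (Color.Black);
            spriteBatch.Begin (SpriteSortMode.Deferred, BlendState.AlphaBlend,
                               null, null, null, null, camera.transform);
            for (int i = 1; i < points.Count; i++)
                DrawEvenlySpacedSprites (dot, points[i - 1], points[i], 0.9f);
            spriteBatch.Draw (tool, new Vector2 (toolPos.X - tool.Width / 2,
                                                 toolPos.Y - tool.Height / 2), Color.White);
            spriteBatch.End ();
            base.Draw (gameTime);
        }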

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:

        - No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture).
        - The textures display on the correct axis relative to the model they map to (in the example, some of the letters on the front are tilted; the y axis of the texture should match the y axis of the object, and some other faces have upside-down letters too).
        - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off).
        - The model is closed (all faces are completely enclosed by other faces).

    A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.)

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise culled objects, which are supposed to represent hollow objects and display their inside faces only. The image below shows a CounterClockwise culled object (left) and a Clockwise culled object (right). The Clockwise culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces of Clockwise culled objects and continue lighting the outer CounterClockwise faces as normal? My lighting method is below:

        private void DeferredLighting(GameTime gameTime)
        {
            // Set the render target for the lights
            game.GraphicsDevice.SetRenderTarget(lightMap);

            // Clear the render target to (0, 0, 0, 0)
            game.GraphicsDevice.Clear(Color.Transparent);

            // Set the render states
            game.GraphicsDevice.BlendState = BlendState.Additive;
            game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
            game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            // Set sampler state to Point as the Surface type requires it in XNA 4.0
            game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

            // Set the camera properties for all lights
            BaseLight.SetCameraProperties(game.ActiveCamera);

            // Draw the lights
            int numLights = lights.Count;
            for (int i = 0; i < numLights; ++i)
            {
                if (lights[i].Diffuse.W > 0f)
                {
                    lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
                }
            }

            // Resolve the render target
            game.GraphicsDevice.SetRenderTarget(null);
        }

    I have tried adjusting the render states, but no combination works for both kinds of object.

  • How do you turn a cube into a sphere?

    - by Tom Dalling
    I'm trying to make a quad sphere based on an article, which shows results like this: I can generate a cube correctly: But when I convert all the points according to this formula (from the page linked above):

        x = x * sqrtf(1.0 - (y*y/2.0) - (z*z/2.0) + (y*y*z*z/3.0));
        y = y * sqrtf(1.0 - (z*z/2.0) - (x*x/2.0) + (z*z*x*x/3.0));
        z = z * sqrtf(1.0 - (x*x/2.0) - (y*y/2.0) + (x*x*y*y/3.0));

    My sphere looks like this: As you can see, the edges of the cube still poke out too far. The cube ranges from -1 to +1 on all axes, like the article says. Any ideas what is wrong?
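
    One thing worth checking (an observation about the snippet above, not something from the linked article): if those three assignments run in order on the same variables, the second and third lines read the already-modified x and y, which distorts the mapping most near the cube edges. A sketch of the same formula evaluated from the original coordinates, assuming it is applied once per vertex:

        // Sketch: evaluate all three components from the ORIGINAL cube point,
        // then write the results, so later lines never see modified values.
        #include <math.h>

        void cubeToSphere(float& x, float& y, float& z)
        {
            const float cx = x, cy = y, cz = z;   // original cube coordinates in [-1, 1]
            x = cx * sqrtf(1.0f - (cy*cy/2.0f) - (cz*cz/2.0f) + (cy*cy*cz*cz/3.0f));
            y = cy * sqrtf(1.0f - (cz*cz/2.0f) - (cx*cx/2.0f) + (cz*cz*cx*cx/3.0f));
            z = cz * sqrtf(1.0f - (cx*cx/2.0f) - (cy*cy/2.0f) + (cx*cx*cy*cy/3.0f));
        }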

  • How do I efficiently code both the client and server at the same time?

    - by liamzebedee
    I'm coding my game using a client-server model. When playing on singleplayer, the game starts a local server, and interacts with it just like a remote server (multiplayer). I have done this to avoid coding separate singleplayer and multiplayer code. I have just started coding and have encountered a major problem. Currently I'm developing the game in Eclipse, having all the game classes organized into packages. Then, in my server code, I just use all the classes in the client packages. The problem is, these client classes have variables that are specific to rendering, which obviously wouldn't be performed on a server. Should I create modified versions of the client classes to use in the server? Or should I just modify the client classes with a boolean, to indicate if its the client/server using it. Are there any other options I have? I just had a thought about maybe using the server class as the core class, then extending it with rendering stuff?
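
    One common split (a sketch in Java, since the question mentions Eclipse and packages; all class names are hypothetical) is to keep simulation state in shared classes used by both sides, and attach rendering as a client-only wrapper rather than mixing render fields or booleans into the shared classes:

        // Shared package: pure simulation state, compiled into client and server.
        public class Entity {
            public float x, y;
            public void update(float dt) { /* movement, game rules */ }
        }

        // Client-only package: wraps a shared entity with render-specific data.
        public class EntityView {
            private final Entity entity;
            private final Sprite sprite;   // Sprite is a hypothetical rendering type

            public EntityView(Entity entity, Sprite sprite) {
                this.entity = entity;
                this.sprite = sprite;
            }

            public void draw() { sprite.drawAt(entity.x, entity.y); }
        }

    The server then only ever depends on the shared package, and render-only fields never leak into the simulation code.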

  • LWJGL glRotatef() without rotating axes?

    - by Brandon oubiub
    Okay so, I noticed when you rotate around an axis, say you do this: glRotatef(90.0f, 1.0f, 0.0f, 0.0f); That will rotate things 90 degrees around the x-axis. However, it also sort of rotates the y and z axes as well. So now the y-axis is pointing in and out of the screen, instead of up and down. So when I try to do stuff like this: glRotatef(90.0f, 1.0f, 0.0f, 0.0f); glRotatef(whatever, 0.0f, 1.0f, 0.0f); glRotatef(whatever2, 0.0f, 0.0f, 1.0f); The rotations around the y and z-axes end up not how I want them. I was wondering if there is any way I can sort of rotate just the axes back to their initial position after using glRotatef(), without rotating the object back. Or something like that, just so that when I rotate around the y-axis, it rotates around a vertical axis.
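
    A sketch of the usual workaround (not from the question; yawDegrees is a hypothetical variable): with the fixed-function pipeline, the rotation issued first acts about the current (world) axes, while rotations issued later act in the frame produced by the calls before them. So a rotation that should stay about the fixed vertical axis has to be issued before the 90-degree tilt:

        /* Sketch: issue the world-axis yaw first, then the tilt of the object.  */
        /* Any further rotations placed after these act in the object's frame.   */
        glRotatef(yawDegrees, 0.0f, 1.0f, 0.0f);   /* yaw about the fixed vertical axis   */
        glRotatef(90.0f,      1.0f, 0.0f, 0.0f);   /* the 90-degree tilt from the question */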

  • How do I run a pixel shader effect?

    - by Yashwinder
    Below is the code for my pixel shader, which runs after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which drives the transition effect on the image displayed with the help of a quad. At the moment my pixel shader gives a static result, and now I want it to animate the transition. For this I have to add a progress variable to my pixel shader and set it through the constant table, i.e. constantTable.SetValue(D3DDevice, "progress", progress); I am having problems using this function for progress in my program. Does anybody know how to set this variable? My new pixel shader code is:

        float progress : register(C0);
        sampler2D implicitInput : register(s0);
        sampler2D oldInput : register(s1);

        struct VS_OUTPUT
        {
            float4 Position : POSITION;
            float4 Color : COLOR0;
            float2 UV : TEXCOORD0;
        };

        float4 Blinds(float2 uv)
        {
            if (frac(uv.y * 5) < progress)
            {
                return tex2D(implicitInput, uv);
            }
            else
            {
                return tex2D(oldInput, uv);
            }
        }

        // Pixel Shader
        {
            return Blinds(input.UV);
        }
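
    Since progress is bound to register c0, one way to feed it each frame (a sketch assuming native Direct3D 9; d3dDevice and progress are hypothetical names) is to set that register directly on the device before drawing the quad. Pixel shader constant registers are float4, so the value is padded:

        // Sketch: update the c0 register that the shader reads 'progress' from.
        float progressVec[4] = { progress, 0.0f, 0.0f, 0.0f };   // c0 is a float4 register
        d3dDevice->SetPixelShaderConstantF(0, progressVec, 1);    // start register 0, one float4

        // Alternatively, through the D3DX constant table obtained when compiling the shader:
        // constantTable->SetFloat(d3dDevice, constantTable->GetConstantByName(NULL, "progress"), progress);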

  • Should I continue reading Frank Luna's Introduction to 3D Game Programming with DirectX 11 book after D3DX and XNA Math Library have been deprecated?

    - by milindsrivastava1997
    I recently started learning DirectX 11 (C++) by reading Frank Luna's Introduction to 3D Game Programming with DirectX 11. In it the author uses D3DX and the XNA Math library. Since those have been deprecated, should I continue using the book? If yes, should I keep using the deprecated libraries or switch to some other libraries? If no, which book should I consult for up-to-date content that doesn't use deprecated libraries? Thanks!

  • Should I use procedural animation?

    - by user712092
    I have started making a fantasy 3D FPS swordplay game and I want to add animations. I don't want to animate everything by hand because it would take a lot of time, so I decided to use procedural animation. I would certainly use IK (starting with simply reaching for an object with a hand). I also assume procedural generation of animations will leave fewer animations to do by hand (I can blend animations, etc.). I also want a planner for animation, which would simplify complex animations: those that can be split into a sequence (run and then jump, jump and then roll) or that are separable (legs running while the torso swings a sword). For example, I want a character to chop the head off a big troll. If the troll crouches, the character would just chop its head off; if it is standing, he would climb onto the troll. I know I would have to describe the state ("troll is low", "troll is high", "chop troll head", ...), which would imply what regions the animation will be in (if there is a gap between them the character would jump), which in turn would imply where the character can place his legs and hands, or would choose a predefined animation. My main goal is simplicity of coding, but I also want my game to look cool. Is it worth using procedural animation, or does it cause more trouble than it solves? (There can be a lot of twiddling.) I am using the Blender Game Engine (therefore Python for scripting, and Bullet Physics).

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves, and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be Here's the code I use to create the shadow matrices. The commented out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map, which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);

            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;

            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);

            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);

            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that it's rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume, but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth.
I'd appreciate any insight, I'd very much like to understand what's going on before I try other techniques.

  • Ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing off a box. Actually it is a sphere, but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test, reducing the sphere to a single ray. The test is done by projecting the ray onto all faces of the box and picking the one that is closest. However, because I'm using floating point variables, I fear that the point projected onto the surface might be interpreted as being below it in the next iteration; I will also later allow the sphere to move, which might make that scenario more likely. Also, the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but backwards to catch those cases.

    That is where I ran into the problem shown in the figure: in the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface, making the second black arrow bounce off the wrong surface. If there are several boxes close to each other this has further consequences, making the sphere fall through them all. So my main question is how to handle possible floating point inaccuracy when placing the sphere on the box surface so that it does not fall through.

    While writing this question I got the idea of a threshold: only accept back projections up to a certain amount, much smaller than the box but larger than the possible accuracy limitation. This would only cause a "false" back projection when the sphere hits the box on an edge, which would appear natural.

    To clarify my original approach: the arrows shown in the image are not only the path the sphere travels but also represent a single time step in the simulation. In reality the time step is much smaller, about 0.05 of the box size. The path travelled is projected onto the possible sides to avoid travelling past a thinner object at higher speeds. In normal situations the floating point accuracy is not an issue, but there are two situations where I am concerned: when the new position at the end of the time step is located very close to the surface (very unlikely, though), and when using a bounce factor of 0, where it happens every time the sphere hits a box. Adding to the loss of accuracy, and the motivation for my concern, is that the sphere and box are in different coordinate systems, so the sphere location is transformed for every test. This last point is why I'm not willing to rely on luck that a floating point value lying on top of the box will always be interpreted the same way. I did not know Voronoi regions by name, but looking at them I'm not sure how they would be used in the projection scenario I'm describing here.

  • How do I choose the scaling factor of a 3D game world?

    - by concept3d
    I am making a 3D tank game prototype with some physics simulation, using C++. One of the decisions I need to make is the scale of the game world in relation to reality. For example, I could consider 1 in-game unit of measurement to correspond to 1 meter in reality. This feels intuitive, but I feel like I might be missing something. I can think of the following as potential problems:

        - 3D modelling program compatibility. (?)
        - Numerical accuracy. (Does this matter?) Especially at large scales: games like Battlefield have huge maps, so how do they not lose numerical accuracy with a 1:1 mapping to real-world scale, given that floating point representations lose precision at larger magnitudes (e.g. with ray casting, physics simulation)? (A small precision example follows below.)
        - Gameplay. I don't want the movement of units to feel slow or fast while using almost real-world values like -9.8 m/s^2 for gravity. (This might be subjective.) Is it OK to scale imported assets up/down, or is it best to fit the world to their original scale?
        - Rendering performance. Are large meshes with the same vertex count slower to render?

    I'm wondering if I should split this into multiple questions...
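
    On the numerical accuracy point, one concrete way to see the effect (a small sketch; the magnitudes are just illustrative) is to print the spacing between adjacent representable float values at different distances from the origin. With 1 unit = 1 m, a position around 100 km from the origin can only move in steps of roughly 8 mm:

        #include <cmath>
        #include <cstdio>
        #include <initializer_list>

        int main()
        {
            // Spacing between a float and the next representable float.
            for (float x : {1.0f, 1000.0f, 100000.0f, 1000000.0f})
                std::printf("near %10.0f m, step = %g m\n",
                            x, std::nextafterf(x, INFINITY) - x);
            // Prints roughly 1.2e-07, 6.1e-05, 0.0078 and 0.0625 metres:
            // the farther from the origin, the coarser the representable positions.
        }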

  • Why is Spritebatch drawing my Textures out of order?

    - by Andrew
    I just started working with XNA Studio after programming 2D games in Java. Because of this, I have absolutely no experience with SpriteBatch and sprite sorting. In Java, I could just layer the images by calling the draw methods in order. For a while, my SpriteBatch was working fine in deferred sorting mode, but when I made a change to one of my textures, it suddenly started drawing them out of order. I have searched for a solution to this problem, but nothing seems to work. I have tried adding layer depths to the sprites and changing the sort mode to BackToFront or FrontToBack or even Immediate, but nothing seems to work. Here is my drawing code:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Gray);
            Game1.spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null);
            for (int x = 0; x < 5; x++)
            {
                for (int y = 0; y < 5; y++)
                {
                    region[x, y].draw(((float)w / aw)); // Draws the tile-based background
                }
            }
            player.draw(spriteBatch, ((float)w / aw)); // Draws the character (this method is where the problem occurs)
            enemy.draw(spriteBatch, (float)w / aw);    // Draws a basic enemy
            Game1.spriteBatch.End();
            base.Draw(gameTime);
        }

    The player.draw method:

        public void draw(SpriteBatch sb, float ratio)
        {
            // Draws the player base (the character without hair or equipment)
            sb.Draw(playerbase[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // Draws the player's hair
            sb.Draw(playerbase[3], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // Draws the player's shirt
            sb.Draw(equipment[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // Draws the player's pants
            sb.Draw(equipment[1], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
            // Draws the player's shoes
            sb.Draw(equipment[2], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0);
        }

    The game has a top-down perspective much like the early Legend of Zelda games. It draws sections of the texture depending on which direction the character is facing and the animation frame. However, instead of drawing the character in the order the draw methods are called, it ends up drawing the character out of order. Please help me with this problem.

  • First Person Camera strafing at angle

    - by Linkandzelda
    I have a simple camera class working in DirectX 11 allowing moving forward and rotating left and right. I'm trying to implement strafing into it but am having some problems. The strafing works when there's no camera rotation, so when the camera starts at 0, 0, 0. But after rotating the camera in either direction it seems to strafe at an angle, or inverted, or just some odd stuff. Here is a video uploaded to Dropbox showing this behavior: https://dl.dropboxusercontent.com/u/2873587/IncorrectStrafing.mp4 And here is my camera class. I have a hunch that it's related to the calculation for camera position. I tried various different calculations in strafe and they all seem to follow the same pattern and same behavior. Also, m_camera_rotation represents the Y rotation, as pitching isn't implemented yet.

        #include "camera.h"

        camera::camera(float x, float y, float z, float initial_rotation)
        {
            m_x = x;
            m_y = y;
            m_z = z;
            m_camera_rotation = initial_rotation;
            updateDXZ();
        }

        camera::~camera(void)
        {
        }

        void camera::updateDXZ()
        {
            m_dx = sin(m_camera_rotation * (XM_PI/180.0));
            m_dz = cos(m_camera_rotation * (XM_PI/180.0));
        }

        void camera::Rotate(float amount)
        {
            m_camera_rotation += amount;
            updateDXZ();
        }

        void camera::Forward(float step)
        {
            m_x += step * m_dx;
            m_z += step * m_dz;
        }

        void camera::strafe(float amount)
        {
            float yaw = (XM_PI/180.0) * m_camera_rotation;
            m_x += cosf( yaw ) * amount;
            m_z += sinf( yaw ) * amount;
        }

        XMMATRIX camera::getViewMatrix()
        {
            updatePosition();
            return XMMatrixLookAtLH(m_position, m_lookat, m_up);
        }

        void camera::updatePosition()
        {
            m_position = XMVectorSet(m_x, m_y, m_z, 0.0);
            m_lookat = XMVectorSet(m_x + m_dx, m_y, m_z + m_dz, 0.0);
            m_up = XMVectorSet(0.0, 1.0, 0.0, 0.0);
        }
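
    One thing that stands out (an observation offered as a sketch, not taken from the question): Forward() moves along (sin yaw, cos yaw) in the XZ plane, but strafe() moves along (cos yaw, +sin yaw), which is only perpendicular to that forward direction when the yaw is 0. A right vector perpendicular to the same forward vector (cross of up and forward) is (cos yaw, -sin yaw):

        // Sketch: derive the strafe direction from the same yaw as Forward().
        // Forward is (sin yaw, cos yaw) in the XZ plane, so the perpendicular
        // right vector is (cos yaw, -sin yaw).
        void camera::strafe(float amount)
        {
            float yaw = (XM_PI / 180.0f) * m_camera_rotation;
            m_x += cosf(yaw) * amount;
            m_z -= sinf(yaw) * amount;   // note the sign change relative to the original
        }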

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level, however I need to make sure they won't spawn inside another object (walls, etc.). Here's the code I'm currently using; it's based on the Physics.CheckSphere() function and runs in OnLevelWasLoaded(). It spawns the items perfectly fine, but sometimes items spawn partway in walls, and sometimes items will spawn outside of the spawn box range (no clue why it does that).

        //This is what randomly generates all the items.
        void SpawnItems ()
        {
            if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo")
                return;

            //The bottom corner of the box we want to spawn items in.
            Vector3 spawnBoxBot = Vector3.zero;
            //Top corner.
            Vector3 spawnBoxTop = Vector3.zero;

            //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn.
            if (Application.loadedLevelName == "dungeonScene")
            {
                spawnBoxBot = new Vector3 (8.857f, 0, 9.06f);
                spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15);
                itemSpawn = dungeonSpawn;
            }

            //Spawn all the items.
            for (i = 0; i != itemSpawn.Length; i ++)
            {
                spawnedItem = null;
                //Zeroes out our random location
                Vector3 randomLocation = Vector3.zero;
                //Gets the meshfilter of the item we'll be spawning
                MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>();
                //Gets its bounds (see how big it is)
                Bounds bounds = mf.sharedMesh.bounds;
                //Get its radius
                float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f;
                //Set which layer is the no walls layer
                var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer");
                //Use that layer as your layermask.
                LayerMask layerMask = ~(1 << NoWallsLayer);

                //If we're in the dungeon, certain items need to spawn on certain halves.
                if (Application.loadedLevelName == "dungeonScene")
                {
                    if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio")
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f));
                    else
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z));
                }
                //Otherwise just spawn them in the box.
                else
                    randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z));

                //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instantiates it. Otherwise we have to repeat the whole process again.
                if (Physics.CheckSphere(randomLocation, maxRadius, layerMask))
                    spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
                else
                    i --;

                //If we spawned something, set its name to what it's supposed to be. Removes the (clone) addon.
                if (spawnedItem != null)
                    spawnedItem.name = itemSpawn[i].name;
            }
        }

    What I'm asking for is if you know what's going wrong with this code such that it would spawn stuff in walls. Or, if you could provide me with links/code/ideas for a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
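
    Two things worth double-checking (observations offered as a sketch, assuming Unity's documented behaviour of Physics.CheckSphere, which returns true when any collider overlaps the sphere): the spawn currently happens when the sphere is blocked rather than when it is clear, and the layer mask shifts an already-shifted value. A corrected fragment of the placement test might look like:

        // Sketch of the placement test with the two suspect lines adjusted;
        // the other variables are the ones from the method above.
        int noWallsLayer = LayerMask.NameToLayer("NoWallsLayer");
        int layerMask = ~(1 << noWallsLayer);        // shift the layer index only once

        // CheckSphere returns true when something overlaps the sphere,
        // so spawn only when it returns false (the spot is clear).
        if (!Physics.CheckSphere(randomLocation, maxRadius, layerMask))
            spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
        else
            i--;   // try this item again with a new random location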

  • Collision Resolution

    - by ultifinitus
    Hey all, I'm making a simple side-scrolling game, and I would appreciate some input! My collision detection system is a simple bounding box detection, so it's really easy to implement. However my collision resolution is ridiculous! Currently I have a little formula like this:

        if (colliding(firstObject, secondObject))
            firstObject.resolve_collision(yAxisOffset);
        if (colliding(firstObject, secondObject))
            firstObject.resolve_collision(xAxisOffset);

    where yAxisOffset is only set if the first object's previous y position was outside the second object's collision frame, and likewise for xAxisOffset. Now this is working great, in general. However there is a single problem. When I have a stack of objects and I push the first object against that stack, the first object gets "stuck" on the stack. What I think is happening is that the object's collision system checks and resolves collisions based on creation time, so if I check one axis, then the other, the object will "sink" directly along the checked axis. This sinking action causes the collision detection routine to think there's a gap between our position and the other object's position, and when I finally check the object that I've already sunk into, my object's position is resolved back to its original position... All this is great, and I'm sure if I bang my head against a wall long enough I'll come up with a working algorithm, but I'd rather not =). So what in the heck do you think I should do? How could I change my collision resolution system to fix this? Here's the program (temporary link, not sure how long it'll last) (notes: arrow keys to navigate, click to drop block, x to jump). I'd appreciate any help you can offer!
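
    A common alternative (a sketch only, not from the question; the AABB fields are hypothetical) is to resolve each colliding pair once, along the axis of least penetration, instead of testing the y axis and then the x axis independently:

        // Sketch: minimal-penetration AABB resolution for one pair of boxes.
        #include <algorithm>

        struct AABB { float x, y, w, h; };   // hypothetical box: top-left position + size

        void resolve(AABB& a, const AABB& b)
        {
            // Overlap on each axis (positive when the boxes intersect).
            float overlapX = std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x);
            float overlapY = std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y);
            if (overlapX <= 0.0f || overlapY <= 0.0f)
                return;                                   // not actually colliding

            if (overlapX < overlapY)                      // push out along the shallower axis
                a.x += (a.x + a.w * 0.5f < b.x + b.w * 0.5f) ? -overlapX : overlapX;
            else
                a.y += (a.y + a.h * 0.5f < b.y + b.h * 0.5f) ? -overlapY : overlapY;
        }

    Because only the shallower axis is corrected, pushing an object sideways into a stack no longer makes it sink vertically into the boxes it is resting on.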

  • Simulating an object floating on water

    - by Aaron M
    I'm working on a top down fishing game. I want to implement some physics and collision detection regarding the boat moving around the lake. I would like for be able to implement thrust from either the main motor or trolling motor, the effect of wind on the object, and the drag of the water on the object. I've been looking at the farseer physics engine, but not having any experience using a physics engine, I am not quite sure that farseer is suitable for this type of thing(Most of the demos seem to be the application of gravity to a vertical top/down type model). Would the farseer engine be suitable? or would a different engine be more suitable?
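
    Whether or not Farseer ends up being used, the forces described (motor thrust, wind, water drag) are simple enough to prototype by hand in a top-down view; a sketch (all names and constants are illustrative, not from the question):

        // Sketch: per-frame boat integration with thrust, wind and linear water drag.
        Vector2 velocity = Vector2.Zero;
        Vector2 boatPosition = Vector2.Zero;   // hypothetical boat state

        void UpdateBoat(float dt, Vector2 thrust, Vector2 wind)
        {
            const float mass = 200f;             // kg, illustrative
            const float dragCoefficient = 1.5f;  // higher = the water "grips" more

            Vector2 drag = -dragCoefficient * velocity;          // opposes current motion
            Vector2 acceleration = (thrust + wind + drag) / mass;

            velocity += acceleration * dt;
            boatPosition += velocity * dt;
        }

    A full engine like Farseer mainly starts paying off once collision response with docks, shorelines and other boats is needed on top of these forces.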

  • What algorithm to use to fill a KenKen square board with cages?

    - by JimmyBoh
    I am working on recreating KenKen, a popular math puzzle involving a blank grid that is divided into "cages". Each cage is just a collection of adjacent squares and has a clue which is generally a number and an operand, shown below: What type of algorithm would be best to fill the square with cages? Assume the maximum number of cells per cage would be 3 and the board is 4x4 in size, like in the example above.
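
    One simple way to carve the board (a sketch, not a known KenKen generator; the names are illustrative): grow each cage from the first unassigned cell by randomly absorbing adjacent unassigned cells until a random target size of 1 to 3 is reached. Clues can then be computed per cage from a solved grid.

        // Sketch: carve a 4x4 board into random cages of 1-3 cells by growing
        // each new cage from the first cell that is not yet in a cage.
        #include <cstdlib>
        #include <vector>

        const int N = 4;

        std::vector<std::vector<int>> makeCages()
        {
            std::vector<std::vector<int>> cage(N, std::vector<int>(N, -1));
            int nextId = 0;
            for (int r = 0; r < N; ++r)
                for (int c = 0; c < N; ++c)
                {
                    if (cage[r][c] != -1) continue;           // already in a cage
                    int id = nextId++;
                    cage[r][c] = id;
                    int targetSize = 1 + std::rand() % 3;      // 1..3 cells
                    int cr = r, cc = c;
                    for (int size = 1; size < targetSize; ++size)
                    {
                        // Try to grow into a random unassigned neighbour of the last cell.
                        const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
                        int start = std::rand() % 4;
                        bool grown = false;
                        for (int k = 0; k < 4 && !grown; ++k)
                        {
                            int nr = cr + dr[(start + k) % 4], nc = cc + dc[(start + k) % 4];
                            if (nr >= 0 && nr < N && nc >= 0 && nc < N && cage[nr][nc] == -1)
                            {
                                cage[nr][nc] = id;
                                cr = nr; cc = nc;
                                grown = true;
                            }
                        }
                        if (!grown) break;                     // boxed in; cage stays smaller
                    }
                }
            return cage;                                       // cage[r][c] = cage id
        }

    Single-cell cages can appear when a cage gets boxed in; if that is undesirable they can be merged into a neighbouring cage in a post-pass.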

  • Game Center: Leaderboard score inconsistencies

    - by Hasyimi Bahrudin
    Background

    I'm currently developing a simple library that mirrors Game Center's functionality locally. Basically, this library is a system that manages achievements and leaderboards, and optionally syncs them with Game Center. So, if the game is not GC enabled, the game will still have achievements and leaderboards (stored inside a plist). But of course, the leaderboards will then only contain the local player's scores (which is kind of useless, I know :P).

    Problem

    Currently I have coded both the achievements and leaderboards subsystems. The achievements subsystem has already been tested and it works. I'm currently testing the leaderboards subsystem using multiple test user accounts. I loaded the test app on a device and on the simulator, both logged in with 2 different user accounts. Then I performed these steps:

        1. I first used the device to upload a score.
        2. Then I ran the simulator, and the score submitted by the user on the device is shown. Which is cool.
        3. Then I used the simulator to upload a score. But on the device, still, only one score is listed.

    I checked in the Game Center app (to see if the bug lies within my code), and I got the same thing. Under "All players", there is only one score on the device, but there are 2 scores on the simulator. I wanted to make sure that the simulator is not causing this, so I swapped the users on the device and the simulator, and the result is still the same. In other words, the first user is oblivious of the second user's score, but the second user can see the first user's score. Then I tried with a third user. The result: the third user can only see the scores of the first user and himself. The second user still sees the scores of the first user and himself. The first user only sees his own score. Now here comes the weird part. I then made the first user and the second user befriend each other. The result: under "Friends", the first user can see the second user's score, but under "All Players", the first user's score is the only one listed.

    Screenshots

    The first user sees this: The second user sees this: So, is this a normal thing when using sandboxed GC accounts? Is this behavior documented somewhere by Apple?

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What should the texture coordinates for it be? I am using GL_TEXTURE_3D as the target. I tried u, v coordinates the same as the 2D texture coordinates, with the z component ranging from 0 to depth for each face, but that goes wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers to apply to each face of the cube, and not 1 layer to 1 face of the cube.
