Search Results

Search found 31839 results on 1274 pages for 'plugin development'.


  • Ouya / Android: bitwise button mapping

    - by scorvi
    I am programming a game with the Gameplay3D engine, but the Android side has no gamepad support, which is what I need to port my game to Ouya. So I implemented simple gamepad support of my own, for up to 2 gamepads. My problem is that I store the button states in a float array for each gamepad, but the Gameplay3D engine stores them in an unsigned int _buttons variable, which is set with bitwise operations, and I have no clue how to translate my array into this.
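
    A minimal sketch of the translation, assuming button index i maps to bit i and that anything above a press threshold counts as down (shown in C#; the same bit operations work verbatim in the engine's C++):

        static class GamepadBits
        {
            // Pack per-button float states into one bitmask: bit i <=> buttons[i] pressed.
            public static uint Pack(float[] buttons, float threshold)
            {
                uint bits = 0;
                for (int i = 0; i < buttons.Length && i < 32; i++)
                    if (buttons[i] > threshold)
                        bits |= 1u << i;                // set bit i
                return bits;
            }

            public static bool IsDown(uint bits, int button)
            {
                return (bits & (1u << button)) != 0;    // test bit i
            }
        }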

    Read the article

  • From simple physics with a ball, to a more complicated shape

    - by Maximus
    Hello fellow game devs and stack overflowers... I recently made the transition from OpenGL ES 1.1 to 2.0 (on Android via the NDK) and things are going well so far. I'm working on a dice rolling application (gaming dice of up to 20 sides, not just the regular 6-sided die) as a way to learn more about how physics is implemented in a gaming environment. I've explored existing engine options (such as Bullet) and I don't think I need anything quite so sophisticated. I've found several tutorials that handle the general physics: initial trajectory, velocity, angle of contact, reflection angle, etc. I'm confident that I'd be able to implement ball-like behavior without much trouble. My question concerns making the interaction of the die shape with another surface more "realistic". For example, the die strikes the floor at such an angle that only one corner makes contact with the floor. In my mind, the center of gravity of the object would play a part in determining how the die bounces away, possibly even spinning it faster, etc., but I am not sure what the actual math involved is. Are there any recommended resources for getting into this level of detail? Initial searches haven't turned up much... Thanks to everyone in the community, -Jeremiah
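
    The math the question circles around is the offset of the contact point from the center of mass: a collision impulse applied off-center changes angular as well as linear velocity, which is what makes a corner hit spin the die. A hedged sketch with a scalar moment of inertia (a real die uses an inertia tensor):

        using System.Numerics;

        static class ContactResponse
        {
            // j: collision impulse; r: contact point minus center of mass.
            // A scalar moment of inertia is a simplification for the sketch.
            public static void Apply(ref Vector3 velocity, ref Vector3 angularVelocity,
                                     Vector3 j, Vector3 r, float mass, float inertia)
            {
                velocity += j / mass;                              // linear response
                angularVelocity += Vector3.Cross(r, j) / inertia;  // off-center hit spins the die
            }
        }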

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as lists of specified characters, scenes and music. I also use JAXB in combination with standard ZIP compression/decompression to store a list of extra data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a master list containing the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML worked fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL instead. Originally I planned to simply convert everything from XML to JSON, but I now think MySQL might be the better move. Can anyone lay out the key differences, and the pros and cons of each? I guess I'm looking for the way to store the data that is most convenient and easiest to operate on. The data is mostly primitives and strings, and the only list of values I have I can just concatenate into a single field and parse later.

    Read the article

  • GUI device for throwing a ball

    - by Fredrik Johansson
    The hero has a ball, which shall be thrown with accuracy in a court on iPhone/iPad. The player is seen from above, in a 2D view. In game play, the player's reach is between 1/15 and 1/6 of the height of the iPhone screen. The player will run, try to outmaneuver his opponent, and then throw the ball at a specific location, which is guarded by the opponent (also shown on the screen). The player is controlled by a joystick, and that works OK, but how should I control the throw? Maybe someone can propose a third control method? I've tried the following two approaches.

    Joystick: The hero has a reach of 1 meter, and this reach is marked with a semi-opaque circle around the player. The ball can be moved by a joystick. When the joystick is moved south, the ball is moved south within the reach circle; there is a direct coupling between the joystick and the position of the ball. I.e., when the joystick is moved all the way south, the ball is at the southernmost point of the player's reach. At each touch update the speed is calculated, and the Box2D ball position and ball speed are updated. NB: the ball will never be moved outside the reach as long as the player pushes the joystick. The ball is thrown by swiping the joystick to make the ball move, and then releasing the joystick. On release, the ball gets a smoothed version of the joystick's speed (see the sketch below).

    Joystick problem: Throwing accuracy is bad, because the joystick cannot be that big, and a small movement results in quite a large movement of the ball. If the user does not release before the joystick reaches its maximum end point, the ball stops, and when the user then releases the joystick the speed of the ball is zero. Bad...

    Touch pad: A force is applied to the ball by a swipe on a touch pad. The ball is released when the swipe ends, or when the ball is moved outside the player's reach. As there is no one-to-one mapping between the swipe and the ball position, precision can be improved: a large swipe can result in a small ball movement.

    Touch pad problem: A touch pad is less intuitive. Users do not seem to know what to do with it. Some tap the touch pad, and then the ball just falls to the ground. As there is no one-to-one mapping, the ball can be moved outside the reach, and then it will just fall to the ground. It's a bit hard to control the ball, especially if the player also moves.
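
    For the release speed in the joystick scheme, an exponential moving average over the last touch updates keeps one noisy final sample from ruining the throw. A minimal sketch; the smoothing factor is an assumption to tune:

        using System.Numerics;

        // Sketch: exponentially smoothed swipe velocity for the release throw.
        // Alpha (0..1) trades responsiveness for stability.
        class SwipeTracker
        {
            Vector2 smoothed;
            const float Alpha = 0.3f;

            public void OnTouchMoved(Vector2 delta, float dt)
            {
                if (dt <= 0f) return;
                Vector2 instant = delta / dt;                       // raw velocity this update
                smoothed = Vector2.Lerp(smoothed, instant, Alpha);  // keep a fraction of history
            }

            public Vector2 ReleaseVelocity() => smoothed;           // throw speed on release
        }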

    Read the article

  • effect and model vertex declaration compatibility

    - by Vodácek
    I have normal model drawing code. When I try to draw a model without UV coordinates I get this exception:

        System.InvalidOperationException: The current vertex declaration does not include
        all the elements required by the current vertex shader. TextureCoordinate0 is missing.
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.VerifyCanDraw(
               Boolean bUserPrimitives, Boolean bIndexedPrimitives)
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.DrawIndexedPrimitives(
               PrimitiveType primitiveType, Int32 baseVertex, Int32 minVertexIndex,
               Int32 numVertices, Int32 startIndex, Int32 primitiveCount)
           at Microsoft.Xna.Framework.Graphics.ModelMeshPart.Draw()
           at Microsoft.Xna.Framework.Graphics.ModelMesh.Draw()
           ...

    I know what causes the exception, but is it possible to avoid it? Is it possible to check a model against the current shader for vertex declaration compatibility before drawing it?
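
    XNA 4.0 exposes each mesh part's declaration, so the check is possible before drawing. A sketch, specific to the TextureCoordinate element that BasicEffect requires when a texture is assigned:

        using Microsoft.Xna.Framework.Graphics;

        static class ModelChecks
        {
            // Returns false if any mesh part lacks texture coordinates,
            // i.e. the part that would trigger the exception above.
            public static bool HasTextureCoordinates(Model model)
            {
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (ModelMeshPart part in mesh.MeshParts)
                    {
                        VertexElement[] elements =
                            part.VertexBuffer.VertexDeclaration.GetVertexElements();
                        bool found = false;
                        foreach (VertexElement e in elements)
                            if (e.VertexElementUsage == VertexElementUsage.TextureCoordinate)
                                found = true;
                        if (!found) return false;
                    }
                }
                return true;
            }
        }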

    Read the article

  • Can I run into legal issues with random names?

    - by Nathan Sabruka
    I'm currently building a game whose NPCs are going to be assigned a random gender and a random name for that gender. To do this I will be using a "database" of names (actually a text file with tuples). There will also be a list of last names, which are appended to the first names, also at random. My question is the following: suppose one such random name is "George Bush", and this person has been randomly assigned the job of president. As you can see, this could easily be seen as having been "copied" from a real-life person. The main issue is this: names will be randomly generated, yes, but the seed for random-number generation will be constant. In other words, the name of an NPC is randomly generated, i.e. I don't choose it, but it is the same for every player. Could this get me in trouble? We cannot vet all possible names, since the number of generated NPCs is potentially limitless (new NPCs are created whenever needed).

    Read the article

  • GameplayScreen does not contain a definition for GraphicsDevice

    - by Dave Voyles
    Long story short: I'm trying to integrate my game with Microsoft's Game State Management (GSM) sample. In doing so I've run into some errors, and the latest one is in the title; it keeps me from displaying my HUD. Previously I had much of my code in my Game.cs class, but the GSM sample keeps a bit of it in Game1 and most of what gets drawn for the main screen in the GameplayScreen class, and that is what is causing my confusion. I've created an instance of the GameplayScreen class to be used in the HUD class (as you can see below). Before integrating with the GSM sample I created an instance of my Game class, and all worked fine. It seems that I need to define my graphics device somewhere, but I am not sure where exactly. I've left some code below to help you understand.

        public class GameStateManagementGame : Microsoft.Xna.Framework.Game
        {
            #region Fields

            GraphicsDeviceManager graphics;
            ScreenManager screenManager;

            // Creates a new instance, which is used in the HUD class
            public static Game Instance;

            // By preloading any assets used by UI rendering, we avoid framerate glitches
            // when they suddenly need to be loaded in the middle of a menu transition.
            static readonly string[] preloadAssets =
            {
                "gradient",
            };

            #endregion

            #region Initialization

            /// <summary>
            /// The main game constructor.
            /// </summary>
            public GameStateManagementGame()
            {
                Content.RootDirectory = "Content";

                graphics = new GraphicsDeviceManager(this);
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
                graphics.IsFullScreen = false;
                graphics.ApplyChanges();

                // Create the screen manager component.
                screenManager = new ScreenManager(this);
                Components.Add(screenManager);

                // Activate the first screens.
                screenManager.AddScreen(new BackgroundScreen(), null);
                //screenManager.AddScreen(new MainMenuScreen(), null);
                screenManager.AddScreen(new PressStartScreen(), null);
            }

        namespace Pong
        {
            public class HUD
            {
                public void Update(GameTime gameTime)
                {
                    // Used in the Draw method
                    titleSafeRectangle = new Rectangle(
                        GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.X,
                        GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Y,
                        GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Width,
                        GameplayScreen.Instance.GraphicsDevice.Viewport.TitleSafeArea.Height);
                }
            }
        }

        class GameplayScreen : GameScreen
        {
            #region Fields

            ContentManager content;
            public static GameStates gamestate;
            private GraphicsDeviceManager graphics;
            public int screenWidth;
            public int screenHeight;
            private Texture2D backgroundTexture;
            private SpriteBatch spriteBatch;
            private Menu menu;
            private SpriteFont arial;
            private HUD hud;
            Animation player;

            // Creates a new instance, which is used in the HUD class
            public static GameplayScreen Instance;

            public GameplayScreen()
            {
                TransitionOnTime = TimeSpan.FromSeconds(1.5);
                TransitionOffTime = TimeSpan.FromSeconds(0.5);
            }

            protected void Initialize()
            {
                lastScored = false;
                menu = new Menu();
                resetTimer = 0;
                resetTimerInUse = true;
                ball = new Ball(content, new Vector2(screenWidth, screenHeight));
                SetUpMulti();
                input = new Input();
                hud = new HUD();

                // Places the powerup animation inside of the surrounding box
                // Needs to be cleaned up, instead of using hard pixel values
                player = new Animation(content.Load<Texture2D>(@"gfx/powerupSpriteSheet"),
                    new Vector2(103, 44), 64, 64, 4, 5);

                // Used by the Powerups
                random = new Random();
                vec = new Vector2(100, 50);
                vec2 = new Vector2(100, 100);
                promptVec = new Vector2(50, 25);
                timer = 10000.0f; // Starting value for the cooldown for the powerup timer
                timerVector = new Vector2(10, 10);

                //JEP - one time creation of powerup objects
                playerOnePowerup = new Powerup();
                playerOnePowerup.Activated += PowerupActivated;
                playerOnePowerup.Deactivated += PowerupDeactivated;
                playerTwoPowerup = new Powerup();
                playerTwoPowerup.Activated += PowerupActivated;
                playerTwoPowerup.Deactivated += PowerupDeactivated;

                //JEP - moved from events since these only need set once
                activatedVec = new Vector2(100, 125);
                deactivatedVec = new Vector2(100, 150);
                powerupReady = false;
            }
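
    In the Game State Management sample, ScreenManager is a DrawableGameComponent, so every GameScreen already reaches the device through its ScreenManager property; no static Instance is needed. A hedged fragment of how GameplayScreen and HUD might be rewired (assuming HUD gains a constructor parameter):

        // Inside GameplayScreen: ScreenManager exposes the GraphicsDevice, so pass it on.
        public override void LoadContent()
        {
            GraphicsDevice device = ScreenManager.GraphicsDevice;
            hud = new HUD(device);
        }

        // Inside HUD: hold the device (or just its viewport) instead of a screen instance.
        private GraphicsDevice device;

        public HUD(GraphicsDevice device)
        {
            this.device = device;
        }

        public void Update(GameTime gameTime)
        {
            // TitleSafeArea is already a Rectangle, so no field-by-field copy is needed.
            titleSafeRectangle = device.Viewport.TitleSafeArea;
        }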

    Read the article

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it:

        void Foo(int first, int second = 10) // this line is marked as wrong inside the Mono IDE

    Anyway, if I ignore the error message from Mono and run the script in Unity, it works without any error in the Unity console. Could anyone clarify this issue a little? In particular: Are default parameters allowed inside C# scripts? If yes, are they supported on all platforms? Why does Mono complain about them if they actually work?

    Read the article

  • What collision detection approach for a top-down car game?

    - by nathan
    I have a quite advanced top-down car game and I use masks to detect collisions. I have the actual designed track (what the player sees) with fancy graphics etc., and two other pictures I use as masks for collision detection. Each mask has only two colors, white and black, and each frame I check whether a pixel of the car collides with a black pixel of the masks. This approach works, of course, but it's not really flexible: whenever I want to change the look of a track, I have to redraw the mask, and that's a real pain. What is the general approach for this kind of game? How can I improve the flexibility of such a mask-based approach?
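
    For reference, the test described boils down to the sketch below; carPixels would be the car's solid pixels already transformed into mask space, and the names are illustrative:

        using System.Collections.Generic;

        static class MaskCollision
        {
            // trackMask[x, y] is true where the mask is black (a wall).
            public static bool CarHitsWall(bool[,] trackMask,
                                           IEnumerable<(int X, int Y)> carPixels)
            {
                int w = trackMask.GetLength(0), h = trackMask.GetLength(1);
                foreach (var p in carPixels)
                {
                    if (p.X < 0 || p.Y < 0 || p.X >= w || p.Y >= h)
                        continue;            // outside the mask: treat as open (or as wall)
                    if (trackMask[p.X, p.Y])
                        return true;         // a car pixel overlaps a black mask pixel
                }
                return false;
            }
        }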

    Read the article

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supply the transformations that take the bind pose to the pose in the animated frame. As a small example:

        Animation {
            {Armature_001_Bone}
            AnimationKey {
                2;   // Position
                121; // number of frames
                0;3; 0.000000, 0.000000, 0.000000;;,
                1;3; 0.000000, 0.000000, 0.005524;;,
                2;3; 0.000000, 0.000000, 0.022217;;,
                ...
            }
            AnimationKey {
                0;   // Quaternion Rotation
                121;
                0;4; -0.707107, 0.707107, 0.000000, 0.000000;;,
                1;4; -0.697332, 0.697332, 0.015710, 0.015710;;,
                2;4; -0.684805, 0.684805, 0.035442, 0.035442;;,
                ...
            }
            AnimationKey {
                1;   // Scale
                121;
                0;3; 1.000000, 1.000000, 1.000000;;,
                1;3; 1.000000, 1.000000, 1.000000;;,
                2;3; 1.000000, 1.000000, 1.000000;;,
                ...
            }
        }

    So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix from them (call it Transform_A) and apply that matrix to the vertices controlled by Armature_001_Bone at their weights. So I'd pass Transform_A to my shader and transform the vertex, something like:

        vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x;

    where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading the mesh vertices, I transform them by the rootTransform and the meshTransform; this ensures they're oriented and scaled correctly for viewing the bind pose. The problem is that when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies; still no dice. Basically I end up with twisted models. It should also be noted that I'm working in OpenGL, so any matrix transposes that might need to be applied should be considered. What data do I need, and how do I combine it to apply .x animations to models?
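
    The pieces that usually go missing are the compose order, the parent chain, and the inverse bind pose: the animation keys pose the bone in armature space, so the vertex must first be carried from bind space into bone space. A hedged sketch of the composition (System.Numerics shares the row-vector convention of D3D/.x; transpose once when uploading to OpenGL):

        using System.Numerics;

        static class Skinning
        {
            // Local pose from one frame's keys: scale, then rotation, then translation
            // (row-vector convention, so the product reads left to right).
            public static Matrix4x4 LocalPose(Vector3 s, Quaternion q, Vector3 t)
            {
                return Matrix4x4.CreateScale(s)
                     * Matrix4x4.CreateFromQuaternion(q)
                     * Matrix4x4.CreateTranslation(t);
            }

            // Walk the hierarchy: a bone's world pose is its local pose times its parent's.
            public static Matrix4x4 WorldPose(Matrix4x4 local, Matrix4x4? parentWorld)
            {
                return parentWorld.HasValue ? local * parentWorld.Value : local;
            }

            // The matrix the shader needs: inverse bind pose first, so the vertex is
            // moved relative to where the bone sat in the bind position.
            public static Matrix4x4 SkinMatrix(Matrix4x4 worldPose, Matrix4x4 inverseBindPose)
            {
                return inverseBindPose * worldPose;
            }
        }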

    Read the article

  • Fixed timestep and interpolation question

    - by Eric
    I'm following Glenn Fiedler's excellent Fix Your Timestep! tutorial to step my 2D game. The problem I'm facing is the interpolation phase at the end. My game has a tween function which lets me tween properties of my game entities: scale, shear, position, color, rotation, etc. I'm curious how I'd interpolate these values, since there are a lot of them. My first thought is to keep a previous value of every property (colorPrev, scalePrev, etc.) and interpolate between those. Is this the correct method? To interpolate my characters I use their velocity: renderPosition = position + (velocity * interpolation), but I cannot apply that to color, for example. So what is the desired method to interpolate the various properties of an entity? Is there a rule of thumb to use?
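
    Keeping a previous copy of every interpolated property is indeed the standard generalization: the velocity formula is just interpolation in disguise, and state without a velocity is blended directly between the last two simulation snapshots. A minimal sketch:

        using System.Numerics;

        struct EntityState
        {
            public Vector2 Position, Scale;
            public Vector4 Color;
            public float Rotation;
        }

        static class Interp
        {
            // alpha = accumulator / dt, the leftover fraction of a fixed step.
            public static EntityState Blend(EntityState prev, EntityState curr, float alpha)
            {
                return new EntityState
                {
                    Position = Vector2.Lerp(prev.Position, curr.Position, alpha),
                    Scale    = Vector2.Lerp(prev.Scale, curr.Scale, alpha),
                    Color    = Vector4.Lerp(prev.Color, curr.Color, alpha),
                    // naive angle lerp; wrap angles (or slerp quaternions) for large turns
                    Rotation = prev.Rotation + (curr.Rotation - prev.Rotation) * alpha,
                };
            }
        }

        // Each fixed update: prev = curr, then simulate into curr.
        // Each render: draw Blend(prev, curr, accumulator / dt).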

    Read the article

  • Animation file format

    - by Paul
    I'm trying to make a simple 2D animation file format. It'll be very rudimentary: an XML file containing some parameters (such as frame duration) and metadata, plus some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed into a single file. How do you suggest I do that? What libraries would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform-dependent library, if possible.
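
    A ZIP container is the usual answer; portable C++ options include minizip, libzip and PhysicsFS. The shape of the idea, sketched in C# with System.IO.Compression purely for brevity (the entry names are assumptions):

        using System.IO;
        using System.IO.Compression;

        static class AnimationReader
        {
            // One ZIP = one XML manifest plus the frame images.
            public static void Read(string path)
            {
                using (ZipArchive zip = ZipFile.OpenRead(path))
                {
                    ZipArchiveEntry manifest = zip.GetEntry("animation.xml"); // assumed name
                    using (Stream s = manifest.Open())
                    {
                        // parse frame durations and frame file names from the XML here
                    }
                    foreach (ZipArchiveEntry entry in zip.Entries)
                        if (entry.FullName.EndsWith(".png"))
                        {
                            // decode each frame image from entry.Open()
                        }
                }
            }
        }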

    Read the article

  • convert orientation vec3 to a rotation matrix

    - by lapin
    I've got a normalized vec3 that represents an orientation. Each frame of animation, an object's orientation changes slightly, so I add a delta vector to the orientation vector and then normalize to find the new orientation. I'd like to convert the vec3 that represents an orientation into a rotation matrix that I can use to orient my object. If it helps, my object is a cone, and I'd like to rotate it about the pointy end, not about its center :) PS: I know I should use quaternions because of the gimbal lock problem. If someone can explain quats too, that'd be great :)
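
    A single direction doesn't pin down roll, so the usual recipe picks an up reference, builds an orthonormal basis with two cross products, and writes the basis vectors into the matrix; rotating about the tip is a translate-rotate-translate sandwich. A sketch:

        using System.Numerics;

        static class OrientationMatrix
        {
            // 'up' is an arbitrary reference; re-pick it if forward is nearly parallel
            // to it. Roll around 'forward' is simply not defined by one vec3.
            public static Matrix4x4 FromForward(Vector3 forward, Vector3 up)
            {
                Vector3 f = Vector3.Normalize(forward);
                Vector3 r = Vector3.Normalize(Vector3.Cross(up, f)); // right
                Vector3 u = Vector3.Cross(f, r);                     // corrected up
                return new Matrix4x4(
                    r.X, r.Y, r.Z, 0,
                    u.X, u.Y, u.Z, 0,
                    f.X, f.Y, f.Z, 0,
                    0,   0,   0,   1); // rows = basis vectors (row-vector convention)
            }

            // Pivot about the cone's tip instead of its center:
            public static Matrix4x4 AboutTip(Matrix4x4 rotation, Vector3 tip)
            {
                return Matrix4x4.CreateTranslation(-tip)
                     * rotation
                     * Matrix4x4.CreateTranslation(tip);
            }
        }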

    Read the article

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months; it has a quadtree LOD system that splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches them. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal (RMF) algorithm. For now I can only displace the vertices of the patches directly using the output of the RMF; I don't understand how to generate height maps in which the vertices of a terrain patch map to pixels. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating the resulting position into (u, v) coordinates, determining the pixel position directly from the (u, v) coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but a few other things may further complicate my problem. First of all, I intend to use height maps with a pixel resolution of 192x192 while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I have no vertices to sample the RMF with. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just displace vertices directly, as I currently do). I am pretty much following this paper very closely. This, essentially, is the part I am having trouble with:

        Using the cube-to-sphere mapping and the ridged multifractal algorithm previously
        described, a normalized height value ([0, 1]) is calculated. Using this height
        value, the terrain position is calculated and stored in the first three channels
        of the positionmap (RGB) – this will be used to calculate the normalmap. The
        fourth channel (A) is used to store the height value itself, to be used in the
        heightmap.

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels, if vertices alone cannot be used.
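
    The suspected approach is the usual one, with one correction: iterate over the 192x192 pixels, not the 16x16 vertices. Each pixel's (u, v) inside the patch maps to a point on the cube face, which is normalized onto the unit sphere and fed to the noise function; the mesh vertices never enter into it. A sketch for a single face (axis permutations handle the other five), with the RMF passed in as a stand-in delegate:

        using System;

        static class PatchHeightmap
        {
            // ox, oy, patchSize locate the patch on its cube face in [-1, 1]^2.
            public static float[,] Build(Func<double, double, double, double> rmf,
                                         double ox, double oy, double patchSize,
                                         int res = 192)
            {
                var map = new float[res, res];
                for (int y = 0; y < res; y++)
                    for (int x = 0; x < res; x++)
                    {
                        double u = ox + patchSize * x / (res - 1); // cube-face coords
                        double v = oy + patchSize * y / (res - 1);
                        double px = u, py = v, pz = 1.0;           // the +Z face here
                        double len = Math.Sqrt(px * px + py * py + pz * pz);
                        // every pixel gets its own sphere position for the noise
                        map[y, x] = (float)rmf(px / len, py / len, pz / len);
                    }
                return map;
            }
        }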

    Read the article

  • Colorize a texture with a given color

    - by Pacha
    I have a texture and I want to "colorize" it with a given color, let's say cyan (#00ffff) or purple (#800080). What I want to do is take all the pixel values from the texture, strip the color but keep the "brightness", and apply the desired color's hue and saturation instead. There is a tool in GIMP that does this, called Colorize (Colors -> Colorize... while editing); I made an example below. This will all be done in a shader (GLSL), although it is probably a general algorithm.
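
    For reference, the per-pixel math behind GIMP-style colorize, sketched in C# (the same arithmetic ports line for line to GLSL): keep each pixel's lightness, impose the chosen color's hue (degrees, 0..360) and saturation.

        static class ColorizeSketch
        {
            // Keep the pixel's lightness; impose the target hue and saturation.
            public static (float R, float G, float B) Colorize(
                float r, float g, float b, float targetHue, float targetSat)
            {
                float max = System.Math.Max(r, System.Math.Max(g, b));
                float min = System.Math.Min(r, System.Math.Min(g, b));
                float lightness = (max + min) * 0.5f; // the only thing kept from the pixel
                return HslToRgb(targetHue, targetSat, lightness);
            }

            static (float R, float G, float B) HslToRgb(float h, float s, float l)
            {
                float c = (1f - System.Math.Abs(2f * l - 1f)) * s;
                float x = c * (1f - System.Math.Abs(h / 60f % 2f - 1f));
                float m = l - c / 2f;
                float r, g, b;
                if      (h < 60f)  { r = c;  g = x;  b = 0f; }
                else if (h < 120f) { r = x;  g = c;  b = 0f; }
                else if (h < 180f) { r = 0f; g = c;  b = x;  }
                else if (h < 240f) { r = 0f; g = x;  b = c;  }
                else if (h < 300f) { r = x;  g = 0f; b = c;  }
                else               { r = c;  g = 0f; b = x;  }
                return (r + m, g + m, b + m);
            }
        }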

    Read the article

  • Component-based design: handling objects interaction

    - by Milo
    I'm not sure exactly how objects do things to other objects in a component-based design. Say I have an Obj class and I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could another object then not only move this one but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design you might just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that gives a better grasp of component-driven design and how to do basic things would be really helpful.
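
    One common answer, sketched below: the acting object never touches the other's data directly; it looks up the target's component by type and calls a method the component owns. The names are illustrative, not from any particular engine:

        using System;
        using System.Collections.Generic;

        class Obj
        {
            readonly Dictionary<Type, object> components = new Dictionary<Type, object>();
            public void Add<T>(T c) => components[typeof(T)] = c;
            public T Get<T>() => (T)components[typeof(T)]; // throws if missing
        }

        class Physics
        {
            public float Vx, Vy, Vz;
            public void ApplyForce(float x, float y, float z)
            {
                Vx += x; Vy += y; Vz += z;
            }
        }

        // usage: the "emitForceOn" of an entity design becomes
        // obj2.Get<Physics>().ApplyForce(5.0f, 0.0f, 0.0f);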

    Read the article

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help: I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me -- http://tinypic.com/r/fnh440/6 Here is a minimal example showing my problem (the ship model that this example uses is from the chase camera sample on this site -- http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera -- in case anyone wants to try it out ;)):

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Model model;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
                model = Content.Load<Model>("ship");
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect be in mesh.Effects)
                    {
                        be.EnableDefaultLighting();
                        be.FogEnabled = true;
                        be.FogColor = Color.CornflowerBlue.ToVector3();
                        be.FogStart = 10;
                        be.FogEnd = 30;
                    }
                }
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                // TODO: Add your drawing code here
                model.Draw(
                    Matrix.Identity * Matrix.CreateScale(0.01f)
                                    * Matrix.CreateRotationY(3 * MathHelper.PiOver4),
                    Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up),
                    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f / 9f, 1, 100));

                base.Draw(gameTime);
            }
        }

    Read the article

  • Achieving certain rendering styles

    - by milesmeow
    I'm trying to assess the difficulty of creating a rendering style that is more like the game Okami and the Quake mods (as shown on this page... search for 'okami', 'quake npr'). Here's a better page describing the Quake rendering mod. Can a game engine such as Unity be used and programmed to achieve these kinds of rendering styles? I'm doing research and am totally new to this, so any insight would help tremendously.

    Read the article

  • How to manage our own bots on the server?

    - by Nikolay Kuznetsov
    There is a game server, and people play in game rooms of 2, 3 or 4 players. When a client connects to the server, he can send a request specifying the number of people, or a range, he wants to play with; any one of these values is valid: {2-4, 2-3, 3-4, 2, 3, 4}. The server therefore maintains 3 separate queues, for game rooms of 2, 3 and 4 people, which we can denote #2, #3 and #4. It works the following way: if a client sends the request 3-4, two separate requests are added, to queues #3 and #4. If queue #3 now has 3 requests from different people, a game room with 3 players is created, and all other requests from those players are removed from all queues. Right now not many people are online simultaneously, so they request a game, wait for some time, and quit because the game does not start in a reasonable time. For that reason a simple bot has been developed as a start, and the server code needs to be patched to run bots whenever someone requests a game but no humans are online.

    Input: a request from a human: {2-4, 2-3, 3-4, 2, 3, 4}.
    Output: the number of bots to run, and the time each should wait before connecting, depending on the state of the queues.

    The problem is that I don't know how to manage the bots properly on the server.

    Example: #3 has 1 request and #4 has 1 request. A user requests {3,4}; the server can then add one bot to start a game of 3, or two bots to start a game of 4.

    Example: #3 has 1 request and #4 has 2 requests. A user requests {3,4}; in either case just one bot is needed, so the game with 4 players is preferable.
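
    For the bot count itself, one hedged way to make the decision in those examples: for each room size the request allows, the deficit is the room size minus the humans already queued minus the new player, and ties go to the bigger room, as in the second example. A sketch:

        using System.Collections.Generic;
        using System.Linq;

        static class BotPlanner
        {
            // queues[size] = humans currently waiting for a room of that size.
            // The new request itself counts as one human, hence the "- 1".
            public static (int RoomSize, int Bots) Plan(Dictionary<int, int> queues,
                                                        IEnumerable<int> allowedSizes)
            {
                return allowedSizes
                    .Select(size => (RoomSize: size,
                                     Bots: System.Math.Max(0, size - 1 - queues[size])))
                    .OrderBy(t => t.Bots)               // fewest bots first
                    .ThenByDescending(t => t.RoomSize)  // ties: prefer the bigger game
                    .First();
            }
        }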

    Read the article

  • How can I make the camera return to the beginning of the terrain when it reaches the end?

    - by wbaccari
    How can I make the camera return to the beginning of the terrain when it reaches the end? I tried using ICameraSceneNode::setPosition():

        if (camera->getPosition().X > 1200.f)
            camera->setPosition(vector3df(1.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().X < 0.f)
            camera->setPosition(vector3df(1199.f, 1550.f, camera->getPosition().Z));
        if (camera->getPosition().Z > 1200.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1.f));
        if (camera->getPosition().Z < 0.f)
            camera->setPosition(vector3df(camera->getPosition().X, 1550.f, 1199.f));

    It seems to work fine with a flat terrain (one shade of grey in the heightmap), but it starts to produce strange behavior as soon as I add some hills. Edit: The setPosition() call seems to perform a translation of the camera toward the new position; the camera therefore stops at the first obstacle it encounters on its way.

    Read the article

  • Calculating 2D (screen) coordinates from 3D positions in XNA 4.0

    - by NDraskovic
    I have a program that draws some items to the scene by loading their positions from a file. Now I want to place a Ray at the same location where each item is drawn. So my question is: how can I calculate the position of the ray (its 2D screen components) from the 3D coordinates of each particular item? The items don't move anywhere; once they are placed, they stay until the end of the program's execution. Thanks.
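
    XNA 4.0 has this built in: Viewport.Project runs a world-space position through the world, view and projection matrices and returns its screen-space position. A minimal sketch, assuming the same view/projection used to draw the items:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class ScreenMapping
        {
            public static Vector2 ToScreen(GraphicsDevice device, Vector3 worldPos,
                                           Matrix view, Matrix projection)
            {
                Vector3 p = device.Viewport.Project(worldPos, projection, view,
                                                    Matrix.Identity);
                return new Vector2(p.X, p.Y); // p.Z is depth, useful for culling checks
            }
        }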

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:

    1. When I import a single animation frame from my animation file to Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; newer frames seem to be affected by older ones, so you get slightly off positions/rotations. This is true both when assigning PoseBone.matrix and when assigning PoseBone.matrix_basis.

        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                # "animation_matrices" - a nested list of matrices read from a file
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index+1)
                bone_index += 1
            bone_index = 0

    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.

    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly, whether the bones had a hierarchy or none at all. In Blender 2.6 there are PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't; the API says it's in object space. So it should have worked, but doesn't. So I guess we need to calculate a local-space matrix from our armature-space animation matrices from the files. I tried multiplying PoseBone.matrix with PoseBone.parent.matrix.inverted(), in both possible orders, with no luck; still weird deformations.

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and solve the problem correctly with calculations only.

    Problem: Given a concave polygon, treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    If any of the end points are outside the polygon, they are not in line of sight.
    If both end points are inside the polygon and the segment from A to B crosses any edge of the polygon, they are not in line of sight.
    If both end points are inside the polygon and the segment from A to B does not cross any edge of the polygon, they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the segment from A to B is collinear with an edge but one of the end points is outside the polygon. One point of particular interest is the difference between cases 1 and 9. In both, the end points are vertices of the polygon and no edges are intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Case 7 was also pretty tricky, and I had to treat it as a special case which checks directly whether the two points are adjacent vertices of the polygon. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
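
    For the record, one well-known robust scheme handles cases 1 vs 9 (and most collinear cases) uniformly: collect every parameter t at which AB touches or crosses an edge, sort them, and require the midpoint of every resulting sub-segment to lie inside the polygon. A single midpoint can be fooled; one midpoint per slice cannot. A compact sketch with an epsilon tolerance:

        using System;
        using System.Collections.Generic;

        static class LineOfSightTest
        {
            const double Eps = 1e-9; // tolerance knob

            public static bool Check(double[] px, double[] py, // polygon vertices
                                     double ax, double ay, double bx, double by)
            {
                int n = px.Length;
                var ts = new List<double> { 0.0, 1.0 };
                double rx = bx - ax, ry = by - ay;
                for (int i = 0; i < n; i++)
                {
                    int j = (i + 1) % n;
                    double sx = px[j] - px[i], sy = py[j] - py[i];
                    double denom = rx * sy - ry * sx;
                    if (Math.Abs(denom) < Eps)
                        continue; // parallel/collinear edge: the midpoint tests handle it
                    double qpx = px[i] - ax, qpy = py[i] - ay;
                    double t = (qpx * sy - qpy * sx) / denom; // position along AB
                    double u = (qpx * ry - qpy * rx) / denom; // position along the edge
                    if (t > -Eps && t < 1 + Eps && u > -Eps && u < 1 + Eps)
                        ts.Add(Math.Max(0.0, Math.Min(1.0, t)));
                }
                ts.Sort();
                for (int k = 0; k + 1 < ts.Count; k++)
                {
                    if (ts[k + 1] - ts[k] < Eps) continue; // degenerate slice
                    double mid = (ts[k] + ts[k + 1]) * 0.5;
                    if (!Contains(px, py, ax + rx * mid, ay + ry * mid))
                        return false; // this slice leaves the polygon
                }
                return true;
            }

            // Standard even-odd ray cast. A production version would also snap points
            // within Eps of an edge onto the boundary and count them as inside.
            static bool Contains(double[] px, double[] py, double x, double y)
            {
                bool inside = false;
                for (int i = 0, j = px.Length - 1; i < px.Length; j = i++)
                    if ((py[i] > y) != (py[j] > y) &&
                        x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i])
                        inside = !inside;
                return inside;
            }
        }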

    Read the article

  • how to add a water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, the top 3/4 of the screen; the remaining 1/4 would be a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:

    render the given texture to another texture, called mirror texture (maybe FBOs can help me?)
    invert the mirror texture (scale it by -1 along Y)
    render the mirror texture at height = 3/4 of the screen
    add some sense of noise to it, OR, using a pixel shader and time, put pixel.z = sin(time) to make it wavy

    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone please tell me whether FrameBuffer Objects would be the right tool here? Thanks
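
    The approach is sound, and an FBO is exactly the right tool for the render-to-texture step. For what it's worth, here is the same pipeline sketched in C#/XNA terms (a RenderTarget2D standing in for the FBO, SpriteEffects.FlipVertically for the -1 Y scale); 'waveEffect' and its 'time' parameter are assumptions, and the wave itself is the one-line distortion in the comment, which belongs in the fragment shader:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class WaterReflection
        {
            // 'scene' is assumed already rendered into a texture (the FBO step).
            // The wave shader body would be something like:
            //     uv.x += sin(uv.y * 40.0 + time) * 0.01;
            public static void Draw(GraphicsDevice device, SpriteBatch batch,
                                    Texture2D scene, Effect waveEffect, float time)
            {
                int w = device.Viewport.Width, h = device.Viewport.Height;
                int topH = h * 3 / 4;

                waveEffect.Parameters["time"].SetValue(time); // assumed parameter name

                // top 3/4: the image as-is
                batch.Begin();
                batch.Draw(scene, new Rectangle(0, 0, w, topH), Color.White);
                batch.End();

                // bottom 1/4: the same texture, flipped, drawn through the wave shader
                batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                            null, null, null, waveEffect);
                batch.Draw(scene, new Rectangle(0, topH, w, h - topH), null,
                           Color.White * 0.6f, 0f, Vector2.Zero,
                           SpriteEffects.FlipVertically, 0f);
                batch.End();
            }
        }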

    Read the article

  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running, and I am using the Mecanim animation system.

    What I really want: when I press the space bar, the character should jump up in the air, triggering an animation clip, and by the time it reaches the ground the animation clip should end.

    What is actually happening: when I press the space bar, the character jumps in the air. The animation clip plays as it should, but ends way before the character reaches the ground, so it looks like he is running in mid-air.

    What I have done: I have this humanoid robot set up with a jump animation bound to the space bar key. Also, instead of using root motion, I move the robot directly from code.

        // Jumping
        if (Input.GetKeyDown(KeyCode.Space))
        {
            rigidbody.AddForce(Vector3.up * jumpVelocity);
            anim.SetBool("Jump", true);
        }
        else
            anim.SetBool("Jump", false);

    Character details:
    Rigidbody = Mass: 30, Freeze rotation: x, y, z
    Capsule Collider = Material: metal, center (0, 4.5, 0), radius: 1, height: 11
    Script = jumpVelocity: 20000
    Jump animation clip: ~2 seconds.

    I am really out of ideas on how to synchronize everything. Should I make the character jump in some other way, so that it quickly comes down and touches the ground to match the animation clip? If yes, please provide a direction.
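
    One hedged way to synchronize: derive the air time from the launch speed (for a flat jump, time up plus time down is 2v/g) and scale the Animator so the ~2 s clip spans exactly that window. A Unity-flavored sketch; the launch speed value and the landing check are assumptions to adapt:

        using UnityEngine;

        // Setting the vertical velocity directly makes the launch speed known,
        // which AddForce (applied on the next physics step) does not.
        public class JumpSync : MonoBehaviour
        {
            public float clipLength = 2f; // authored length of the jump clip
            public float jumpSpeed = 8f;  // assumed launch speed
            Animator anim;

            void Start() { anim = GetComponent<Animator>(); }

            void Update()
            {
                if (Input.GetKeyDown(KeyCode.Space))
                {
                    rigidbody.velocity = new Vector3(rigidbody.velocity.x, jumpSpeed,
                                                     rigidbody.velocity.z);
                    float airTime = 2f * jumpSpeed / -Physics.gravity.y; // up + down
                    anim.speed = clipLength / airTime; // clip now spans the whole jump
                    anim.SetBool("Jump", true);
                }
                // On landing (e.g. in OnCollisionEnter):
                //     anim.speed = 1f; anim.SetBool("Jump", false);
            }
        }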

    Read the article
