Search Results

Search found 43935 results on 1758 pages for 'development process'.

Page 560/1758

  • Rotating multiple points at once in 2D

    - by Deukalion
    I currently have an editor that creates shapes out of (X, Y) coordinates and then triangulates them to make up a shape from those points. What do I have to do to rotate all of those points simultaneously? Say I click the screen in my editor: it locates the point where I've clicked, and if I move the mouse up or down from that point it calculates a rotation on the X and Y axes depending on the new position relative to the first position; if I move up 10 on the Y axis it rotates that way, and the same for X. Or, more simply, some way to enter a rotation in degrees: 90, 180, 270, 360, for example. I use VertexPositionColor at the moment. What are the best algorithms or methods I can look at to rotate multiple points in 2D at once? Also: since this is an editor I do not want to rotate via the world matrix, so if I rotate the whole shape 180 degrees, that becomes the new "position" of all the points, i.e. the new rotation = 0. Later on I will probably use world matrix rotation for this, but not now.
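
    A minimal sketch of the usual approach, assuming the shape's vertices are stored in a VertexPositionColor array and the clicked point is the pivot (the method name and the pivot parameter are illustrative): each vertex is rotated about the pivot by the chosen angle and written back, so the rotated coordinates become the points' new permanent positions and the shape's rotation is back to 0.

        void RotateShape(VertexPositionColor[] vertices, Vector2 pivot, float radians)
        {
            float cos = (float)Math.Cos(radians);
            float sin = (float)Math.Sin(radians);

            for (int i = 0; i < vertices.Length; i++)
            {
                // translate so the pivot sits at the origin, rotate, translate back
                float dx = vertices[i].Position.X - pivot.X;
                float dy = vertices[i].Position.Y - pivot.Y;

                vertices[i].Position.X = pivot.X + dx * cos - dy * sin;
                vertices[i].Position.Y = pivot.Y + dx * sin + dy * cos;
            }
        }

    For the mouse-drag case, the angle can simply be something like (currentMouseY - clickY) * sensitivity fed into the same routine; for the "enter a degree value" case, convert with MathHelper.ToRadians first.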

    Read the article

  • Multiple textures on a mesh created in blender and imported in xna

    - by alecnash
    I created a cube in Blender which has multiple images applied to its faces. I am trying to import the model into XNA and get the same result as when rendering the model in Blender. I go through every mesh (for the cube there is only one) and through every part, but only the first image used in Blender is displayed on every face. The code I am using to set the texture looks like this:

        foreach (ModelMesh m in model.Meshes)
        {
            foreach (Effect e in m.Effects)
            {
                foreach (var part in m.MeshParts)
                {
                    e.CurrentTechnique = e.Techniques["Lambert"];
                    e.Parameters["view"].SetValue(camera.viewMatrix);
                    e.Parameters["projection"].SetValue(camera.projectionMatrix);
                    e.Parameters["colorMap"].SetValue(modelTextures[part.GetHashCode()]);
                }
            }
            m.Draw();
        }

    Am I missing something?
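
    A sketch of one common fix, assuming each ModelMeshPart carries its own effect instance and you keep your own per-part texture lookup (the textureForPart dictionary below is hypothetical; keying on GetHashCode() is not a stable way to associate textures with parts):

        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;   // the effect bound to this particular part
                effect.CurrentTechnique = effect.Techniques["Lambert"];
                effect.Parameters["view"].SetValue(camera.viewMatrix);
                effect.Parameters["projection"].SetValue(camera.projectionMatrix);
                effect.Parameters["colorMap"].SetValue(textureForPart[part]);   // assumed lookup
            }
            mesh.Draw();
        }

    Setting the parameters per part (rather than once per entry of m.Effects in a nested loop) is what lets each face end up with its own image.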

    Read the article

  • Looking for a small, light scene graph style abstraction lib for shader based OpenGL

    - by Pris
    I'm looking for a 'lean and mean' C/C++ scene graph library for OpenGL that doesn't use any deprecated functionality. It should be cross-platform (strictly speaking I just dev on Linux, so no love lost if it doesn't work on Windows), and it should be possible to deploy to mobile targets (i.e. OpenGL ES 2, and no crazy mandatory dependencies that wouldn't port well to modern mobile frameworks like iOS, Android, etc), with a license that's compatible with closed source software (LGPL or more liberal). Specific nice-to-haves would be:
    - Cameras and viewers (trackball, fly-by, etc)
    - Object transform hierarchies (if B is a child of A and you move A, B has the same transform applied to it)
    - Simple animation
    - Scene optimization (frustum culling, use of VBOs, minimizing state changes, etc)
    - Text
    I've played around with OpenSceneGraph a lot and it's pretty amazing for fixed-function pipeline stuff, but I've had a few problems using it with the programmable pipeline, and after going through their mailing list it seems several people have had similar issues (going back years). Kitware's VES looks neat (http://www.vtk.org/Wiki/VES), but VES + VTK is pretty heavy. VTK is also typically used for analyzing scientific data, and I've read that it's not that appropriate for the general use case (not that great at rendering a lot of objects in a scene, etc). I'm currently looking at VisualizationLibrary (http://www.visualizationlibrary.org/documentation/pag_gallery.html), which looks like it offers some of the functionality I'd like, but it doesn't explicitly support mobile targets. Other solutions like Ogre, Horde3D, Irrlicht, etc tend to be full-on game engines, and that's not really what I'm looking for. I'd like some suggestions for other libraries that I may have missed... please note I'm not willing to roll my own solution from scratch.

    Read the article

  • Question about BoundingSpheres and Ray intersections

    - by NDraskovic
    I'm working on an XNA project (not really a game) and I'm having some trouble with my picking algorithm. I have a few types of 3D models that I draw to the screen, and one of them is a switch. So I'm trying to make a picking algorithm that would enable the user to click on the switch and trigger some other function. The problem is that the BoundingSphere.Intersects() method always returns null as the result. This is the code I'm using. In the declaration section:

        // Basic matrices
        private Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
        private Matrix view = Matrix.CreateLookAt(new Vector3(10, 10, 10), new Vector3(0, 0, 0), Vector3.UnitY);
        private Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45), 800f / 600f, 0.01f, 100f);

        // Collision detection variables
        Viewport mainViewport;
        List<BoundingSphere> spheres = new List<BoundingSphere>();
        Ray ControlRay;
        Vector3 nearPoint, farPoint, nearPlane, farPlane, direction;

    And then in the Update method:

        nearPlane = new Vector3((float)Mouse.GetState().X, (float)Mouse.GetState().Y, 0.0f);
        farPlane = new Vector3((float)Mouse.GetState().X, (float)Mouse.GetState().Y, 10.0f);

        nearPoint = GraphicsDevice.Viewport.Unproject(nearPlane, projection, view, world);
        farPoint = GraphicsDevice.Viewport.Unproject(farPlane, projection, view, world);

        direction = farPoint - nearPoint;
        direction.Normalize();
        ControlRay = new Ray(nearPoint, direction);

        if (spheres.Count != 0)
        {
            for (int i = 0; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(ControlRay) != null)
                {
                    Window.Title = spheres[i].Center.ToString();
                }
                else
                {
                    Window.Title = "Empty";
                }
            }
        }

    The "spheres" list gets filled when the 3D object data is loaded (I read it from a .txt file). For every object marked as a switch (I use simple numbers to determine which object is to be drawn), a BoundingSphere is created (its center is at the coordinates of the 3D object, and the diameter is always the same) and added to the list. The objects are drawn normally (and spheres.Count is not 0), I can see them on the screen, but the window title always says "Empty" (of course this is just for testing purposes; I will add the real function when I get positive results), meaning that there is no intersection between the ControlRay and any of the bounding spheres. I think my basic matrices (world, view and projection) are causing the problem, but I can't figure out what is wrong. Please help.
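
    For comparison, a minimal sketch of the usual XNA picking setup: Viewport.Unproject expects the source Z in the viewport's 0-1 depth range rather than in world units, and an identity world matrix when the bounding spheres are already stored in world coordinates, so something along these lines may behave differently from the code above:

        Vector3 nearSource = new Vector3(Mouse.GetState().X, Mouse.GetState().Y, 0f);
        Vector3 farSource  = new Vector3(Mouse.GetState().X, Mouse.GetState().Y, 1f);

        Vector3 nearPoint = GraphicsDevice.Viewport.Unproject(nearSource, projection, view, Matrix.Identity);
        Vector3 farPoint  = GraphicsDevice.Viewport.Unproject(farSource, projection, view, Matrix.Identity);

        Ray pickRay = new Ray(nearPoint, Vector3.Normalize(farPoint - nearPoint));

        float? distance = spheres[i].Intersects(pickRay);   // null = no hit, otherwise distance along the ray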

    Read the article

  • How to find the window size in XNA

    - by Nick Van Hoogenstyn
    I just wanted to know if there was a way to find out the size of the window in XNA. I don't want to set it to a specific size; I would like to know what dimensions it currently displays as automatically. Is there a way to find this information out? I realize I probably should have found this information out (or set it myself manually) before working on the game, but I'm a novice and am now hoping to work within the dimensions I have already become invested in. Thanks!
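
    A small sketch, assuming this is called from inside your Game subclass: either of these reports the size the game is currently displayed at, without you ever setting it yourself.

        int viewportWidth  = GraphicsDevice.Viewport.Width;     // size of the area being rendered to
        int viewportHeight = GraphicsDevice.Viewport.Height;

        int windowWidth  = Window.ClientBounds.Width;           // size of the game window's client area
        int windowHeight = Window.ClientBounds.Height;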

    Read the article

  • How to move a line of sprites in a sine wave?

    - by electroflame
    So, I'm spawning a horizontal line of enemies that I would like to have move in a nice wave. Currently I tried: Enemy.position.X += Enemy.velocity.X; Enemy.position.Y += -(float)Math.Cos(Enemy.position.X / 200) * 5; This...kind of works. But the wave is not a true wave. The top and bottom of one pass are not the same (e.g. 5 for the top, and -5 for the bottom (I don't mean literal points, I just meant that it's not symmetrical)). Is there a better way to do this? I would like the whole line to move in a wave, so it looks fluid. By that, I mean that it should look like each enemy is "following" the one in front of it. The code I posted does have this fluidity to it, but like I said, it's not a perfect wave. Any ideas? Thanks in advance.
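
    A minimal sketch of one way to get a symmetric wave, assuming each enemy remembers the X and Y it spawned at (SpawnX and SpawnY are assumed fields): instead of accumulating cosine steps into Y, set Y directly from the sine of how far the enemy has travelled, so the top and bottom of each pass are the same distance from the spawn row and each enemy naturally follows the one in front of it.

        Enemy.position.X += Enemy.velocity.X;

        float phase = (Enemy.position.X - Enemy.SpawnX) / 200f;           // how far along the wave this enemy is
        Enemy.position.Y = Enemy.SpawnY + 5f * (float)Math.Sin(phase);    // symmetric: always within +/- 5 of the row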

    Read the article

  • How do I tackle top down RPG movement?

    - by WarmWaffles
    I have a game that I am writing in Java. It is a top down RPG and I am trying to handle movement in the world. The world is largely procedural and I am having a difficult time tackling how to handle character movement around the world and render the changes to the screen. I have the world loaded in blocks which contains all the tiles. How do I tackle the character movement? I am stumped and can't figure out where I should go with it. EDIT: Well I was abstract with the issue at hand. Right now I can only think of rigidly sticking everything into a 2D array and saving the block ID and the player offset in the Block or I could just "float" everything and move about between tiles so to speak.

    Read the article

  • iOS persistent storage with update function

    - by jernej
    I'm developing a game which has different levels, and I need to store all levels and their elements (position, image, sounds, ...) in a file/database. The levels will be updated, so I need a function that checks online for an update and downloads a database dump and additional files. I was planning to store all the persistent data in an SQLite database, but I'm not quite sure how to do the update part - how to pack the database dump and the files together (in a .zip or with an XML manifest). Can this be done any other way (as securely as possible)? Thanks!

    Read the article

  • Interpolation gives the appearance of collisions

    - by Akroy
    I'm implementing a simple 2D platformer with a constant speed update of the game logic, but with the rendering done as fast as the machine can handle. I interpolate positions between actual game updates by just using the position and velocity of objects at the last update. This makes things look really smooth in general, but when something hits a wall/floor, it appears to go through the wall for a moment before being positioned correctly. This is because the interpolator is not taking walls into account, so it guesses the position into walls until the actual game update fixes it. Are there any particularly elegant solutions for this? Simply increasing the update rate seems like a band-aid solution, and I'm trying to avoid increasing the system reqs. I could also check for collisions in the actual interpolator, but that seems like heavy overhead, and then I'm no longer dividing the drawing and the game updating.
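
    One common alternative (a sketch, with assumed names): keep the state from the last two fixed updates and interpolate between them for rendering, rather than extrapolating ahead of the physics. The drawn position then never leaves the range the collision code has already resolved, at the cost of rendering one update interval behind.

        // alpha = accumulatedTime / fixedTimestep, in [0, 1]
        Vector2 DrawPosition(Vector2 previousUpdatePos, Vector2 currentUpdatePos, float alpha)
        {
            return Vector2.Lerp(previousUpdatePos, currentUpdatePos, alpha);
        }

    Visually this reads just as smooth as extrapolation, but objects can no longer appear inside geometry because both endpoints of the lerp are post-collision positions.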

    Read the article

  • Drawing lots of tiles with OpenGL, the modern way

    - by Nic
    I'm working on a small tile/sprite-based PC game with a team of people, and we're running into performance issues. The last time I used OpenGL was around 2004, so I've been teaching myself how to use the core profile, and I'm finding myself a little confused. I need to draw in the neighborhood of 250-750 48x48 tiles to the screen every frame, as well as maybe around 50 sprites. The tiles only change when a new level is loaded, and the sprites are changing all the time. Some of the tiles are made up of four 24x24 pieces, and most (but not all) of the sprites are the same size as the tiles. A lot of the tiles and sprites use alpha blending. Right now I'm doing all of this in immediate mode, which I know is a bad idea. All the same, when one of our team members tries to run it, he gets very bad frame rates (~20-30 fps), and it's much worse when there are more tiles, especially when a lot of those tiles are the kind that are cut into pieces. This all makes me think that the problem is the number of draw calls being made. I've thought of a few possible solutions, but I wanted to run them by some people who know what they're talking about so I don't waste my time on something stupid.
    Tiles:
    1. When a level is loaded, draw all the tiles once into a frame buffer attached to a big honking texture, and just draw a big rectangle with that texture on it every frame.
    2. Put all the tiles into a static vertex buffer when the level is loaded, and draw them that way. I don't know if there's a way to draw objects with different textures with a single call to glDrawElements, or if this is even something I'd want to do. Maybe just put all the tiles into a big giant texture and use funny texture coordinates in the VBO?
    Sprites:
    1. Draw each sprite with a separate call to glDrawElements.
    2. Use a dynamic VBO somehow. Same texture question as tile idea 2 above.
    3. Point sprites? This is probably silly.
    Are any of these ideas sensible? Is there a good implementation somewhere I could look over?
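
    On the "one big texture and funny texture coordinates" idea: a sketch (shown in C#; the Tile type, its fields and the atlas sizes are assumptions) of how the per-tile atlas coordinates are usually computed when building the static vertex data once at level load. The resulting array is uploaded to a single static VBO and the whole layer is then drawn with one indexed draw call per frame.

        // Each tile contributes 4 vertices of (x, y, u, v); two triangles come from a shared index buffer.
        float[] BuildTileVertices(Tile[] tiles, int atlasSize, int tileSize)
        {
            int tilesPerRow = atlasSize / tileSize;          // e.g. a 1024px atlas holds 21 full 48px columns
            float uvScale = (float)tileSize / atlasSize;     // width of one tile in texture space
            var data = new System.Collections.Generic.List<float>();

            foreach (Tile t in tiles)
            {
                float u0 = (t.AtlasIndex % tilesPerRow) * uvScale;
                float v0 = (t.AtlasIndex / tilesPerRow) * uvScale;

                data.AddRange(new float[] { t.X,            t.Y,            u0,           v0 });
                data.AddRange(new float[] { t.X + tileSize, t.Y,            u0 + uvScale, v0 });
                data.AddRange(new float[] { t.X + tileSize, t.Y + tileSize, u0 + uvScale, v0 + uvScale });
                data.AddRange(new float[] { t.X,            t.Y + tileSize, u0,           v0 + uvScale });
            }
            return data.ToArray();
        }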

    Read the article

  • How can I use the dualforward parameter in my unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader: using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps" and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

        Shader "Bumped Specular Dual Lightmaps" {
          Properties {
            _Color ("Main Color", Color) = (1,1,1,1)
            _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
            _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
            _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            _BumpMap ("Normalmap", 2D) = "bump" {}
          }
          SubShader {
            Tags { "RenderType"="Opaque" }
            LOD 400

            CGPROGRAM
            #pragma surface surf BlinnPhong dualforward

            sampler2D _MainTex;
            sampler2D _BumpMap;
            fixed4 _Color;
            half _Shininess;

            struct Input {
              float2 uv_MainTex;
              float2 uv_BumpMap;
            };

            void surf (Input IN, inout SurfaceOutput o) {
              fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
              o.Albedo = tex.rgb * _Color.rgb;
              o.Gloss = tex.a;
              o.Alpha = tex.a * _Color.a;
              o.Specular = _Shininess;
              o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            }
            ENDCG
          }
          FallBack "Specular"
        }

    This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, they look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations, for instance, when I pitch forward 45 degrees and then start to turn - in this case there should be rotation being applied to all three directions to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched forward ship to wobble instead of turn.... :( So my quesiton is: how do you calculate the X, Y, and Z rotations for an object in terms of pitch and yaw? Here's current (almost working) calculations for the Rotation acceleration: float accel = .75f; // Thrust +Y / Forward if (currentKeyboardState.IsKeyDown(Keys.I)) { this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel; this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel; this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel; } // Rotation +Z / Yaw if (currentKeyboardState.IsKeyDown(Keys.J)) { this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel; this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel; this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel; } // Rotation -Z / Yaw if (currentKeyboardState.IsKeyDown(Keys.K)) { this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel; this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel; this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel; } // Rotation +X / Pitch if (currentKeyboardState.IsKeyDown(Keys.F)) { this.ship.RotationAccelerationX += accel; } // Rotation -X / Pitch if (currentKeyboardState.IsKeyDown(Keys.D)) { this.ship.RotationAccelerationX -= accel; } I'm combining that with drawing code that does a rotation to the model: public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime) { float seconds = (float)elsapsedTime.TotalSeconds; // update velocity based on acceleration this.VelocityX += this.AccelerationX * seconds; this.VelocityY += this.AccelerationY * seconds; this.VelocityZ += this.AccelerationZ * seconds; // update position based on velocity this.PositionX += this.VelocityX * seconds; this.PositionY += this.VelocityY * seconds; this.PositionZ += this.VelocityZ * seconds; // update rotational velocity based on rotational acceleration this.RotationVelocityX += this.RotationAccelerationX * seconds; this.RotationVelocityY += this.RotationAccelerationY * seconds; this.RotationVelocityZ += this.RotationAccelerationZ * seconds; // update rotation based on rotational velocity this.RotationX += this.RotationVelocityX * seconds; this.RotationY += this.RotationVelocityY * seconds; this.RotationZ += this.RotationVelocityZ * seconds; Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ); Matrix rotation = Matrix.CreateRotationX(RotationX) * Matrix.CreateRotationY(RotationY) * Matrix.CreateRotationZ(RotationZ); model.Root.Transform = rotation * translation * world; model.CopyAbsoluteBoneTransformsTo(boneTransforms); foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } mesh.Draw(); } }
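
    One widely used alternative (a sketch, with assumed field names): accumulating separate X/Y/Z Euler angles falls apart once the ship is pitched, because a yaw is then no longer a pure rotation about any single world axis. Keeping the orientation as one quaternion and applying pitch/yaw as rotations about the ship's own local axes each update avoids having to derive those per-axis terms at all.

        Quaternion orientation = Quaternion.Identity;   // replaces RotationX/Y/Z on the ship

        void ApplyPitchYaw(float pitchDelta, float yawDelta)
        {
            // Build this frame's pitch/yaw as local-space rotations, then append the existing
            // orientation, so the thrusters always act relative to the ship rather than the world.
            Quaternion pitch = Quaternion.CreateFromAxisAngle(Vector3.Right, pitchDelta);
            Quaternion yaw   = Quaternion.CreateFromAxisAngle(Vector3.Up, yawDelta);

            orientation = Quaternion.Concatenate(Quaternion.Concatenate(pitch, yaw), orientation);
            orientation.Normalize();
        }

        // In Draw(), instead of the three CreateRotation* calls:
        Matrix rotation = Matrix.CreateFromQuaternion(orientation);

    Rotational velocity and acceleration can still be kept as pitch and yaw rates; they just feed pitchDelta and yawDelta instead of being integrated into three separate angles.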

    Read the article

  • Drawing order in XNA

    - by marc wellman
    When manually setting the drawing order of game components via the int property DrawableGameComponent.DrawOrder, can one use any integers as long as an order is defined, like
        component1 = drawing order: 2
        component2 = drawing order: 5
        component3 = drawing order: 10
        component4 = drawing order: 323
    or do these integers have to be consecutive and start at zero, like
        component1 = drawing order: 0
        component2 = drawing order: 1
        component3 = drawing order: 2
        component4 = drawing order: 3
    ?

    Read the article

  • What is a correct step by step logic of exporting scene with baked occlusion for loading it at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion (culling data) so that the scene can be loaded at runtime (on the fly from the internet, for example). Currently my plan looks like this:
    1. I create prefabs.
    2. Place them in my scene (into the Hierarchy) (say, 20 buffaloes, some horses and some buildings).
    3. Create an empty prefab and drag all my scene objects from the hierarchy onto it.
    4. Export the prefab.
    So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when loading them at runtime, and so no frustum culling and no occlusion culling happens. So I wonder what the correct way is of exporting Scene + Objects + Occlusion (and other culling) data for a future load of such a scene at runtime? I'm asking about the current 3.5.2 Pro and the future 4 Pro versions of U3D.

    Read the article

  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model (like the one above) which I want to rotate around a given location (pointed out in red), but I can only rotate it around its middle. How can I rotate it around a custom point? Edit: I was able to rotate the model around that position by getting the radius of the model and applying it to the world matrix:

        Vector3 point = new Vector3(-radius, 0, 0);
        world = Matrix.CreateTranslation(-radius, 0, 0);

    But now I cannot change the position of the object and it is always centered in the middle of the screen. I think that's because of the code above. How can I place it anywhere I want?
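
    A minimal sketch of the usual pattern (pivot, angle and position are assumed names): move the pivot point to the origin, rotate, move it back, and only then translate the whole model to wherever it should sit in the world; that final translation is the part the snippet above leaves out.

        Vector3 pivot = new Vector3(-radius, 0, 0);      // the custom rotation point, in model space
        Vector3 position = new Vector3(10, 0, 5);        // wherever the model should be placed (example values)

        world = Matrix.CreateTranslation(-pivot)
              * Matrix.CreateRotationY(angle)
              * Matrix.CreateTranslation(pivot)
              * Matrix.CreateTranslation(position);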

    Read the article

  • vector rotations for branches of a 3d tree

    - by freefallr
    I'm attempting to create a 3D tree procedurally. I'm hoping that someone can check my vector rotation maths, as I'm a bit confused. I'm using an L-system (a recursive algorithm for generating branches). The trunk of the tree is the root node. Its orientation is aligned to the Y axis. In the next iteration of the tree (e.g. the first branches), I might create a branch that is oriented, say, by +10 degrees on the X axis and a similar amount on the Z axis, relative to the trunk. I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch. My questions then: for the trunk, is the rotation matrix just the identity matrix * initial orientation vector? For the first branch (and subsequent branches), I'll "inherit" the rotation matrix of the parent branch, and apply X and Z rotations to that also. E.g.:

        using glm::normalize;
        using glm::rotateX;
        using glm::vec4;
        using glm::mat4;
        using glm::rotate;

        vec4 vYAxis = vec4(0.0f, 1.0f, 0.0f, 0.0f);
        vec4 vInitial = normalize( rotateX( vYAxis, 10.0f ) );

        mat4 mRotation = mat4(1.0);

        // trunk rotation matrix = identity * initial orientation vector
        mRotation *= vInitial;

        // first branch = parent rotation matrix * this branch's rotations
        mRotation *= rotate( 10.0f, 1.0f, 0.0f, 0.0f ); // x rotation
        mRotation *= rotate( 10.0f, 0.0f, 0.0f, 1.0f ); // z rotation

    Are my maths and approach correct, or am I completely wrong? Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of the X rotation and Z rotation important?

    Read the article

  • How do I dynamically reload content files?

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files, such as effect files? I know I can do the following:
    1. Detect a change of the file
    2. Run the content pipeline to rebuild that specific file
    3. Unload ALL content that was loaded
    4. Load all content
    and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. Has someone managed to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?

    Read the article

  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform, in a shader, an inverse projection from the u/v of a render target. What I do is: get NDC as 2*(u, v, depth) - 1, then world space as tmp = (P*V)^-1 * (NDC, 1.0); world space = tmp / tmp.w. This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by a w somewhere (since in the "forward" pipeline there is the perspective division)? Thank you, Faye
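
    A short way to see why the division works (a sketch of the algebra; p_w denotes the world-space point and c the clip-space vector the forward pipeline would have produced):

        \text{Forward: } \; c = PV \begin{pmatrix} p_w \\ 1 \end{pmatrix}, \qquad
        \text{NDC} = \frac{1}{c_w} \begin{pmatrix} c_x \\ c_y \\ c_z \end{pmatrix}

        \text{Inverse: } \; (PV)^{-1} \begin{pmatrix} \text{NDC} \\ 1 \end{pmatrix}
            = (PV)^{-1} \, \frac{c}{c_w}
            = \frac{1}{c_w} \begin{pmatrix} p_w \\ 1 \end{pmatrix}

    So tmp comes out as (p_w, 1) scaled by the unknown factor 1/c_w; its own w component is exactly that factor, and dividing by it removes the scale. The "multiplication by w" you expect is hiding in there: dividing by tmp.w = 1/c_w is the same as multiplying by c_w.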

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray: Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj; Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 dir = p2 - p1; dir.Normalize(); Ray ray = Ray(p1, dir); Then get the Y-intercept by using algebra: float t = -ray.Position.Y / ray.Direction.Y; Vector3 p = ray.Position + t * ray.Direction; The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the project position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly abusing the fact that floating point calculations are done at 80-bit precision on x86, however before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can do to improve this? EDIT: A little snooping around in .NET Reflector on SlimDX.dll: public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection) { Vector3 coordinate = new Vector3(); Matrix result = new Matrix(); Matrix.Invert(ref worldViewProjection, out result); coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0); coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0); coordinate.Z = (vector.Z - minZ) / (maxZ - minZ); TransformCoordinate(ref coordinate, ref result, out coordinate); return coordinate; } // ... public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result) { Vector3 vector; Vector4 vector2 = new Vector4 { X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41, Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42, Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43 }; float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44)); vector2.W = num; vector.X = vector2.X * num; vector.Y = vector2.Y * num; vector.Z = vector2.Z * num; result = vector; } ...which seems to be a pretty standard method of unprojecting a point from a projection matrix, however this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.

    Read the article

  • How to modify VBO data

    - by Romeo
    I am learning LWJGL so I can start working on my game. In order to learn LWJGL I got the idea to implement the map builder first, so I can get comfortable with graphics programming. Now, for the map creation tool I need to draw new elements, or draw the old ones with different coordinates. Let me explain: my game will be a 2D scroller. The map will consist of multiple rectangles (two-triangle strips). When I click my left mouse button I want to start the rectangle, and when I release it I want the rectangle's bottom-right corner to stop at that position. As I want to use VBOs, I want to know how to modify data inside a VBO based on user input. Should I keep a copy of the vertex array and re-upload the whole array to the VBO on each user input? How is a VBO update usually implemented?

    Read the article

  • How can I gain access to a player instance in a Minecraft mod?

    - by Andrew Graber
    I'm creating a Minecraft mod with a pickaxe that takes away experience when you break a block. The method for taking away experience from a player is addExperience on EntityPlayer, so I need to get an instance of EntityPlayer for the player using my pickaxe when the pickaxe breaks a block, so that I can remove the appropriate amount of experience. My pickaxe class currently looks like this:

        public class ExperiencePickaxe extends ItemPickaxe {

            public ExperiencePickaxe(int ItemID, EnumToolMaterial material) {
                super(ItemID, material);
            }

            public boolean onBlockDestroyed(ItemStack par1ItemStack, World par2World, int par3, int par4, int par5, int par6, EntityLiving par7EntityLiving) {
                if ((double)Block.blocksList[par3].getBlockHardness(par2World, par4, par5, par6) != 0.0D) {
                    EntityPlayer e = new EntityPlayer(); // create an instance
                    e.addExperience(-1);
                }
                return true;
            }
        }

    Obviously, I cannot actually create a new EntityPlayer, since it is an abstract class. How can I get access to the player using my pickaxe?

    Read the article

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The tile map has 25 tiles across and 25 tiles down. Each tile is 32x32. See below for how I'm accomplishing this.
    World space to screen space rotation (45 degrees): using a 2D rotation matrix, I do the following:

        double rotation = Math.PI / 4;
        double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
        double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

    World space to screen space scale (Y axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5.
    Problem: it works, kind of. There are some tiny 1-2 px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
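
    One alternative worth knowing about (a sketch; the on-screen tile footprint of 64x32 is an assumption): for a classic 2:1 isometric view there is an exact tile-to-screen mapping that needs no trigonometry and no scaling, so neighbouring tiles always share edges exactly and the 1-2 px gaps cannot appear.

        const int TileScreenWidth = 64;    // width of one tile's diamond on screen (assumed)
        const int TileScreenHeight = 32;   // height of one tile's diamond on screen (assumed)

        // tileX, tileY are integer tile coordinates in the 25x25 map
        int screenX = (tileX - tileY) * (TileScreenWidth / 2);
        int screenY = (tileX + tileY) * (TileScreenHeight / 2);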

    Read the article

  • Asked to make a 2d platformer [on hold]

    - by Fendorio
    I've been tasked with creating a simple 2D platformer to be put on a webpage. The game is pretty much a simple Super Mario type game. I've been playing around with C# and C++ for a couple of years now, so I'm aware that Unity offers a route to making a web game, but for such a simplistic project I'm afraid that using Unity would be overkill... i.e. slow, and nobody wants to install the web player for a game with < 5 mins of playtime. HTML5 canvas/JS seems to jump out at me over Flash, as Flash seems to be on its way out. Any suggestions on a route to take would be greatly appreciated.

    Read the article

  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using

        def permutation_table(seed):
            table_size = 2**10  # arbitrary for this question
            l = range(1, table_size + 1)
            random.seed(seed)   # ensures the same shuffle for a given seed
            random.shuffle(l)
            return l + l        # see the shared link for why l + l; it's a detail

    and storing it. Can we avoid storing the full table by generating the required elements every time they are required? Specifically, I currently store the table and index it with table[i] (table is a list). Can I avoid storing it by having a function that computes element i, e.g. get_table_element(seed, i)? I'm aware that cryptography has already solved this problem with block ciphers; however, I found it too complex to dig into and implement a block cipher. Does anyone know of a simple implementation of a block cipher for this problem?
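
    A sketch of the block-cipher idea in its simplest form (shown in C# here; the names and constants are illustrative): a few rounds of a Feistel network over the 10-bit index give a seedable bijection on 0..1023, so table[i] can be computed on demand without ever materialising the table.

        // Always a permutation of 0..1023, whatever round function is used,
        // because each Feistel round is reversible.
        static int PermutedIndex(int index, int seed)
        {
            int left = (index >> 5) & 0x1F;   // high 5 bits
            int right = index & 0x1F;         // low 5 bits

            for (int round = 0; round < 4; round++)
            {
                int f = ((right * 37 + seed + round * 101) ^ (right >> 2)) & 0x1F;  // cheap mixing function
                int newLeft = right;
                right = left ^ f;
                left = newLeft;
            }
            return (left << 5) | right;
        }

    The quality of the shuffle depends entirely on the mixing function and the number of rounds, so this is only a starting point, but it keeps the "same seed, same table" property without storing anything.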

    Read the article

  • MonoGame not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, it does not render all of the text passed to DrawString. Even weirder, if I insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is:

        public override void Draw(SpriteBatch sb, GameTime gt)
        {
            sb.Draw(currentImage, GetRelativeRectangle(), Color.White);
            sb.DrawString(font, text,
                new Vector2(this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2,
                            this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2),
                textColor);
        }

    The methods used when creating the Vector2 simply get the draw position of the button; I'm then doing some calculation to center the text. This is what it produces when the text is set to 'Test', and this is what happens when I add this piece of code below the first DrawString:

        sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink);

    I should mention that the grey square is being drawn in the same SpriteBatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.

    Read the article
