Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • How important is a single-player mode in a 2-player game?

    - by Davy8
    So say you have a two-player game, taking Chess as an example (except it's an original game with no ready-to-go AI available). Say there's also a social aspect to the meta-game: it's a Chess game on Facebook where you can challenge your friends. How important is it to have a single-player mode, knowing that an AI will need to be created (I've done minimax AI for tic-tac-toe, but nothing too sophisticated)? Is it important enough that it should be in the initial launch of the game? Or can it wait for a future iteration (knowing that being hosted on the web means the game can be updated at any time)?

    Read the article

  • Why don't C++ game developers use the Boost library?

    - by James
    So if you spend any time viewing/answering questions over on Stack Overflow under the C++ tag, you will quickly notice that just about everybody uses the Boost library; some would even say that if you aren't using it, you're not writing "real" C++ (I disagree, but that's not the point). But then there is the game industry, which is well known for using C++ and not using Boost. I can't help but wonder why that is. I don't care to use Boost because I write games (now) as a hobby, and part of that hobby is implementing what I need when I am able to and using off-the-shelf libraries when I can't. But that is just me. Why don't game developers, in general, use the Boost library? Is it performance or memory concerns? Style? Something else? I was about to ask this on Stack Overflow, but I figured the question is better asked here. EDIT: I realize I can't speak for all game programmers and I haven't seen all game projects, so I can't say game developers never use Boost; this is simply my experience. Allow me to edit my question to also ask: if you do use Boost, why did you choose to use it?

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which areas of gamedev are filled with that kind of math, is it possible to advance in them without being part of a team, and how do I get into it gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D, etc.

    Read the article

  • How to implement explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location $(x, y, z)$. I'm thinking of randomly generating a collection of vectors with $(x, y, z)$ as the origin, then drawing some particle (glutSolidCube) that moves along each vector for some period of time; say, after 1000 updates it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
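
    A minimal sketch of the bookkeeping described above, under the assumption of a fixed-size pool, a random outward velocity per particle and a simple per-update integration; the type and field names are invented for illustration, and the actual rendering (drawing a glutSolidCube at each live particle's position) is left to the existing OpenGL code. The approach is feasible and largely language-agnostic; the sketch is written in C# purely for readability.

        using System;

        // Hypothetical particle pool for one explosion: spawn N particles at the
        // blast origin with random outward velocities, integrate each update, and
        // retire them after a fixed number of updates.
        struct Particle
        {
            public float X, Y, Z;      // current position
            public float VX, VY, VZ;   // velocity, in units per update
            public int Life;           // remaining updates before the particle disappears
        }

        class Explosion
        {
            private readonly Particle[] _particles;
            private static readonly Random Rng = new Random();

            public Explosion(float ox, float oy, float oz, int count = 200, int life = 1000)
            {
                _particles = new Particle[count];
                for (int i = 0; i < count; i++)
                {
                    // Random direction: a point in the unit cube, normalized.
                    float dx = (float)(Rng.NextDouble() * 2 - 1);
                    float dy = (float)(Rng.NextDouble() * 2 - 1);
                    float dz = (float)(Rng.NextDouble() * 2 - 1);
                    float len = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;
                    float speed = 0.05f + (float)Rng.NextDouble() * 0.10f;

                    _particles[i] = new Particle
                    {
                        X = ox, Y = oy, Z = oz,
                        VX = dx / len * speed, VY = dy / len * speed, VZ = dz / len * speed,
                        Life = life
                    };
                }
            }

            public void Update()
            {
                for (int i = 0; i < _particles.Length; i++)
                {
                    if (_particles[i].Life <= 0) continue;   // dead particles are skipped
                    _particles[i].X += _particles[i].VX;
                    _particles[i].Y += _particles[i].VY;
                    _particles[i].Z += _particles[i].VZ;
                    _particles[i].VY -= 0.0005f;             // optional gravity pull
                    _particles[i].Life--;
                }
            }
        }

    Fading the particles out (alpha or size) over the last part of their life tends to look better than a hard disappearance.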

    Read the article

  • How do I find the closest points (thereby forming a polygon) enclosing a particular point? (see image)

    - by nilspin
    I am working with a game engine, and my task is to add code for simulating the fracture of rigid meshes. Right now I'm only working on breaking a cube. I am using a Voronoi decomposition to make (realistic) fractured shards, and I am using the half-plane method to generate each Voronoi cell. The way I do this is, for every seed point, I build the perpendicular bisector planes (the straight black lines in the image) between it and the rest of the seed points, and I calculate the intersections of all these planes to get distinct points (all the orange dots). I've gotten this far. Out of all these calculated intersection points, I only need the ones that are closest to and enclose the seed point (the points encircled in red), and I need to discard all the rest. Information that I have: 1) the plane equations of all planes (defined by normalized normal vectors and their distances from the origin); 2) the points of intersection (that I've calculated). Can anybody help me find out how I can find the points encircled in red? Thanks.
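
    One standard way to do the filtering: a calculated intersection point is a vertex of the seed's cell only if it lies on the seed's side of (or on) every bisector plane; any point that some plane separates from the seed belongs to a different cell and can be discarded. A minimal sketch of that test using the listed inputs (unit normal plus distance from the origin); the names are invented and System.Numerics stands in for whatever vector math the engine provides.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        // Plane stored as a unit normal N and a distance D from the origin,
        // i.e. the set of points x with Dot(N, x) = D (the representation the
        // question describes). SignedDistance is positive on the normal's side.
        struct BisectorPlane
        {
            public Vector3 N;
            public float D;
            public float SignedDistance(Vector3 p) => Vector3.Dot(N, p) - D;
        }

        static class VoronoiCell
        {
            // Keep only the intersection points that are on the same side of every
            // bisector plane as the seed (or lie on a plane, within eps). These are
            // exactly the vertices of the seed's Voronoi cell.
            public static List<Vector3> FilterCellVertices(
                Vector3 seed, IList<BisectorPlane> planes, IList<Vector3> candidates,
                float eps = 1e-4f)
            {
                var result = new List<Vector3>();
                foreach (var candidate in candidates)
                {
                    bool inside = true;
                    foreach (var plane in planes)
                    {
                        float seedSide = plane.SignedDistance(seed);
                        float pointSide = plane.SignedDistance(candidate);

                        // Reject if the candidate is strictly on the opposite side
                        // of this plane from the seed: the plane cuts it away.
                        if (Math.Abs(pointSide) > eps && Math.Sign(pointSide) != Math.Sign(seedSide))
                        {
                            inside = false;
                            break;
                        }
                    }
                    if (inside) result.Add(candidate);
                }
                return result;
            }
        }

    This works whichever way the stored normals happen to point, because it compares the candidate's side against the seed's side rather than assuming an orientation.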

    Read the article

  • Linear search vs Octree (Frustum cull)

    - by Dave
    I am wondering whether I should look into implementing an octree of some kind. I have a very simple game which consists of a 3D plane for the floor. There are multiple objects scattered around on the ground, and each one has an AABB in world space. Currently I just loop through the list of all these objects and check whether each bounding box intersects the frustum. It works great, but I am wondering whether an octree would be a good investment. I only have at most 512 of these objects on the map, and they all have bounding boxes. I am not sure an octree would make it faster since I have so few objects in the scene.
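
    For scale, the brute-force approach described above is one AABB-versus-frustum check per object, something like the hedged sketch below (IntersectsFrustum stands in for whatever test the engine already provides). A few hundred such checks per frame is normally negligible next to the draw calls they gate, so an octree tends to pay off only with thousands of objects, or when whole groups can be rejected with a single node test.

        using System;
        using System.Collections.Generic;

        static class Culling
        {
            // Linear cull: one cheap test per object. With ~512 objects this is a
            // few hundred AABB/frustum checks per frame.
            public static List<T> CullVisible<T>(IEnumerable<T> objects,
                                                 Func<T, bool> intersectsFrustum)
            {
                var visible = new List<T>();
                foreach (var obj in objects)
                {
                    if (intersectsFrustum(obj))   // e.g. obj => frustum.Intersects(obj.Aabb)
                        visible.Add(obj);
                }
                return visible;
            }
        }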

    Read the article

  • Designing a "Grid" like object that contains game objects

    - by liortal
    I am working on a 2D game where there's a game "board" on which other game objects are placed. Since this is 2D, my starting point was to design a class that internally uses a 2D array for the stored game objects. This class can simply be accessed by two indices, (i, j), to get the game objects on it. My problem is that I have no idea how to make the game "board" "propagate" its data onto its children. Design questions I ran into are: Should the children placed on the board have display properties such as size and screen position? Should the board itself dictate this information? How do I update the children when the board changes some of its properties (position, etc.)? Should the board be aware of the types of objects stored in it? I have no idea how things such as WPF or other UI frameworks go about organizing a "container-like" object that can arrange or apply certain UI properties to its children.
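
    One common pattern is to keep the board purely logical (a 2D array behind an indexer) and make it the single authority that translates grid coordinates into screen placement, so children never store layout themselves and don't need to be of any particular type. A minimal sketch along those lines; every type and member name here is invented for illustration.

        using System;
        using System.Numerics;

        // A logical 2D board: stores items by (i, j) and is the only thing that
        // knows where a cell appears on screen. Children are told (or ask) rather
        // than keeping their own layout state.
        public class Board<T> where T : class
        {
            private readonly T[,] _cells;

            public float CellSize { get; private set; }   // pixels per cell
            public Vector2 Origin { get; private set; }   // top-left of the board on screen

            public event Action LayoutChanged;            // raised when the board moves or resizes

            public Board(int width, int height, float cellSize)
            {
                _cells = new T[width, height];
                CellSize = cellSize;
            }

            public T this[int i, int j]
            {
                get { return _cells[i, j]; }
                set { _cells[i, j] = value; }
            }

            // The board, not the child, decides screen placement for a cell.
            public Vector2 CellToScreen(int i, int j)
            {
                return Origin + new Vector2(i * CellSize, j * CellSize);
            }

            public void Move(Vector2 newOrigin)
            {
                Origin = newOrigin;
                if (LayoutChanged != null) LayoutChanged();   // listeners re-query CellToScreen
            }
        }

    With this split the renderer just iterates the cells and draws whatever it finds at CellToScreen(i, j), and the board never needs to know the concrete types it stores.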

    Read the article

  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes to my computer so that I can program games with them, but every time it only syncs them temporarily, or, if it claims it can permanently sync them, it doesn't actually do it. It gets tiresome having to reconnect a Wiimote every time I turn it off to save battery life. How can I pair my Wiimote with my computer so that if I turn it off, I can just hit any button and it will automatically reconnect?

    Read the article

  • 2D isometric picking

    - by Bikonja
    I'm trying to implement picking in my isometric 2D game, however, I am failing. First of all, I've searched for a solution and came to several, different equations and even a solution using matrices. I tried implementing every single one, but none of them seem to work for me. The idea is that I have an array of tiles, with each tile having it's x and y coordinates specified (in this simplified example it's by it's position in the array). I'm thinking that the tile (0, 0) should be on the left, (max, 0) on top, (0, max) on the bottom and (max, max) on the right. I came up with this loop for drawing, which googling seems to have verified as the correct solution, as has the rendered scene (ofcourse, it could still be wrong, also, forgive the messy names and stuff, it's just a WIP proof of concept code) // Draw code int col = 0; int row = 0; for (int i = 0; i < nrOfTiles; ++i) { // XOffset and YOffset are currently hardcoded values, but will represent camera offset combined with HUD offset Point tile = IsoToScreen(col, row, TileWidth / 2, TileHeight / 2, XOffset, YOffset); int x = tile.X; int y = tile.Y; spriteBatch.Draw(_tiles[i], new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); col++; if (col >= Columns) // Columns is the number of tiles in a single row { col = 0; row++; } } // Get selection overlay location (removed check if selection exists for simplicity sake) Point tile = IsoToScreen(_selectedTile.X, _selectedTile.Y, TileWidth / 2, TileHeight / 2, XOffset, YOffset); spriteBatch.Draw(_selectionTexture, new Rectangle(tile.X, tile.Y, TileWidth, TileHeight), Color.White); // End of draw code public Point IsoToScreen(int isoX, int isoY, int widthHalf, int heightHalf, int xOffset, int yOffset) { Point newPoint = new Point(); newPoint.X = widthHalf * (isoX + isoY) + xOffset; newPoint.Y = heightHalf * (-isoX + isoY) + yOffset; return newPoint; } This code draws the tiles correctly. Now I wanted to do picking to select the tiles. For this, I tried coming up with equations of my own (including reversing the drawing equation) and I tried multiple solutions I found on the internet and none of these solutions worked. Trying out lots of solutions, I came upon one that didn't work, but it seemed like an axis was just inverted. I fiddled around with the equations and somehow managed to get it to actually work (but have no idea why it works), but while it's close, it still doesn't work. I'm not really sure how to describe the behaviour, but it changes the selection at wrong places, while being fairly close (sometimes spot on, sometimes a tile off, I believe never more off than the adjacent tile). This is the code I have for getting which tile coordinates are selected: public Point? ScreenToIso(int screenX, int screenY, int tileHeight, int offsetX, int offsetY) { Point? newPoint = null; int nX = -1; int nY = -1; int tX = screenX - offsetX; int tY = screenY - offsetY; nX = -(tY - tX / 2) / tileHeight; nY = (tY + tX / 2) / tileHeight; newPoint = new Point(nX, nY); return newPoint; } I have no idea why this code is so close, especially considering it doesn't even use the tile width and all my attempts to write an equation myself or use a solution I googled failed. Also, I don't think this code accounts for the area outside the "tile" (the transparent part of the tile image), for which I intend to add a color map, but even if that's true, it's not the problem as the selection sometimes switches on approx 25% or 75% of width or height. 
I'm thinking I've stumbled down a wrong path and need to backtrack, but at this point I'm not sure what to do, so I hope someone can shed some light on my error or point me to the right path. It may be worth mentioning that my goal is not only to pick the tile. Each main tile will be divided into 5x5 smaller tiles, which won't be drawn separately from the whole main tile but will still need to be picked out. I think a color map of a main tile, with different colors for different coordinates within it, should take care of that though; it would fall within using a color map for the main tile anyway (for the transparent parts of the tile, meaning parts that possibly belong to other tiles).
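
    For what it's worth, ScreenToIso does not have to be guessed: it can be derived by inverting the IsoToScreen formula in the draw code above. With tX = screenX - offsetX and tY = screenY - offsetY, drawing gives tX = widthHalf * (isoX + isoY) and tY = heightHalf * (-isoX + isoY), so isoX = (tX/widthHalf - tY/heightHalf) / 2 and isoY = (tX/widthHalf + tY/heightHalf) / 2. A sketch of that exact inverse (note that it needs both half-sizes, which is why a version built from tileHeight alone can only ever be approximately right):

        // Exact inverse of IsoToScreen from the code above. Floor (rather than
        // truncate) so negative tile coordinates don't round toward zero.
        public Point ScreenToIso(int screenX, int screenY,
                                 int widthHalf, int heightHalf,
                                 int offsetX, int offsetY)
        {
            float tX = screenX - offsetX;
            float tY = screenY - offsetY;

            float a = tX / widthHalf;    //  isoX + isoY
            float b = tY / heightHalf;   // -isoX + isoY

            int isoX = (int)Math.Floor((a - b) / 2f);
            int isoY = (int)Math.Floor((a + b) / 2f);

            return new Point(isoX, isoY);
        }

    Depending on whether IsoToScreen returns the sprite's top-left corner or the diamond's top vertex, the input may still need to be shifted by half a tile before the divide, and the planned colour map is a good way to resolve clicks that land in the transparent corners of a tile image.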

    Read the article

  • Map format for 3d open world

    - by Pacha
    I am making an open-world 3D platformer in Ogre3D, and I have no idea what kind of 3D map file format I should use for it. I want to make low-polygon, blocky-style objects: probably rectangles and other geometric shapes that don't have curved edges. Some of those blocks will have properties, like being climbable, or they might move. I was wondering what the best approach would be for building the map (just one level, as it is open).

    Read the article

  • How do you set up PhysFS for use in a game?

    - by ThePlan
    After my recent question on GD I was advised to use PhysFS to pack all my game data into one file. So I have, and the decision wasn't easy, because I tried out every library suggested in my answers and none had a single good tutorial; in fact, PhysFS is the most poorly documented library I've ever seen. After attempting to set up PhysFS in my game I realized it's not as simple as adding the headers to the project; it appears to be something much more complicated. In fact, after my first attempt to install PhysFS the compiler ran out of room to display errors, reaching the critical count of 50 errors. So basically what I'm asking here is: how can I set up PhysFS for my game? I'm using the Code::Blocks IDE on Windows XP SP3.

    Read the article

  • How to optimise mesh data

    - by Wardy
    So I have some procedurally generated mesh data and I want to reduce it down to its minimum number of verts. In case it matters, this is a Unity project. Working from a simple example, let's assume a typical flat surface of points, 2 by 3. The point/vertex at [1,1] is used in many triangles. I've generated a mesh for a voxel-type engine that adds verts to a list based on face visibility, and now I want to remove all the duplicates. Can anyone come up with an efficient way of doing this? Because what I have is so bad it's not even funny (and I don't even think it's logically correct):

        private void Optimize()
        {
            Vector3 v;
            Vector3 v2;
            for (int i = 0; i < Vertices.Count; i++)
            {
                v = Vertices[i];
                for (int j = i + 1; j < Vertices.Count; j++)
                {
                    v2 = Vertices[j];
                    if (v.x == v2.x && v.y == v2.y && v.z == v2.z)
                    {
                        for (int ind = 0; ind < Indices.Count; ind++)
                        {
                            if (Indices[ind] == j)
                            {
                                Indices[ind] = i;
                            }
                            else if (Indices[ind] > j && Indices[ind] > 0)
                                Indices[ind]--;
                        }
                        Vertices.RemoveAt(j);
                        Uvs.RemoveAt(j);
                        Normals.RemoveAt(j);
                    }
                }
            }
        }

    EDIT: OK, I managed to get this (code sample above updated) to render an "optimised" set of verts, but the UV data is all wrong now, which would make sense because I'm basically just removing any UV vector that represents a UV coordinate for a removed vert and not actually considering what I need to do to "fix the tri", so to speak. The code now seemingly does work, but it's quite time consuming; still looking to further optimise.
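
    For reference, the usual way to do this in roughly linear time is a single pass over the index list with a dictionary keyed on the whole vertex (position, UV and normal together), building brand-new lists instead of removing from the old ones. Keying on the whole vertex is also what keeps the UVs correct: two verts that share a position but not a UV must stay separate. A sketch under the assumptions that Vertices/Uvs/Normals/Indices are reassignable lists and that exact float equality is acceptable as a key (fine when the duplicates come from the same generator; quantise the key otherwise). The value-tuple key needs C# 7 or newer; on older Unity versions use a small struct key with Equals/GetHashCode instead.

        using System.Collections.Generic;
        using UnityEngine;

        // Rebuild the mesh lists in one pass: each unique (position, uv, normal)
        // combination gets a new index the first time it is seen, and every later
        // occurrence reuses that index. O(n) with one hash lookup per index.
        private void Optimize()
        {
            var remap       = new Dictionary<(Vector3, Vector2, Vector3), int>();
            var newVertices = new List<Vector3>();
            var newUvs      = new List<Vector2>();
            var newNormals  = new List<Vector3>();
            var newIndices  = new List<int>(Indices.Count);

            for (int i = 0; i < Indices.Count; i++)
            {
                int old = Indices[i];
                var key = (Vertices[old], Uvs[old], Normals[old]);

                if (!remap.TryGetValue(key, out int newIndex))
                {
                    newIndex = newVertices.Count;
                    remap.Add(key, newIndex);
                    newVertices.Add(Vertices[old]);
                    newUvs.Add(Uvs[old]);
                    newNormals.Add(Normals[old]);
                }
                newIndices.Add(newIndex);
            }

            Vertices = newVertices;
            Uvs      = newUvs;
            Normals  = newNormals;
            Indices  = newIndices;
        }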

    Read the article

  • Pokemon Yellow warp transitions

    - by Alex Koukoulas
    So I've been trying to make a pretty accurate clone of the good old Pokemon Yellow for quite some time now, and one subtle mechanic has puzzled me. As you can see in the uploaded image, there is a certain colour manipulation done in two stages after entering a warp to another game location (such as taking stairs or entering a building). One easy (and sloppy) way of achieving this, and the one I have been using so far, is to make three copies of each image rendered on the screen, each with its colours adjusted to match one stage of the transition. Of course, after a while this becomes tremendously time consuming. So my question is: does anyone know a better way of achieving this colour manipulation effect using Java? Thanks in advance, Alex

    Read the article

  • 2D lighting day/night cycle

    - by Richard
    Off the back of this post, in which I asked two questions and received one answer (which I accepted as valid), I have decided to re-ask the outstanding question. I have implemented point lights with shadow casting as shown here, but I would also like an overall map light with no point source. The map setup is a top-down 2D 50x50 pixel grid. How would I go about implementing day/night cycle lighting across the map?
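
    A common way to get a map-wide light with no point source is to keep a single ambient colour derived from the time of day and modulate everything that is drawn (or a full-screen light map) by it; the existing point lights then add on top of that ambient term. A minimal XNA-style sketch, with the colour stops and day length chosen arbitrarily:

        using Microsoft.Xna.Framework;

        // Global ambient light driven by a day/night clock. Tint world sprites by
        // AmbientColor (or render it into the same light map the point lights use).
        public class DayNightCycle
        {
            public float DayLengthSeconds = 120f;   // one full day/night cycle
            public float TimeOfDay;                 // 0..1, 0 = midnight, 0.5 = noon
            public Color AmbientColor = Color.White;

            private static readonly Color Night = new Color(20, 24, 60);
            private static readonly Color Day   = new Color(255, 255, 255);

            public void Update(GameTime gameTime)
            {
                TimeOfDay += (float)gameTime.ElapsedGameTime.TotalSeconds / DayLengthSeconds;
                TimeOfDay -= (float)System.Math.Floor(TimeOfDay);   // wrap to 0..1

                // Brightness follows a smooth curve: 0 at midnight, 1 at noon.
                float brightness = 0.5f - 0.5f * (float)System.Math.Cos(TimeOfDay * MathHelper.TwoPi);
                AmbientColor = Color.Lerp(Night, Day, brightness);
            }
        }

    Drawing with spriteBatch.Draw(texture, position, cycle.AmbientColor) is then enough for a simple tint-based version.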

    Read the article

  • Why are my players drawn to the side of my viewport?

    - by Jetbuster
    Following this admittedly brilliant and clean 2d camera class I have a camera on each player, and it works for multiplayer and i've divided the screen into two sections for split screen by giving each camera a viewport. However in the game it looks like this I'm not sure if thats their position relative to the screen or what The relevant gameScreen code, the makePlayers is setup so it could theoretically work for up to 4 players private void makePlayers() { int rowCount = 1; if (NumberOfPlayers > 2) rowCount = 2; players = new Player[NumberOfPlayers]; for (int i = 0; i < players.Length; i++) { int xSize = GameRef.Window.ClientBounds.Width / 2; int ySize = GameRef.Window.ClientBounds.Height / rowCount; int col = i % rowCount; int row = i / rowCount; int xPoint = 0 + xSize * row; int yPoint = 0 + ySize * col; Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize); Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2, viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2); players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport); } //players[1].Keyboard = true; } public override void Draw(GameTime gameTime) { base.Draw(gameTime); foreach (Player player in players) { GraphicsDevice.Viewport = player.PlayerCamera.ViewPort; GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform); map.Draw(GameRef.spriteBatch); // Draw the Player player.Draw(GameRef.spriteBatch); // Draw UI screen elements GraphicsDevice.Viewport = Viewport; ControlManager.Draw(GameRef.spriteBatch); GameRef.spriteBatch.End(); } } the player's initialize and draw methods are like so internal void Initialize() { this.score = 0; this.angle = (float)(Math.PI * 0 / 180);//Start sprite at it's default rotation int width = utils.scaleInt(picture.Width, imageScale); int height = utils.scaleInt(picture.Height, imageScale); this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2), width, height, Color.Black, game.Window.ClientBounds); playerCamera.Initialize(); } #region Methods public void Draw(SpriteBatch spriteBatch) { //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y); //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y); Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2); hitBox.Draw(spriteBatch); utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red); spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f); } as I said I think I'm gonna need to do something with the render position but I'm to entirely sure what or how it would be elegant to say the least
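
    It is hard to diagnose without the camera class itself, but a very common cause of exactly this symptom in split screen is that the camera's centring translation is built from the full back-buffer size rather than from the per-player viewport, which pushes everything toward one side of each half. A hedged sketch of a per-viewport transform (names invented; pass the result to SpriteBatch.Begin as the draw code above already does):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Build the 2D camera transform from the viewport this camera renders
        // into, not from the whole window: the final translation centres the
        // camera position in *this* viewport.
        public static Matrix BuildCameraTransform(Vector2 cameraPosition, float zoom,
                                                  float rotation, Viewport viewport)
        {
            return Matrix.CreateTranslation(new Vector3(-cameraPosition, 0f)) *
                   Matrix.CreateRotationZ(rotation) *
                   Matrix.CreateScale(zoom, zoom, 1f) *
                   Matrix.CreateTranslation(new Vector3(viewport.Width * 0.5f,
                                                        viewport.Height * 0.5f, 0f));
        }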

    Read the article

  • How are components properly instantiated and used in XNA 4.0?

    - by Christopher Horenstein
    I am creating a simple input component to hold on to actions and key states, and a short history of the last ten or so states. The idea is that everything that is interested in input will ask this component for the latest input, and I am wondering where I should create it. I am also wondering how I should create components that are specific to my game objects - I envision them as variables, but then how do their Update/Draw methods get called? What I'm trying to ask is, what are the best practices for adding components to the proper collections? Right now I've added my input component in the main Game class that XNA creates when everything is first initialized, saying something along the lines of this.Components.Add(new InputComponent(this)), which looks a little odd to me, and I'd still want to hang onto that reference so I can ask it things. An answer to my input component's dilemma is great, but also I'm guessing there is a right way to do this in general in XNA.
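
    The usual XNA pattern for this exact case is to add the component to Game.Components (so its Update gets called) and also register it with Game.Services, so anything that can reach the Game can fetch it without the Game holding it as a public field. A short sketch:

        // In Game.Initialize (or the constructor): register once.
        var input = new InputComponent(this);
        Components.Add(input);                                // Update() now gets called by XNA
        Services.AddService(typeof(InputComponent), input);   // discoverable from anywhere

        // Anywhere that has a reference to the Game (screens, entities, components):
        var inputService = (InputComponent)Game.Services.GetService(typeof(InputComponent));

    Components that belong to a single game object are a different case: they don't have to be GameComponents at all, and it is common to keep them as plain objects whose Update/Draw the owning entity calls from its own Update/Draw.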

    Read the article

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results- that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything to be in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. As follows is the relevant code: class Plane { public: Plane() { r0 = Math::Vector(0,0,0); normal = Math::Vector(0,1,0); } Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3) { r0 = p1; normal = Math::Cross((p2 - p1), (p3 - p1)); } Math::Vector r0; Math::Vector normal; }; This class represents one plane as a point and a normal vector. class Frustum { public: Frustum( const std::array<Math::Vector, 8>& points ) { planes[0] = Plane(points[0], points[1], points[2]); planes[1] = Plane(points[4], points[5], points[6]); planes[2] = Plane(points[0], points[1], points[4]); planes[3] = Plane(points[2], points[3], points[6]); planes[4] = Plane(points[0], points[2], points[4]); planes[5] = Plane(points[1], points[3], points[5]); } Plane planes[6]; }; The points are passed in order where (the inverse of) each bit of the index of each point indicates whether it's the left, top, and back of the frustum, respectively. As such, I just picked any three points where they all shared one bit in common to define the planes. My intersection test is as follows (based on this): bool Intersects(Math::AABB lhs, const Frustum& rhs) const { for(int i = 0; i < 6; i++) { Math::Vector pvertex = lhs.TopRightFurthest; Math::Vector nvertex = lhs.BottomLeftClosest; if (rhs.planes[i].normal.x <= -0.0f) { std::swap(pvertex.x, nvertex.x); } if (rhs.planes[i].normal.y <= -0.0f) { std::swap(pvertex.y, nvertex.y); } if (rhs.planes[i].normal.z <= -0.0f) { std::swap(pvertex.z, nvertex.z); } if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) { return false; } } return true; } Also of note is that because I'm using a left-handed co-ordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
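
    One concrete thing stands out in the test above: Math::Dot(rhs.planes[i].r0, nvertex) takes the dot product with the plane's point rather than its normal, so the value compared against zero is not a signed distance to the plane; with a point/normal representation the distance is dot(normal, vertex - r0). The usual positive-vertex form of the box/plane rejection is sketched below in C# with System.Numerics types (the original is C++, but the math is identical). It also assumes all six plane normals point into the frustum, which the winding used in the Frustum constructor has to guarantee.

        using System.Numerics;

        static class FrustumCull
        {
            // AABB-vs-plane via the "positive vertex": the box corner furthest
            // along the plane normal. If even that corner is behind the plane
            // (negative signed distance), the whole box is outside it.
            static bool BoxOutsidePlane(Vector3 boxMin, Vector3 boxMax,
                                        Vector3 planePoint, Vector3 planeNormal)
            {
                var pVertex = new Vector3(
                    planeNormal.X >= 0 ? boxMax.X : boxMin.X,
                    planeNormal.Y >= 0 ? boxMax.Y : boxMin.Y,
                    planeNormal.Z >= 0 ? boxMax.Z : boxMin.Z);

                // Signed distance from the plane through planePoint with normal planeNormal.
                return Vector3.Dot(planeNormal, pVertex - planePoint) < 0f;
            }

            // The box intersects the frustum only if no plane rejects it outright.
            public static bool Intersects(Vector3 boxMin, Vector3 boxMax,
                                          (Vector3 point, Vector3 normal)[] planes)
            {
                foreach (var (point, normal) in planes)
                    if (BoxOutsidePlane(boxMin, boxMax, point, normal))
                        return false;
                return true;
            }
        }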

    Read the article

  • Game engine like Unity 3D that allow me to use .NET code

    - by Pking
    I've been looking at Unity 3D for developing a 3D PC game, and I really like the scene editor and how it simplifies the process of constructing 3D scenes and managing assets, animations, transitions, etc. However, I don't want to restrict myself to using Unity 3D scripts for handling every bit of game logic in the game. For example, if I want to construct an RPG dialogue system, I don't want to do it with Unity 3D scripts; I'd like to use C#/.NET. Also, I might want to use, say, Windows Azure and SQL Azure as a backend, and use third-party .NET libraries such as Reactive Extensions. Is there a .NET engine out there that helps me with asset loading, animations, physics, transitions, etc., and has a scene editor, but allows me to plug it into a Visual Studio .NET project? Thanks

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.

    Read the article

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.) Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour like 'return to base') needs to be constantly checked to see if it is still running, has failed or has completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)? My thoughts so far: action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list). The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node - so the tree has so far determined that we ought to still be trying this action) we know whether the action has completed, is still running, etc. If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list - I am thinking that I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but this seems like very close logic to behaviour trees, and I don't want to be duplicating behaviour. This leaves me with some questions: 1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how. 2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out actual queued-up actions) sound like a reasonable approach? 3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process. (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about what action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!) I hope I have expressed my questions adequately! Thanks in advance for any help :)
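
    A minimal sketch of the staggering part (question 3), under the assumptions sketched in the question: the tree is shared and stateless, each entity carries a small amount of per-entity state (its action queue and the status the running action last reported), the tree only ticks every N frames, and actions run every frame. All names here are invented.

        using System.Collections.Generic;

        enum ActionStatus { Running, Succeeded, Failed }

        interface IAction
        {
            ActionStatus Update(float dt);      // cheap per-frame work (move, animate, ...)
        }

        interface IBehaviourTree
        {
            void Tick(AiAgent agent);           // may clear or refill agent.ActionQueue
        }

        // Per-entity AI state: what the tree decided last time and what the
        // currently running action reported.
        class AiAgent
        {
            public int NextThinkFrame;
            public int ThinkInterval = 10;      // tree ticks every N frames
            public ActionStatus LastActionStatus = ActionStatus.Succeeded;
            public Queue<IAction> ActionQueue = new Queue<IAction>();
        }

        class AiScheduler
        {
            public void Update(int frame, float dt, IEnumerable<AiAgent> agents, IBehaviourTree tree)
            {
                foreach (var agent in agents)
                {
                    // 1) Actions run every frame, regardless of when the tree last ran.
                    if (agent.ActionQueue.Count > 0)
                    {
                        var status = agent.ActionQueue.Peek().Update(dt);
                        agent.LastActionStatus = status;
                        if (status != ActionStatus.Running)
                            agent.ActionQueue.Dequeue();
                    }

                    // 2) The tree ticks only every ThinkInterval frames; when it reaches
                    //    the leaf it left "running", it reads LastActionStatus to resolve it.
                    if (frame >= agent.NextThinkFrame)
                    {
                        tree.Tick(agent);
                        agent.NextThinkFrame = frame + agent.ThinkInterval;
                    }
                }
            }
        }

    Giving each agent a different initial NextThinkFrame spreads the tree ticks across frames, which is where the AI-budgeting benefit comes from.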

    Read the article

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at about 30 fps, and some of them had a noticeable reaction time of about 0.1 s. I didn't know why. Now that I'm designing my framework for a cross-platform game, I know why: they were probably preparing the next frame while rendering the previous one.

        RENDER 1  | RENDER 2  | RENDER 3  | RENDER 4
        PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5

    I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the first frame at that point, the result only shows up in the fourth frame. So it takes 3/FPS seconds for results to appear; at 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework so it responds to user interaction quickly?
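
    The usual way to keep input latency near one frame is to sample input immediately before the simulation step and render the result of that same step in the same loop iteration, rather than preparing frames ahead of the renderer on the CPU side. A minimal skeleton (fixed timestep and frame limiting left out for brevity):

        using System.Diagnostics;

        // Poll -> update -> render in one iteration: what the player did this
        // frame is visible in this frame's output, so CPU-side latency is ~1/FPS.
        abstract class GameLoop
        {
            protected bool Running = true;

            protected abstract void PollInput();
            protected abstract void Update(float dt);
            protected abstract void Render();

            public void Run()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;

                while (Running)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    float dt = (float)(now - previous);
                    previous = now;

                    PollInput();   // read devices as late as possible
                    Update(dt);    // simulate with this frame's input
                    Render();      // draw and present the state just produced
                }
            }
        }

    The GPU driver may still buffer a frame or two of its own; most APIs expose a cap on queued frames if that matters, but keeping the CPU side to "prepare and render the same frame" is the part the framework design controls.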

    Read the article

  • How do I implement SkyBox in xna 4.0 Reach Profile (for Windows Phone 7)?

    - by Biny
    I'm trying to Implement SkyBox in my phone game. Most of the samples in the web are for HiDef profile, and they are using custom effects (that not supported on Windows Phone). I've tried to follow this guide. But for some reason my SkyBox is not rendered. This is my SkyBox class: using System; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Rocuna.Core; using Rocuna.GameEngine.Graphics; using Rocuna.GameEngine.Graphics.Components; namespace Rocuna.GameEngine.Extension.WP7.Graphics { /// <summary> /// Sky box element for phone games. /// </summary> public class SkyBox : SkyBoxBase { /// <summary> /// Initializes a new instance of the <see cref="SkyBoxBase"/> class. /// </summary> /// <param name="game">The Game that the game component should be attached to.</param> public SkyBox(TextureCube cube, Game game) : base(game) { Cube = cube; CubeFaces = new Texture2D[6]; PositionOffset = new Vector3(20, 20, 20); CreateGraphic(512); StripTexturesFromCube(); InitializeData(Game.GraphicsDevice); } #region Properties /// <summary> /// Gets or sets the position offset. /// </summary> /// <value> /// The position offset. /// </value> public Vector3 PositionOffset { get; set; } /// <summary> /// Gets or sets the position. /// </summary> /// <value> /// The position. /// </value> public Vector3 Position { get; set; } /// <summary> /// Gets or sets the cube. /// </summary> /// <value> /// The cube. /// </value> public TextureCube Cube { get; set; } /// <summary> /// Gets or sets the pixel array. /// </summary> /// <value> /// The pixel array. /// </value> public Color[] PixelArray { get; set; } /// <summary> /// Gets or sets the cube faces. /// </summary> /// <value> /// The cube faces. /// </value> public Texture2D[] CubeFaces { get; set; } /// <summary> /// Gets or sets the vertex buffer. /// </summary> /// <value> /// The vertex buffer. /// </value> public VertexBuffer VertexBuffer { get; set; } /// <summary> /// Gets or sets the index buffer. /// </summary> /// <value> /// The index buffer. /// </value> public IndexBuffer IndexBuffer { get; set; } /// <summary> /// Gets or sets the effect. /// </summary> /// <value> /// The effect. 
/// </value> public BasicEffect Effect { get; set; } #endregion protected override void LoadContent() { } public override void Update(GameTime gameTime) { var camera = Game.GetService<GraphicManager>().CurrentCamera; this.Position = camera.Position + PositionOffset; base.Update(gameTime); } public override void Draw(GameTime gameTime) { DrawOrder = int.MaxValue; var graphics = Effect.GraphicsDevice; graphics.DepthStencilState = new DepthStencilState() { DepthBufferEnable = false }; graphics.RasterizerState = new RasterizerState() { CullMode = CullMode.None }; graphics.BlendState = new BlendState(); graphics.SamplerStates[0] = SamplerState.AnisotropicClamp; graphics.SetVertexBuffer(VertexBuffer); graphics.Indices = IndexBuffer; Effect.Texture = CubeFaces[0]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 0, 2); Effect.Texture = CubeFaces[1]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 6, 2); Effect.Texture = CubeFaces[2]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 12, 2); Effect.Texture = CubeFaces[3]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 18, 2); Effect.Texture = CubeFaces[4]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 24, 2); Effect.Texture = CubeFaces[5]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 30, 2); base.Draw(gameTime); } #region Fields private List<VertexPositionNormalTexture> _vertices = new List<VertexPositionNormalTexture>(); private List<ushort> _indices = new List<ushort>(); #endregion #region Private methods private void InitializeData(GraphicsDevice graphicsDevice) { VertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormalTexture), _vertices.Count, BufferUsage.None); VertexBuffer.SetData<VertexPositionNormalTexture>(_vertices.ToArray()); // Create an index buffer, and copy our index data into it. IndexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), _indices.Count, BufferUsage.None); IndexBuffer.SetData<ushort>(_indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. 
Effect = new BasicEffect(graphicsDevice); Effect.TextureEnabled = true; Effect.EnableDefaultLighting(); } private void CreateGraphic(float size) { Vector3[] normals = { Vector3.Right, Vector3.Left, Vector3.Up, Vector3.Down, Vector3.Backward, Vector3.Forward, }; Vector2[] textureCoordinates = { Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, }; var index = 0; foreach (var normal in normals) { var side1 = new Vector3(normal.Z, normal.X, normal.Y); var side2 = Vector3.Cross(normal, side1); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 1); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 3); AddVertex((normal - side1 - side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal - side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 - side2) * size / 2, normal, textureCoordinates[index++]); } } protected void StripTexturesFromCube() { PixelArray = new Color[Cube.Size * Cube.Size]; for (int s = 0; s < CubeFaces.Length; s++) { CubeFaces[s] = new Texture2D(Game.GraphicsDevice, Cube.Size, Cube.Size, false, SurfaceFormat.Color); switch (s) { case 0: Cube.GetData<Color>(CubeMapFace.PositiveX, PixelArray); CubeFaces[s].SetData<Color>(PixelArray); break; case 1: Cube.GetData(CubeMapFace.NegativeX, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 2: Cube.GetData(CubeMapFace.PositiveY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 3: Cube.GetData(CubeMapFace.NegativeY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 4: Cube.GetData(CubeMapFace.PositiveZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 5: Cube.GetData(CubeMapFace.NegativeZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; } } } protected void AddVertex(Vector3 position, Vector3 normal, Vector2 textureCoordinates) { _vertices.Add(new VertexPositionNormalTexture(position, normal, textureCoordinates)); } protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); _indices.Add((ushort)index); } protected int CurrentVertex { get { return _vertices.Count; } } #endregion } }
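
    One thing that stands out in the code above: the BasicEffect's World, View and Projection are never assigned (Update moves Position, but nothing feeds the camera matrices into Effect), so unless something outside this class sets them, the box is drawn with identity matrices and never appears where the camera is looking. A hedged sketch of what Draw would need before the pass.Apply() calls, assuming the camera object obtained from GraphicManager exposes View and Projection properties:

        // At the top of Draw(), before applying the passes (the camera property
        // names are assumptions -- use whatever the engine's camera exposes):
        var camera = Game.GetService<GraphicManager>().CurrentCamera;
        Effect.World      = Matrix.CreateTranslation(Position);   // keep the box centred on the camera
        Effect.View       = camera.View;
        Effect.Projection = camera.Projection;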

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to able to navigate more clearly around the 3D space, but I can't render it: Grid2D class, creates a grid of a certain size at a location and should just draw lines. public class Grid2D : IShape { private VertexPositionColor[] _vertices; private Vector2 _size; private Vector3 _location; private int _faces; public Grid2D(Vector2 size, Vector3 location, Color color) { float x = 0, y = 0; if (size.X < 1f) { size.X = 1f; } if (size.Y < 1f) { size.Y = 1f; } _size = size; _location = location; List<VertexPositionColor> vertices = new List<VertexPositionColor>(); _faces = 0; for (y = -size.Y; y <= size.Y; y++) { vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color)); vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color)); _faces++; } for (x = -size.X; x <= size.X; x++) { vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color)); vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color)); _faces++; } _vertices = vertices.ToArray(); } public void Render(GraphicsDevice device) { device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces); } } Like this: +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ Anyone knows what I'm doing wrong? If I add a Shape without texture, it's set automatically to VertexColorEnabled and TextureEnabled = false. This is how I render it: foreach (RenderObject render in _renderObjects) { render.Effect.Projection = projection; render.Effect.View = view; render.Effect.World = world; foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes) { pass.Apply(); try { // Could be a Grid2D render.Shape.Render(_device); } catch { throw; } } } Exception is thrown: The current vertex shader declaration does not include all the elements required by the current Vertex Shader. Normal0 is missing. Simply put, I can't figure out how to draw a few lines. I want to draw them one at a time and I guess that's the problem I haven't figured out, and even when I tried rendering vertices[i], vertices[i+1] and primitiveCount = 1, vertices = 2, and so on it didn't work either. Any suggestions?
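
    The "Normal0 is missing" error means the vertex declaration being drawn (VertexPositionColor, which has only a position and a colour) lacks an element that the currently selected BasicEffect technique requires, and the BasicEffect techniques that require a normal are the lit ones. For plain coloured lines the effect has to be configured so an unlit, vertex-coloured technique is chosen, something like the sketch below (the DrawUserPrimitives call itself looks right: a LineList with _faces line segments and 2 * _faces vertices).

        // Configure the BasicEffect used for untextured shapes so that its
        // selected technique matches VertexPositionColor: colour yes, texture no,
        // and no lighting (lit techniques are what demand Normal0).
        var basicEffect = render.Effect as BasicEffect;   // assuming RenderObject wraps a BasicEffect
        if (basicEffect != null)
        {
            basicEffect.VertexColorEnabled = true;
            basicEffect.TextureEnabled     = false;
            basicEffect.LightingEnabled    = false;
        }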

    Read the article

  • Building View Matrix in Direct3D11

    - by Balls
    Am I doing it right? I converted this:

        m_ViewMatrix = XMMatrixLookAtLH(XMLoadFloat3(&m_Position), lookAtVector, upVector);

    to this:

        XMVECTOR vz = XMVector3Normalize( lookAtVector - XMLoadFloat3(&m_Position) );
        XMVECTOR vx = XMVector3Normalize( XMVector3Cross( upVector, vz ) );
        XMVECTOR vy = XMVector3Cross( vz, vx );

        m_ViewMatrix.r[0] = vx;
        m_ViewMatrix.r[1] = vy;
        m_ViewMatrix.r[2] = vz;
        m_ViewMatrix.r[3] = XMLoadFloat3(&m_Position);

        m_ViewMatrix.r[0].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[1].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[2].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[3].m128_f32[3] = 1.0f;

        m_ViewMatrix = XMMatrixInverse( &XMMatrixDeterminant(m_ViewMatrix), m_ViewMatrix );

    Everything looks fine when I run it. Another question: I saw on this site (http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html) that he subtracted lookAt from position for his vector vz. I tried it, but it gave me a wrong view matrix. Can anyone check my code? I'm studying linear algebra right now; it sucks that my course doesn't include it. Thank you, Balls
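
    For reference, a look-at view matrix is the inverse of the camera's world matrix, and because that world matrix is only a rotation plus a translation the inverse can be written directly (transpose the 3x3 basis and dot the translation against it) instead of calling a general matrix inverse. A sketch of that closed form in C# with System.Numerics, useful for checking the hand-built DirectXMath version against a known formula; note that in a left-handed setup the forward axis is target - position, whereas the linked article builds a right-handed basis, which is why copying its position - target subtraction flips the result.

        using System.Numerics;

        static class ViewMatrixHelper
        {
            // Left-handed look-at, written out directly: the rows hold the camera
            // basis transposed, and the last row is the position projected onto
            // that basis and negated. Matches the standard D3D LookAtLH layout.
            public static Matrix4x4 LookAtLH(Vector3 position, Vector3 target, Vector3 up)
            {
                Vector3 zaxis = Vector3.Normalize(target - position);          // forward (LH)
                Vector3 xaxis = Vector3.Normalize(Vector3.Cross(up, zaxis));   // right
                Vector3 yaxis = Vector3.Cross(zaxis, xaxis);                   // true up

                return new Matrix4x4(
                    xaxis.X, yaxis.X, zaxis.X, 0f,
                    xaxis.Y, yaxis.Y, zaxis.Y, 0f,
                    xaxis.Z, yaxis.Z, zaxis.Z, 0f,
                    -Vector3.Dot(xaxis, position),
                    -Vector3.Dot(yaxis, position),
                    -Vector3.Dot(zaxis, position), 1f);
            }
        }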

    Read the article

  • Show the path a body will follow after a linear impulse is applied

    - by Farooq Arshed
    I am making a game with AndEngine and Box2D. I have a dynamic body, and I apply a linear impulse to it to move it around when the user touches the screen. Now I want to show the path the body will follow once the user has touched. If you have played Angry Birds, Basketball Shoot or any other game that shows the path of a projectile, you will get my point. I want to show the white dots that are drawn in those games.
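
    The dots can be drawn by sampling the closed-form ballistic path before the impulse is applied: the initial velocity is the impulse divided by the body's mass, and each dot sits at p(t) = p0 + v0 * t + 0.5 * g * t^2. A minimal sketch with invented names (the engine-specific parts, converting between Box2D and screen units and drawing a sprite per dot, are left out); Box2D integrates in discrete steps and applies damping, so this is an approximation, but it is usually close enough for aiming dots.

        using System.Numerics;

        static class Trajectory
        {
            // Predicted positions for a body launched from 'start' by 'impulse'
            // under constant gravity; one entry per dot, spaced dotSpacingSeconds apart.
            public static Vector2[] PredictPath(Vector2 start, Vector2 impulse, float mass,
                                                Vector2 gravity, int dotCount, float dotSpacingSeconds)
            {
                Vector2 v0 = impulse / mass;   // an impulse changes velocity by impulse / mass
                var points = new Vector2[dotCount];
                for (int i = 0; i < dotCount; i++)
                {
                    float t = (i + 1) * dotSpacingSeconds;
                    points[i] = start + v0 * t + 0.5f * t * t * gravity;
                }
                return points;
            }
        }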

    Read the article
