Search Results

Search found 37616 results on 1505 pages for 'model driven development'.

Page 567/1505 | < Previous Page | 563 564 565 566 567 568 569 570 571 572 573 574  | Next Page >

  • How do I adjust the origin of rotation for a group of sprites?

    - by Jon
    I am currently grouping sprites together, then applying a rotation transformation on draw:

        private void UpdateMatrix(ref Vector2 origin, float radians)
        {
            Vector3 matrixorigin = new Vector3(origin, 0);
            _rotationMatrix = Matrix.CreateTranslation(-matrixorigin) *
                              Matrix.CreateRotationZ(radians) *
                              Matrix.CreateTranslation(matrixorigin);
        }

    Here the origin is the centermost point of my group of sprites, and I apply this transformation to each sprite in the group. My problem is that when I adjust the point of origin, the entire sprite group repositions itself on screen. How can I separate the point of rotation used in the transformation from the position of the sprite group? Is there a better way of creating this transformation matrix?

    EDIT Here is the relevant part of the Draw() function:

        Matrix allTransforms = _rotationMatrix * camera.GetTransformation();
        spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null, null, null, allTransforms);
        for (int i = 0; i < _map.AllParts.Count; i++)
        {
            for (int j = 0; j < _map.AllParts[0].Count; j++)
            {
                spriteBatch.Draw(_map.AllParts[i][j].Texture, _map.AllParts[i][j].Position, null,
                                 Color.White, 0, _map.AllParts[i][j].Origin, 1.0f, SpriteEffects.None, 0f);
            }
        }

    This all works fine; again, the problem is that when a rotation is set and the point of origin is changed, the sprite group's position is offset on screen. I am trying to figure out a way to adjust the point of origin without causing a shift in position.

    EDIT 2 At this point I'm looking for workarounds, as this is not working. Does anyone know of a better way to rotate a group of sprites in XNA? I need a method that lets me modify the point of rotation (origin) without affecting the position of the sprite group on screen.
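
    One common way to stop the shift (sketched below with illustrative names, not the poster's code) is to keep the rotation pivot and the group's world position as two separate translations: sprites are expressed relative to the pivot, rotated, and only then translated to where the group lives in the world. Changing the pivot then only changes what the group spins around, not where it is drawn.

        // A minimal sketch, assuming the sprite positions fed to Draw are relative to the group's pivot.
        Matrix BuildGroupTransform(Vector2 groupWorldPos, Vector2 localPivot, float radians)
        {
            return Matrix.CreateTranslation(new Vector3(-localPivot, 0f))    // move the pivot to the origin
                 * Matrix.CreateRotationZ(radians)                           // rotate around it
                 * Matrix.CreateTranslation(new Vector3(groupWorldPos, 0f)); // place the group in the world
        }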

  • Large resolution differences

    - by Robin Betka
    I want to develop a game for multiple devices such as PC, Android, or iOS. I want it to be in 1080p, but that means a massive scale-down for the smartphones. I know how to do that: just render everything to a 1080p render target and then render it to the screen at a smaller size. But what should I do so that the scaling down doesn't look bad and blurry? I can't do it vector-based or anything, because the sprites simply need a specific size. Should I make the sprites power-of-two sized to get some nice mipmapping? Which other settings can I use? Or should I rather go with a lower resolution and accept a slightly worse-looking PC version? Performance doesn't seem to be a problem for me, so it would be sad not to use 1080p because of other problems.
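
    A minimal XNA-flavoured sketch of the render-target route (the 1920x1080 size, DrawScene and spriteBatch are assumptions): draw the whole game into a fixed 1080p target, then stretch that texture onto whatever back buffer the device actually has, letting linear filtering do the scaling. For very large scale factors, downscaling in halving steps (or a mipmapped target) tends to look less blurry than one big jump.

        // created once
        RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice, 1920, 1080,
            false, SurfaceFormat.Color, DepthFormat.Depth24);

        // each frame: render the game into the 1080p target...
        GraphicsDevice.SetRenderTarget(sceneTarget);
        GraphicsDevice.Clear(Color.Black);
        DrawScene();                                  // assumed: the normal scene drawing code

        // ...then draw the target scaled to the real screen with linear filtering
        GraphicsDevice.SetRenderTarget(null);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.LinearClamp, null, null);
        spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();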

  • How does Starcraft 2 load its metadata?

    - by chobok
    Let's say you are playing a Starcraft 2 melee map. The game loads the map. Melee maps have the following dependencies: Liberty (Mod) and Liberty Multi (Mod). I think the game engine will load the data from Liberty (Mod) first, then from Liberty Multi (Mod). For data that exists in both dependencies, the engine will use the one from Liberty Multi (Mod). Is this correct? Liberty Multi (Mod) is updated with each patch of Starcraft 2. Does the game engine load just the latest version of Liberty Multi (Mod)? Or does the game engine load all the versions and overwrite duplicate data with the latest version?

  • I need to move an entity to the mouse location after I right-click

    - by I.Hristov
    Well, I've read the related questions and answers but still can't find a way to move my champion to the mouse position after a right-button mouse click. I use this code at the top:

        float speed = (float)1/3;

    And this is in my void Update:

        // check if right mouse button is clicked
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            // gets the position of the mouse in mousePosition
            mousePosition = new Vector2(mouse.X, mouse.Y);

            // gets the current position of the champion (the drawRectangle)
            currentChampionPosition = new Vector2(drawRectangle.X, drawRectangle.Y);

            // move champion to mouse position:
            // handles the case when the mouse position is really close to the current position
            if (Math.Abs(currentChampionPosition.X - mousePosition.X) <= speed &&
                Math.Abs(currentChampionPosition.Y - mousePosition.Y) <= speed)
            {
                drawRectangle.X = (int)mousePosition.X;
                drawRectangle.Y = (int)mousePosition.Y;
            }
            else if (currentChampionPosition != mousePosition)
            {
                drawRectangle.X += (int)((mousePosition.X - currentChampionPosition.X) * speed);
                drawRectangle.Y += (int)((mousePosition.Y - currentChampionPosition.Y) * speed);
            }
        }
        previousButtonState = mouse.RightButton;

    What that code does at the moment is, on a click, bring the sprite 1/3 of the distance to the mouse, but only once. How do I make it move consistently all the time? It seems I am not updating the sprite at all.

    EDIT I added the Vector2 as Nick said, and with speed changed to 50 it should be OK. I tried it with if ButtonState.Pressed and it works while pressing the button. Thanks. However, I wanted it to start moving on a single mouse click and keep moving until it reaches the mousePosition. The edit of Nick's post says to create another Vector2, but I already have the one called mousePosition, and I'm not sure how to use another one.

        // gets a Vector2 direction to move (by Nick Wilson)
        Vector2 direction = mousePosition - currentChampionPosition;

        // make the direction vector a unit vector
        direction.Normalize();

        // multiply with speed (number of pixels)
        direction *= speed;

        // move champion to mouse position
        if (currentChampionPosition != mousePosition)
        {
            drawRectangle.X += (int)(direction.X);
            drawRectangle.Y += (int)(direction.Y);
        }
    }
    previousButtonState = mouse.RightButton;
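
    A minimal sketch of one way to keep the movement going (field names such as targetPosition and position are assumptions, not the poster's code): store the click position once as a target, then step toward it every Update until the champion arrives.

        void UpdateMovement(MouseState mouse)
        {
            if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
                targetPosition = new Vector2(mouse.X, mouse.Y);   // remember the click once

            Vector2 toTarget = targetPosition - position;
            if (toTarget.Length() <= speed)
            {
                position = targetPosition;                        // close enough: snap and stop
            }
            else
            {
                toTarget.Normalize();
                position += toTarget * speed;                     // constant pixels per update
            }

            drawRectangle.X = (int)position.X;                    // keep the draw rectangle in sync
            drawRectangle.Y = (int)position.Y;
            previousButtonState = mouse.RightButton;
        }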

  • 2D Grid Map Connectivity Check (avoiding stack overflow)

    - by SombreErmine
    I am trying to create a routine in C++ that will run before a more expensive A* algorithm and check whether two nodes on a 2D grid map are connected or not. What I need to know is a good way to accomplish this sequentially rather than recursively, to avoid overflowing the stack.

    What I've done already: I've implemented this with ease using a recursive algorithm; however, depending on the situation it will generate a stack overflow. Upon researching this, I've come to the conclusion that it is overflowing the stack because of too many recursive function calls; I am sure that my recursion does not enter an infinite loop. I generate connected sets at the beginning of the level, and then I use those connected sets to determine connectivity on the fly later. Basically, the generating algorithm runs left-to-right, top-to-bottom. It skips wall nodes and marks them as visited. Whenever it reaches a walkable node, it recursively checks in all four cardinal directions for connected walkable nodes. Every node that gets checked is marked as visited so it isn't handled twice. After checking a node, it is added to either a walls set, a doors set, or one of multiple walkable-node sets. Once it fills that area, it continues the original left-to-right, top-to-bottom loop, skipping already-visited nodes. I've also looked into flood-fill algorithms, but I can't make sense of the sequential versions and how to adapt them.

    Can anyone suggest a better way to accomplish this without causing a stack overflow? The only way I can think of is to do the left-to-right, top-to-bottom loop generating connected sets on a row basis, then check the previous row to see if any of the connected sets are connected, and join the sets that are. I haven't decided on the best data structures to use for that, though. I also just thought about having the connected sets pre-generated outside the game, but I wouldn't know where to start with creating a tool for that. Any help is appreciated. Thanks!
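
    A sketch of the usual stack-safe replacement, shown in C# for consistency with the other snippets on this page (the same structure ports directly to C++ with std::queue); array and parameter names are illustrative. The recursion becomes a breadth-first fill driven by an explicit queue, so memory use depends on the frontier size rather than the call-stack depth:

        static int LabelRegion(bool[,] walkable, int[,] region, int startX, int startY, int label)
        {
            int w = walkable.GetLength(0), h = walkable.GetLength(1), count = 0;
            var queue = new Queue<(int x, int y)>();
            region[startX, startY] = label;                  // mark before enqueueing
            queue.Enqueue((startX, startY));
            int[] dx = { 1, -1, 0, 0 };
            int[] dy = { 0, 0, 1, -1 };
            while (queue.Count > 0)
            {
                (int x, int y) cell = queue.Dequeue();
                count++;
                for (int d = 0; d < 4; d++)
                {
                    int nx = cell.x + dx[d], ny = cell.y + dy[d];
                    if (nx >= 0 && ny >= 0 && nx < w && ny < h &&
                        walkable[nx, ny] && region[nx, ny] == 0)
                    {
                        region[nx, ny] = label;              // mark here so nothing is queued twice
                        queue.Enqueue((nx, ny));
                    }
                }
            }
            return count;                                    // cells in this connected region
        }

    Running this once per unlabelled walkable cell gives every cell a region label; two nodes are connected exactly when their labels match, so the pre-A* check becomes a single comparison.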

  • JMonkeyEngine display a spatial in a Nifty GUI interface

    - by Yanick Rochon
    I want to display a spatial (or the rendering of a spatial/scene) in my HUD interface. I'm really not sure how to go about this. I have searched the documentation, but all the queries I tried yielded no results, and all I could find about images is that one can specify one with the setBackgroundImage method in the builder, and setImage from the ImageRenderer class. The latter takes a String or a NiftyImage, but I'm not sure how to create one without loading an image file. Any help in understanding this (if it is even possible) is appreciated. Thanks!

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (1 or more players) queuing together. We want to make two opposing teams containing the same number of players each, while keeping the groups together. At the same time we want to make both teams' average ranking as close as possible. Now also consider that we have, as a working set, the subset of groups currently queuing within a given ranking range. For example, let's say we have the following groups, ordered by queuing time:

        Id  playerCount  totalRank  avgRank
        0   3            126        42
        1   2             60        30
        2   1             25        25
        3   2             80        40
        4   1             40        40
        5   1             20        20
        6   3            150        50

    For this specific subset, the expected output should ideally be:

        team1: 0, 1     (total: 186)
        team2: 2, 5, 6  (total: 195)

    Up to now the solution I have been using is to balance the teams by having each team pick the group with the highest ranking within the subset, turn by turn. The team that picks is the one with the currently lower average rank, unless one team is already full. If one team is already full, the other team tries to complete itself with groups that would make the rank gap as small as possible. This solution turns out to have issues with frequent edge cases, and I'm looking for a better solution, or some fine-tuning that could be made. In most cases, players seem to want teams of 5 people and queue in groups of 2. Our average subset when two teams of 5 are chosen is made of about 14 players, if that is of any help.
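
    For comparison, here is a rough greedy sketch (Group is an assumed type with PlayerCount and TotalRank; this is not a complete matchmaker): place the largest groups first, each on the team that still has room and currently trails in total rank. A fuller solution would generate several candidate splits this way (for example with different tie-breaking) and keep the one with the smallest rank gap, which is what smooths out the edge cases.

        static (List<Group> a, List<Group> b) Split(IEnumerable<Group> subset, int teamSize)
        {
            var teamA = new List<Group>();
            var teamB = new List<Group>();
            foreach (var g in subset.OrderByDescending(x => x.PlayerCount))
            {
                bool aFits = teamA.Sum(x => x.PlayerCount) + g.PlayerCount <= teamSize;
                bool bFits = teamB.Sum(x => x.PlayerCount) + g.PlayerCount <= teamSize;
                if (aFits && (!bFits || teamA.Sum(x => x.TotalRank) <= teamB.Sum(x => x.TotalRank)))
                    teamA.Add(g);
                else if (bFits)
                    teamB.Add(g);
                // a group that fits on neither team simply waits for the next matchmaking pass
            }
            return (teamA, teamB);
        }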

  • C# and Unity - Learning to Develop a game by developing the game I want to develop

    - by 97s
    So I am pretty new to C#. I have some Python and JavaScript experience, but nothing substantial. I have read a lot about C# and Unity and I know they are the tools I want to use. My question is: should I be reading books about C#, or should I just start hacking in Unity and piecing the game together part by part? Right now I am going through the book Head First C#, and it is very good, but I taught myself web design and JavaScript by just creating and hacking until I got the results I wanted, then looking at other people's code to see how they did it and improving my own. The issue is that with the browser I got immediate results and it was all under one roof, whereas developing games is a completely different monster. I am just wondering if my time would be better spent buying a book that uses C# to teach you Unity and doing that instead, or if the time spent on the Head First book is going to be valuable. Thanks a ton; I am having difficulty using my time well and I just want to maximize it, as I don't have a lot of free time. Edit: Hopefully this isn't too broad? If it is, I will delete and go elsewhere, just let me know. Thanks.

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I wake up with a clear vision, but sadly my laptop card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable: if you pass in an offset - say, as a uniform - you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions:

    Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Are there papers, demos, or anything I can look at?

    Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar call to draw all your terrain tiles in one go, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?

    Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmaps and not much else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?

    Is mipmapping the displacement map going to work, on current or future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?

  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first: When my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping while moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem? Source code:

        public void Update()
        {
            Position += Velocity;
            Velocity += World.Gravity;
            List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);
            for (int i = 0; i < toCheck.Count; i++)
            {
                SSSPBody body = toCheck[i];
                body.Test.Color = Color.White;
                if (body != this && body.Static)
                {
                    float left = (body.CornerMin.X - CornerMax.X);
                    float right = (body.CornerMax.X - CornerMin.X);
                    float top = (body.CornerMin.Y - CornerMax.Y);
                    float bottom = (body.CornerMax.Y - CornerMin.Y);

                    if (SSSPUtils.AABBIsOverlapping(this, body))
                    {
                        body.Test.Color = Color.Yellow;
                        Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                        Position += overlapVector;
                    }
                    if (SSSPUtils.AABBIsCollidingTop(this, body))
                    {
                        if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                            (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                        {
                            body.Test.Color = Color.Red;
                            Velocity = new Vector2(Velocity.X, 0);
                        }
                    }
                }
            }
        }

        public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
                return true;
            return false;
        }

        public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
        {
            Vector2 result = new Vector2(0, 0);
            if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
                return result;

            if (Math.Abs(mLeft) < mRight) result.X = mLeft; else result.X = mRight;
            if (Math.Abs(mTop) < mBottom) result.Y = mTop; else result.Y = mBottom;
            if (Math.Abs(result.X) < Math.Abs(result.Y)) result.Y = 0; else result.X = 0;

            return result;
        }
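
    A minimal sketch of the usual per-axis fix (Bounds(), position and velocity are illustrative, not the poster's API): integrate and resolve the X axis completely, then do the same for Y. Because each axis is pushed out on its own, a horizontal overlap can never cancel the vertical part of a jump.

        void MoveAndCollide(List<Rectangle> solids)
        {
            // X axis first
            position.X += velocity.X;
            foreach (Rectangle solid in solids)
            {
                Rectangle box = Bounds();                     // AABB at the current position
                if (!box.Intersects(solid)) continue;
                position.X += velocity.X > 0 ? solid.Left - box.Right : solid.Right - box.Left;
                velocity.X = 0;
            }

            // then Y
            position.Y += velocity.Y;
            foreach (Rectangle solid in solids)
            {
                Rectangle box = Bounds();
                if (!box.Intersects(solid)) continue;
                position.Y += velocity.Y > 0 ? solid.Top - box.Bottom : solid.Bottom - box.Top;
                velocity.Y = 0;
            }
        }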

  • Rotate canvas around its center based on user touch - Android

    - by Ganapathy
    I want to rotate the canvas circularly around its center axis based on user touch, like an old phone dialer. I want to rotate around the center, but it's rotating around the top-left corner, so I can only see a quarter of the rotated image. Any ideas? I have tried the following:

        protected void onDraw(Canvas canvas) {
            canvas.save();
            // do my rotation
            canvas.rotate(rotation, 0, 0);
            canvas.drawBitmap(((BitmapDrawable) d).getBitmap(), 0, 0, p);
            canvas.restore();
        }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            float x = e.getX();
            float y = e.getY();
            updateRotation(x, y);
            mPreviousX = x;
            mPreviousY = y;
            invalidate();
            return true;   // added so the snippet compiles; the original omitted a return value
        }

        private void updateRotation(float x, float y) {
            double r = Math.atan2(x - centerX, centerY - y);
            rotation = (int) Math.toDegrees(r);
        }

  • Pixel Collision - Detecting corners

    - by Milkboat
    How would I go about detecting the corners of a texture when I use pixel collision detection? I read about corner collision with rectangles, but I am unsure how to adapt it to my situation. Right now my map is tile-based and I do rectangular collision until the player intersects a blocked tile; then I switch to pixel collision. The effect I would like to achieve is that when the player hits the corner of an object, he is pushed around the side so he doesn't just hit the edge and stop. Any ideas?

  • Method for spawning enemies according to player score and game time

    - by Sun
    I'm making a top-down shooter and want to scale the difficulty of the game according to the current score and how much time has passed. Along with this, I want to spawn enemies in different patterns and increase the intervals at which these enemies appear. I'm going for an effect similar to Geometry Wars. However, I can't think of a way to do this other than having multiple if-else statements, e.g.:

        if (score > 1000)
        {
            // spawn x amount of enemies
        }
        else if (score > 10000)
        {
            // spawn x amount of enemy type 1 & 2
        }
        else if (score > 15000)
        {
            // spawn x amount of enemy type 1 & 2 & 3
        }
        else if (score > 25000)
        {
            // spawn x amount of enemy type 1 & 2 & 3
            // create patterns with enemies
        }
        // ...etc

    What would be a better method of spawning enemies as I have described?
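
    Note that, as written, the chain can never reach the later branches, because score > 1000 matches first for every larger score. A data-driven table avoids both that trap and the growing if/else ladder; a small sketch (types and numbers are illustrative):

        class SpawnWave
        {
            public int ScoreThreshold;
            public int[] EnemyTypes;    // which enemy types may appear
            public int Count;           // how many to spawn per interval
        }

        List<SpawnWave> waves = new List<SpawnWave>
        {
            new SpawnWave { ScoreThreshold = 0,     EnemyTypes = new[] { 1 },       Count = 3 },
            new SpawnWave { ScoreThreshold = 1000,  EnemyTypes = new[] { 1, 2 },    Count = 5 },
            new SpawnWave { ScoreThreshold = 15000, EnemyTypes = new[] { 1, 2, 3 }, Count = 8 },
        };

        // pick the highest step the current score (or elapsed time) has reached
        SpawnWave current = waves.Last(w => score >= w.ScoreThreshold);
        // then spawn 'current.Count' enemies chosen from 'current.EnemyTypes' this interval

    The same table can carry spawn patterns or interval lengths, so scaling the difficulty becomes a matter of editing data rather than code.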

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve), using OpenGL ES 2 with C++. Currently, I'm ordering elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment:

        1. Order all elements of my scene back to front.
        2. Send the ordered list of elements to the renderer.
        3. The renderer looks in its batch manager to see whether a batch exists for the given element and its material.
        4. If no batch exists: create a new one.
        5. If a batch exists for this material: add the sprite to the batch.
        6. Compute a big mesh with all sprites for each batch (1 material type = 1 batch).
        7. When all batches are ready, the batch manager computes draw commands for the renderer.
        8. The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).

    Image with my problem here: Explication here

    But I've got a problem because objects in one batch can be behind objects in another batch. How can I handle this? Thanks!

  • Any open source editor to make video games online without programming knowledge?

    - by chelder
    With Scratch we can create video games online, from its web platform, and publish them on the same site. I could download its source code and use it, as many others already did (see the Scratch modifications). Unfortunately, we need programming knowledge to use it; actually, Scratch is mainly for teaching kids to code. I also found editors like Construct 2, GameSalad Creator and many others (just search Google for: create a video game without programming). With those editors we can create video games without coding. Unfortunately they are neither open source nor web-based; they need to be installed on Windows or Mac. Do you know of an editor like Construct 2 or GameSalad Creator that is open source and can run from a web server? Maybe some HTML5 game engine can do this?

  • Animation API vs frame animation

    - by Max
    I'm pretty far down the road in my game right now, closing in on the end, and I'm adding little tweaks here and there. I used custom frame animation: a single image with many versions of my sprite on it, and I control which part of the image to show using rectangles. But I'm starting to think that maybe I should have used the Animation API that comes with Android instead. Will this affect my performance in a negative way? Can I still use rectangles to draw my bitmap? Could I add effects from the Animation API, such as the fade-out effect, to my current frame-controlled animation? That would mean I won't have to change my current code. I want some of my animations to fade out, and I just noticed that using the Animation API makes things a lot easier. But needless to say, I would prefer not having to change all my animation code. I'm bad at explaining, so I'll show a bit of how I do my animation:

        private static final int BMP_ROWS = 1;    // I use top-view so my sprite only needs 1 direction
        private static final int BMP_COLUMNS = 3;

        public void update(GameControls controls) {
            if (sprite.isMoving) {
                currentFrame = ++currentFrame % BMP_COLUMNS;
            } else {
                this.setFrame(1);
            }
        }

        public void draw(Canvas canvas, int x, int y, float angle) {
            this.x = x;
            this.y = y;
            canvas.save();
            canvas.rotate(angle, x + width / 2, y + height / 2);
            int srcX = currentFrame * width;
            int srcY = 0 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
            canvas.restore();
        }

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to look up the reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a render target with six flat faces. So my question is: given the 3D world position of an arbitrary object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way, or is there another approach?
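
    A hedged XNA 4.0 sketch of the re-render step (drawScene is an assumed callback that draws everything except the reflective object; the per-face up vectors follow the usual D3D cube-map convention and may need flipping depending on handedness): render the six faces from the object's world position with a 90-degree field of view and an aspect ratio of 1, so that a world-space reflection vector looks up the matching direction.

        void RenderEnvironmentMap(GraphicsDevice device, RenderTargetCube cube,
                                  Vector3 objectPosition, Action<Matrix, Matrix> drawScene)
        {
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);
            var faces = new[]
            {
                Tuple.Create(CubeMapFace.PositiveX, Vector3.Right,    Vector3.Up),
                Tuple.Create(CubeMapFace.NegativeX, Vector3.Left,     Vector3.Up),
                Tuple.Create(CubeMapFace.PositiveY, Vector3.Up,       Vector3.Forward),
                Tuple.Create(CubeMapFace.NegativeY, Vector3.Down,     Vector3.Backward),
                Tuple.Create(CubeMapFace.PositiveZ, Vector3.Backward, Vector3.Up),
                Tuple.Create(CubeMapFace.NegativeZ, Vector3.Forward,  Vector3.Up),
            };
            foreach (var face in faces)
            {
                device.SetRenderTarget(cube, face.Item1);
                device.Clear(Color.Black);
                Matrix view = Matrix.CreateLookAt(objectPosition, objectPosition + face.Item2, face.Item3);
                drawScene(view, projection);                 // draw the scene without the reflector itself
            }
            device.SetRenderTarget(null);
        }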

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multifractal. I don't understand how to generate height maps such that the vertices of a terrain patch can be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into (u,v) coordinates, determining the pixel position directly from the (u,v) coordinates, and determining the grayscale color from the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem. First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex resolution of each terrain patch is only 16x16 - meaning that I don't have any vertices to sample for the RMF for most of the pixels. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just directly displace vertices as I currently do). I am pretty much following this paper very closely, and this is essentially the part I am having trouble with:

        Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the heightmap.

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels if vertices cannot be used.
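
    One way to read the quoted step (sketched with assumed names: patch corner positions on the cube face, a RidgedMultifractal noise function returning [0,1], and a flat heightMap array): every pixel of the 192x192 map gets its own point on the sphere by interpolating across the patch's corners and normalizing, so no vertex data is needed at all - the 16x16 mesh and the height/normal maps simply sample the same continuous function at different densities.

        for (int y = 0; y < 192; y++)
            for (int x = 0; x < 192; x++)
            {
                float u = x / 191f, v = y / 191f;                // pixel -> patch-local [0,1]
                Vector3 cubePos = Vector3.Lerp(
                    Vector3.Lerp(patch.Corner00, patch.Corner10, u),
                    Vector3.Lerp(patch.Corner01, patch.Corner11, u), v);
                Vector3 spherePos = Vector3.Normalize(cubePos);  // cube face -> unit sphere
                float height = RidgedMultifractal(spherePos);    // assumed noise call, returns [0,1]
                heightMap[y * 192 + x] = height;
            }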

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered the problem described below. The engine has three threads in total: script, render, and audio. Each frame, we update these three threads simultaneously. As the threads update themselves, they produce tasks and put them into a public storage area. When all the threads finish their update, each thread goes and copies the tasks for itself, one by one. After all the threads finish their task copying, we make the threads process those tasks and update the threads simultaneously as described before. So this is the general idea of the task-scheduling part of our engine. All of the task-scheduling part works well, but here's the problem. Taking the camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 )       -- won't work now, because the render thread will
                                            -- process the task at the beginning of the next frame
        local newPos = camera:getPosition() -- still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. Is there a way to solve this problem, or did we build the task-scheduling part the wrong way? Thanks for your answers :)
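
    A small sketch of one common fix (RenderCamera, Vector3 and the queue are stand-ins for the engine's own types): the script-side proxy applies the change to its own cached copy immediately and also queues the command for the render thread, so a getPosition later in the same frame already returns the pending value.

        class CameraProxy
        {
            private Vector3 cachedPosition;                          // the script thread's view of the state
            private readonly RenderCamera realCamera;                // owned by the render thread
            private readonly ConcurrentQueue<Action> renderCommands; // drained at the start of the next frame

            public CameraProxy(RenderCamera realCamera, ConcurrentQueue<Action> renderCommands)
            {
                this.realCamera = realCamera;
                this.renderCommands = renderCommands;
            }

            public Vector3 GetPosition()
            {
                return cachedPosition;                               // no cross-thread read needed
            }

            public void SetPosition(Vector3 value)
            {
                cachedPosition = value;                              // visible to script code right away
                renderCommands.Enqueue(() => realCamera.Position = value); // applied by the render thread
            }
        }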

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my custom OpenGL engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on: I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera and at Z=0 there is a plane showing me my tiles. Everything is working correctly, but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show; otherwise they are there 99% of the time in various patterns, depending on the zoom and camera parameters such as field of view. Here's a screenshot of the artifact: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE. I've looked through many articles on the internet and nothing helped. I tried UV correction so the UVs fall at half a texel, rather than at the end of the texel, to prevent interpolating with the neighbour value (which is transparency). I tried debugging my geometry and verified that with no texture and a random color in each tile, I don't see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset to both the UVs and the vertices to see if the gaps would disappear. I disabled multisampling too. Nothing has fixed it so far. Thanks.
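
    Since the half-texel trick has already been tried, the usually reliable alternative is to bake a gutter into the atlas: give every tile a 1-pixel border that duplicates its own edge texels and point the UVs at the inner region, so any bleed samples the tile's own colour. A hedged sketch of the rebuild step, written with XNA texture types purely for brevity (the engine here is custom OpenGL, and only the indexing logic matters):

        Texture2D PadTileset(GraphicsDevice device, Texture2D atlas, int tileSize)
        {
            int tilesX = atlas.Width / tileSize, tilesY = atlas.Height / tileSize;
            int padded = tileSize + 2;                                // 1-pixel gutter on every side
            Color[] src = new Color[atlas.Width * atlas.Height];
            atlas.GetData(src);
            Color[] dst = new Color[tilesX * padded * tilesY * padded];
            for (int ty = 0; ty < tilesY; ty++)
                for (int tx = 0; tx < tilesX; tx++)
                    for (int y = 0; y < padded; y++)
                        for (int x = 0; x < padded; x++)
                        {
                            // clamp back into the source tile: the border rows/columns repeat the edge
                            int sx = tx * tileSize + Math.Min(Math.Max(x - 1, 0), tileSize - 1);
                            int sy = ty * tileSize + Math.Min(Math.Max(y - 1, 0), tileSize - 1);
                            dst[(ty * padded + y) * tilesX * padded + tx * padded + x] = src[sy * atlas.Width + sx];
                        }
            Texture2D result = new Texture2D(device, tilesX * padded, tilesY * padded);
            result.SetData(dst);
            return result;
        }

    The tile UVs then cover the inner tileSize x tileSize region at offset (1,1) inside each padded cell.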

  • Light shaped like a line

    - by Michael
    I am trying to figure out how line-shaped lights fit into the standard point light/spotlight/directional light scheme. The way I see it, there are two options:

        1. Seed the line with regular point lights and just deal with the artifacts. Easy, but seems wasteful.
        2. Make the line some kind of emissive material and apply a bloom effect. Sounds like it could work, but I can't test it in my engine yet.

    Is there a standard way to do this? Or for non-point lights in general?

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. See here: Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two way communication between Systems and my HUD. I know it's recommended (by some people, anyways) to take your HUD out of the ECS. Is that appropriate in my case?

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broad phase:

    Unsorted arrays: check every object against every object - O(n^2) time, O(log n) space. It's so slow that it's useless if n isn't really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++)
                narrowPhase(i, j);
        };

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions - O(n^1.5) for 2D and O(n^1.67) for 3D - and O(n) space. This assumes the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check a bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
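
    A middle ground worth mentioning is sweep and prune: O(n) extra space, a sort per frame, and in practice close to linear time when objects are spread out (the worst case is still O(n^2) if everything overlaps on one axis). A minimal 2D sketch, with Box as an assumed type exposing MinX/MaxX/MinY/MaxY:

        static void BroadPhase(List<Box> boxes, Action<Box, Box> narrowPhase)
        {
            boxes.Sort((a, b) => a.MinX.CompareTo(b.MinX));          // sort once by minimum X
            for (int i = 0; i < boxes.Count; i++)
                for (int j = i + 1; j < boxes.Count && boxes[j].MinX <= boxes[i].MaxX; j++)
                    if (boxes[i].MinY <= boxes[j].MaxY && boxes[j].MinY <= boxes[i].MaxY)
                        narrowPhase(boxes[i], boxes[j]);             // X and Y intervals both overlap
        }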

  • XNA 4.0 SpriteFont not displaying all Characters

    - by Iain Brown
    I'm looking for a little help with using SpriteFont in my XNA 4.0 game. The problem is that I am drawing the string "This is a test", but all that's displayed on the screen is "This is st", so the "a te" is missing. The space is there for the characters, but the letters are not. The code I am using is:

        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        spriteBatch.DrawString(font, "this is a test", new Vector2(692, 372), Color.White);
        spriteBatch.Draw(texture, new Rectangle(0, 0, 100, 100), Color.White);
        spriteBatch.End();

    Any help with this would be great!

  • How do I implement smooth movement in a Box2D platform game?

    - by Romeo
    I have implemented a character in JBox2D which moves with the help of a wheel rotating at the bottom of it. The movement is the best result I've had till now, but it's a little glitchy when the character stands on an edge. So I am wondering whether I should use five smaller wheels instead of one big wheel. The wheel (or wheels) will not be visible in the finished product; right now they are drawn for debugging. Here is a video. Is there a better way to do this using JBox2D?
