Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    Ok, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA, which I can do fine. It is a simple round circle which I made in Photoshop. The problem is that the edges show up rough when I draw it on the screen, even at the exact original size; the anti-aliasing is missing. I'm sure I'm missing something very simple:

        GraphicsDevice.Clear(Color.Black);
        // TODO: Add your drawing code here
        spriteBatch.Begin();
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    (I couldn't post a picture because I'm a first-time poster, but my smooth PNG circle has rough edges.) I found that if I began the batch with

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied);

    I get a smooth image when it is drawn at the same size as the original PNG. But if I scale that image up or down, the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without the rough edges?
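
    A possible fix, sketched here as an addition (not from the original post): XNA 4.0's content pipeline premultiplies alpha by default, so drawing with BlendState.AlphaBlend plus linear filtering usually keeps edges smooth under scaling. The parameter choices below are assumptions:

        // Sketch: premultiplied-alpha blending plus linear filtering for smooth scaling
        spriteBatch.Begin(SpriteSortMode.Deferred,
                          BlendState.AlphaBlend,       // expects premultiplied alpha
                          SamplerState.LinearClamp,    // interpolate samples when scaling
                          DepthStencilState.None,
                          RasterizerState.CullCounterClockwise);
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    For heavy downscaling, enabling mipmap generation on the texture in the content processor also reduces edge shimmer.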

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls, as well as painting. In v1.0, ball vs ball collision detection and resolution are handled well. In the next version (v2) I'm trying to add edge collision handling, and I'm having some problems; maybe you will be able to help me. All the v2 branch source code is in the github repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bugs I will be talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js: in the method resolveEdgeCollision, the bounce angle is inconsistent; and in the method checkEdgeCollision, if the ball's velocity (the distance it travels each frame) is greater than its diameter, it will eventually pass through an edge without triggering any collision. Any ideas?
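
    One way to address the tunneling, sketched under the assumption that Ball.js exposes per-ball x, y, vx, vy and radius plus the existing discrete checks (the names below are hypothetical): split each frame's motion into substeps no longer than the radius, so a fast ball cannot jump past an edge between two checks.

        // Sketch: substepped movement so fast balls cannot tunnel through edges
        function moveWithSubsteps(ball, edges, dt) {
          var dist = Math.sqrt(ball.vx * ball.vx + ball.vy * ball.vy) * dt;
          var steps = Math.max(1, Math.ceil(dist / ball.radius));
          for (var i = 0; i < steps; i++) {
            ball.x += (ball.vx * dt) / steps;
            ball.y += (ball.vy * dt) / steps;
            for (var j = 0; j < edges.length; j++) {
              if (ball.checkEdgeCollision(edges[j])) {
                ball.resolveEdgeCollision(edges[j]); // reflect velocity about the edge normal
                return;
              }
            }
          }
        }

    A swept (continuous) circle-vs-segment test is the more precise alternative, but substepping is the smaller change to an existing discrete engine.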

    Read the article

  • About floating-point precision, and why we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision in large worlds. This article explains what goes on behind the scenes and offers the obvious alternative: fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point, if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are fixed-point operations so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
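
    For concreteness, a minimal sketch of the quoted fact (an addition, not from the original question): a 64-bit integer counting micrometers spans about +/- 9.2e9 km, comfortably past Pluto's 7.4e9 km, with uniform resolution everywhere, which is exactly what floating point's position-dependent precision cannot offer.

        // Sketch: 64-bit fixed-point world coordinate with 1 micrometer resolution
        #include <cstdint>

        struct FixedPos {
            int64_t um;  // micrometers; int64 range ~ +/- 9.2e18 um = +/- 9.2e9 km
            static FixedPos fromMeters(double m) { return { static_cast<int64_t>(m * 1e6) }; }
            double toMeters() const { return um * 1e-6; }  // convert near the camera only
        };

    Engines typically keep positions like this (or camera-relative offsets) and convert to float only for rendering, because GPUs and most math libraries remain float-based.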

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command: glVertexAttribPointer(index, size, type, normalize, stride, pointer); i.e., when would normalize == GL_TRUE? In what situations, and why, would you choose to let the GPU do the conversion instead of preprocessing the data? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
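
    One common case (an illustrative sketch, not from the original question): storing vertex colors as 4 unsigned bytes and letting the GPU expand 0..255 to 0.0..1.0, which shrinks the vertex and avoids any CPU-side conversion.

        /* Sketch: a normalized unsigned-byte color attribute */
        #include <stddef.h>

        typedef struct { float position[3]; unsigned char color[4]; } Vertex;

        glVertexAttribPointer(colorLoc,            /* attribute location (assumed bound) */
                              4, GL_UNSIGNED_BYTE,
                              GL_TRUE,             /* normalize: 0..255 -> 0.0..1.0 */
                              sizeof(Vertex),
                              (const void*)offsetof(Vertex, color));

    The same applies to normals packed as GL_BYTE or GL_SHORT; normalization maps them into [-1, 1] for free.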

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking...

    The framebuffer objects:

    - FBO 1 has a color attachment and a depth attachment.
    - FBO 2 has a color attachment.

    The render passes:

    1. Render g-buffer: normals and depth (used by the outline and DoF blur shaders); output to FBO 1.
    2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO 2. (All can render via a single fragment shader, I think.)
    3. (optional) DoF-blur the scene; output to the default framebuffer. Otherwise, render FBO 2 directly to the default framebuffer.
    4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes? (A sketch of the pass ordering follows below.)
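
    A sketch of that ordering in GL calls (an addition; the shader and draw helpers are assumed, not from the original post):

        /* Sketch: the three passes in order; helper functions are hypothetical */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo1);   /* pass 1: normals + depth g-buffer */
        drawScene(gbufferShader);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo2);   /* pass 2: solid + outlines + fog,  */
        drawScene(toonShader);                     /* sampling fbo1's attachments      */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);      /* pass 3: composite to the screen  */
        drawFullscreenQuad(dofShader);             /* samples fbo2's color texture     */

    The order looks viable; the main hazard with such setups is sampling an attachment that is simultaneously being written, which the separation into two FBOs here already avoids.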

    Read the article

  • Rain drops on screen

    - by user1075940
    I am trying to make a simple rain-drop effect on screen, something like this: http://fc00.deviantart.net/fs20/f/2007/302/5/6/Rain_drops_by_rockraikar.png My idea is to create small drop-shaped normal textures, randomly place a few on screen, apply texture perturbation, and mix the result with the current frame's pixels. Here are my questions: Does this idea even make sense? How do professionals do this effect? (Everything from text to code will be appreciated.) And how do I pass the pixels of an already-rendered frame to a shader?
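
    On the last question: the usual route is render-to-texture, i.e. draw the frame into an FBO-attached texture and feed that texture to a post-process shader. A minimal fragment-shader sketch of the perturbation idea (an addition; uniform names are assumptions):

        // Sketch: offset the scene lookup by a drop normal map to fake refraction
        uniform sampler2D sceneTex;       // the already-rendered frame
        uniform sampler2D dropNormalTex;  // drop-shaped normal texture
        varying vec2 uv;

        void main() {
            vec3 n = texture2D(dropNormalTex, uv).rgb * 2.0 - 1.0; // unpack normal
            vec2 offset = n.xy * 0.02;                             // perturbation strength
            gl_FragColor = texture2D(sceneTex, uv + offset);
        }

    This is broadly how the effect is done in practice; fancier versions animate the drops and add a specular highlight derived from the same normal.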

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a pacman-styled dungeon crawler, using the free Oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, and the player cannot control it yet. I wanted to add collision, and I was planning to do it by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except wall tiles) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor(newPos.x / 32);
            int col = (int) Math.floor(newPos.y / 32);
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    With this code the character only moves one tile; if I reduce the col value by two, it moves two more tiles. I think the problem is around indexing, but I'm totally confused, because zero in libGDX's coordinate system is in the bottom-left corner of the screen, and I don't know whether the tiles array is indexed the same way. The size of the map is 19x21 tiles. (The map layout was shown in a screenshot, with the player's starting position marked in blue; the image has not survived.)
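
    A hedged guess at the fix: a Tiled layer's tiles array is usually indexed [row][col] with row 0 at the top of the map, while libGDX world coordinates grow upward from the bottom-left, so the row should come from y (flipped) and the column from x. mapHeightInTiles (21 here) is an assumed field:

        // Sketch: swap the axes and flip the row to match Tiled's top-left origin
        private boolean collided(Vector2 newPos) {
            int col = (int) (newPos.x / 32);                          // x selects the column
            int row = (mapHeightInTiles - 1) - (int) (newPos.y / 32); // flip the y axis
            return tiledMap.layers.get(1).tiles[row][col] != 0;
        }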

    Read the article

  • How can I solve this SAT direct corner intersection edge case?

    - by ssb
    I have a working SAT implementation, but I am running into a problem where direct collisions at a corner do not work for tiled surfaces. That is, the player clips on the surface when moving in a certain direction because it gets hung up on one of the tiles. So, for example, if I walk across a floor while holding both down and left, the player will stop at the seam with the next shape, because the resolution pushes the player out of the right side of the next tile rather than out of its top. This illustration shows what I mean: the top block will translate right first and then up. I have checked here and here, which are helpful, but they don't address what I should do in a situation where I don't have a tile-based world. My use of the term "tile" above isn't really accurate, since what I'm doing is manually placing square obstacles next to each other, not assigning them spots on a grid. What can I do to fix this?
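
    One commonly suggested remedy, sketched here with hypothetical types (Contact, FindShallowestOverlap are not from the original post): resolve the shallowest penetration first and then re-query, so the corner contact against the side of an interior square is re-tested after the push out of the floor has already separated the shapes, and the phantom horizontal stop never occurs.

        // Sketch: iterative resolution, shallowest overlap first
        for (int i = 0; i < 8; i++)                   // small cap against pathological loops
        {
            Contact c = FindShallowestOverlap(player, obstacles); // hypothetical helper
            if (c == null)
                break;                                // separated: done
            player.Position += c.Normal * c.Depth;    // minimum-translation push-out
        }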

    Read the article

  • Backface culling error

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has an error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason?
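
    A hedged diagnosis (an addition, not from the original post): with a perspective camera the culling direction is not constant; a triangle off to the side can face away from (0, 0, 1) yet still be visible, which produces exactly this kind of partial mis-culling. Testing against the vector from the camera to the triangle usually fixes it:

        // Sketch: per-triangle view vector; camera sits at the origin.
        // The comparison sign depends on your winding convention.
        Vector3F to_tri(t.v1.x, t.v1.y, t.v1.z);  // direction from camera to triangle
        to_tri.Normalize();
        float dot = Dot(to_tri, normal);
        if (dot <= 0)
            to_remove.push_back(t);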

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely inside the edge. Here's my code (where size is the size in screen space, i.e. the resolution of the client area of the window I'm rendering into). Any suggestions?

        auto&& size = GetDimensions();
        D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
        D3DCALL(device->SetViewport(&vp));
        static const int slices = 10;
        std::vector<Object*> result;
        for (int i = 0; i < slices; i++) {
            D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = {
                D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices),
                D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices),
                D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
                D3DXVECTOR3(0, 0, static_cast<float>(i) / slices),
                D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices)
            };
            D3DXMATRIXA16 Identity;
            D3DXMatrixIdentity(&Identity);
            D3DXVec3UnprojectArray(
                WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
                WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
                &vp, &Projection, &View, &Identity, 8
            );
            Math::AABB Frustrum;
            auto world_begin = std::begin(WorldSpaceFrustrumPoints);
            auto world_end = std::end(WorldSpaceFrustrumPoints);
            auto world_initial = WorldSpaceFrustrumPoints[0];
            Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
            Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
            Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
            Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
            Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
            Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;
            auto slices_result = ObjectTree.collision(Frustrum);
            result.insert(result.end(), slices_result.begin(), slices_result.end());
        }
        return result;
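
    One structural suggestion (an addition, hedged): an axis-aligned box around each slice still over-approximates a perspective frustum, especially in world space where the frustum is rotated, and that alone can explain 42 objects passing for ~10 visible. Testing against the six frustum planes extracted from View * Projection (the Gribb/Hartmann method) is tighter and needs no slicing:

        // Sketch: frustum planes from the combined matrix, D3D row-vector convention
        D3DXMATRIXA16 viewProj = View * Projection;
        D3DXPLANE planes[6] = {
            D3DXPLANE(viewProj._14 + viewProj._11, viewProj._24 + viewProj._21,
                      viewProj._34 + viewProj._31, viewProj._44 + viewProj._41), // left
            D3DXPLANE(viewProj._14 - viewProj._11, viewProj._24 - viewProj._21,
                      viewProj._34 - viewProj._31, viewProj._44 - viewProj._41), // right
            D3DXPLANE(viewProj._14 + viewProj._12, viewProj._24 + viewProj._22,
                      viewProj._34 + viewProj._32, viewProj._44 + viewProj._42), // bottom
            D3DXPLANE(viewProj._14 - viewProj._12, viewProj._24 - viewProj._22,
                      viewProj._34 - viewProj._32, viewProj._44 - viewProj._42), // top
            D3DXPLANE(viewProj._13, viewProj._23, viewProj._33, viewProj._43),   // near (z in [0,1])
            D3DXPLANE(viewProj._14 - viewProj._13, viewProj._24 - viewProj._23,
                      viewProj._34 - viewProj._33, viewProj._44 - viewProj._43), // far
        };
        for (auto& p : planes)
            D3DXPlaneNormalize(&p, &p);
        // An object is culled when it lies entirely on the negative side of any plane.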

    Read the article

  • Vector.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha,
            levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0);

    Picture levelCannon as a laser beam that goes across the entire screen. I need to see whether my 3D model intersects the screen space inhabited by the sprite. I managed to dig up Vector.Unproject, but that seems to be useful only when dealing with a single point in 2D space, rather than an area. What can I do in my case?
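
    A hedged alternative (an addition, not from the original question): project the model into screen space instead of unprojecting the sprite. Viewport.Project maps a world point to screen coordinates, and the beam occupies a known screen rectangle; the bounding-sphere centre and the beam geometry below are assumptions.

        // Sketch: screen-space test of a model point against the beam's rectangle
        Vector3 screen = GraphicsDevice.Viewport.Project(
            boundingSphereCenter, projection, view, Matrix.Identity);
        Rectangle beamRect = new Rectangle(
            0, (int)levelCannons[i].position.Y,
            GraphicsDevice.Viewport.Width, beamHeight);   // full-width horizontal beam
        bool hit = beamRect.Contains(new Point((int)screen.X, (int)screen.Y));

    For more accuracy, project the sphere's radius as well, or test several points on the model's bounds.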

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this:

        glReadBuffer(GL_FRONT);
        glReadPixels(0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer);

    and it's pretty fast; I get the raw bitmap in _buffer. When I try to do this in DirectX, assuming that I have a D3DDevice object, I can do something like this:

        if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer)))
        {
            HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL);
        }

    But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there is a faster way to do this.
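
    The usual faster route (a sketch using the standard D3D9 API, adapted to this snippet's names): copy the back buffer into a system-memory surface with GetRenderTargetData, then LockRect to reach the raw pixels; no file I/O is involved.

        // Sketch: back buffer -> system memory -> raw bits
        IDirect3DSurface9* backBuffer = NULL;
        IDirect3DSurface9* sysMem = NULL;
        D3DSURFACE_DESC desc;

        D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
        backBuffer->GetDesc(&desc);
        D3DDevice->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                               D3DPOOL_SYSTEMMEM, &sysMem, NULL);
        D3DDevice->GetRenderTargetData(backBuffer, sysMem);  // GPU -> system memory

        D3DLOCKED_RECT lr;
        sysMem->LockRect(&lr, NULL, D3DLOCK_READONLY);
        // lr.pBits points at the raw bitmap; rows are lr.Pitch bytes apart
        sysMem->UnlockRect();
        sysMem->Release();
        backBuffer->Release();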

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = -(zFar + zNear) / (zFar - zNear);
            Result[2][3] = -valType(1);
            Result[3][2] = -(valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There doesn't seem to be any error in the code. So I tried to invert the projection. From the matrix, the z and w coordinates after projection are

        z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear)
        w' = -z

    and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives

        z = (2 * zFar * zNear) / (d * (zFar - zNear) - (zFar + zNear))

    Now, the problem is that I don't get the correct position (I have compared the reconstructed position with a rendered position). I then tried the corresponding formula I get by doing the same for a different projection matrix (linked in the original post; that formula's image has not survived). For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
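
    One hedged observation (an addition): a classic cause of exactly this symptom is the window-space remap. The depth buffer stores values in [0, 1], while the inversion above assumes NDC depth in [-1, 1], so the sample must be remapped first. A GLSL sketch, with uniform names assumed:

        // Sketch: view-space z from a sampled depth, GL depth range [0, 1]
        float depth = texture2D(depthTex, uv).r;
        float d = depth * 2.0 - 1.0;   // window [0,1] -> NDC [-1,1]
        float zView = (2.0 * zNear * zFar) / (d * (zFar - zNear) - (zFar + zNear));

    Skipping the remap and compensating elsewhere yields a formula resembling the inversion of a differently scaled matrix, which may be why the second formula appeared to work.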

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful, and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could enter a wireframe mode or something like that, but as far as I know that is not possible.

    Read the article

  • Better solution for boolean mixing?

    - by Ruben Nunez
    Sorry if this question has been asked in the past, but searching Google and here didn't yield relevant results, so here goes. I'm working on a fragment shader that implements both conditional/boolean diffuse and bump mapping; that is to say, you don't need a diffuse texture or a normals texture, and if they're not present, they're simply replaced with default values. My current solution is to use a uniform float to say "mix amount". For example, computing the diffuse texel works as:

        // Compute diffuse amount scaled by vCol.
        // If no texture is present (mDif = 0.0), then DiffuseTexel = vCol.
        // kT[0] is the diffuse texture, vTex is the texture coordinates, and
        // mDif is the uniform float containing the mix amount (either 0.0 or 1.0).
        vec4 DiffuseTexel = vCol * mix(vec4(1.0), texture2D(kT[0], vTex), mDif);

    While that works great and all, I was wondering if there's a better way of doing this, as I will never have any use for in-between values for funky effects. I know that perhaps the best solution is to simply write separate shaders for mDif = 0.0 and mDif = 1.0, but I'd like a more elegant solution than splicing shaders before compiling, or writing multiple shader files and keeping each one updated. Any ideas are greatly appreciated. =)
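
    A common alternative (an addition, not from the original question): keep one source file and compile variants with injected #define flags, so each material gets a branch-free shader without maintaining parallel files. Prepending "#define HAS_DIFFUSE_MAP\n" to the source string before compilation is enough:

        // Sketch: compile-time variant selection in one GLSL source
        #ifdef HAS_DIFFUSE_MAP
            vec4 DiffuseTexel = vCol * texture2D(kT[0], vTex);
        #else
            vec4 DiffuseTexel = vCol;
        #endif

    This keeps the uniform-mix version's single-source convenience while removing the per-fragment cost and the unused in-between values.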

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in that world. To increase performance, I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine. However, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one single moving object) after the call to DrawIndexedPrimitives, it errors out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. To get around this, I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as it seems to defeat the point of a buffer. To shed some light: the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid, with the connecting faces cut out) and the number of matrix translations (as it would suck to do one per cube). The moving objects are other items in the world which are dynamic and not fixed to the block grid, such as the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?
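
    For what it's worth, a sketch of the expected pattern (an addition; buffer names are assumed): DrawUserIndexedPrimitives streams vertices from CPU arrays and unbinds the current vertex buffer, so rebinding the static buffer each frame is normal. Binding is cheap; the expensive upload to video memory happened once at creation.

        // Sketch: static world from buffers, dynamic NPCs streamed, every frame
        device.SetVertexBuffer(worldVertexBuffer);     // cheap rebind
        device.Indices = worldIndexBuffer;
        device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, worldVertexCount, 0, worldTriangleCount);

        device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
            npcVertices, 0, npcVertices.Length,
            npcIndices, 0, npcIndices.Length / 3);     // unbinds the vertex buffer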

    Read the article

  • Locomotion-system with irregular IK

    - by htaunay
    I'm having some trouble with Locomotion's (a Unity3D asset) IK foot placement. I wouldn't call it "very bad", but it definitely isn't as smooth as the Locomotion System Examples. The strangest behavior (probably linked to the problem) is in the rendered foot markers that "guess" where the character's next step will be. In the demo, they are smooth and stable. However, in my project, they keep flickering, as if Locomotion changed its "guess" every frame; sometimes the automatically chosen step is too close to the previous step, and sometimes too distant, creating a very irregular pattern. The configuration is (apparently) identical to the human example in the demo, so I'm guessing the problem is my model and/or animation. Problem is, I can't figure out what it is =S Has anyone experienced the same problem? I uploaded a video of the bug to help interpret the issue (excuse the HORRIBLE quality, I was in a hurry).

    Read the article

  • Comparison between a value with static type Array and a possibly unrelated type Class

    - by Kaoru
    I got this error: "Comparison between a value with static type Array and a possibly unrelated type Class." Before, all of the functions were in one class; after I moved everything into many classes, this error appeared. How do I solve it? I am using AS3 and the as3isolib library. Here is the code after I split the functions across classes:

        if (Constant.dude.y < Constant._numY) {
            if (Constant.dude.sprites != marioBackClass) {
                Constant.dude.sprites = [marioBackClass];
                Constant.dudeDir = "Up";
            }
        }

    Here is the code from before, when everything was in one class:

        if (dude.y < _numY) {
            if (dude.sprites.toString() != marioBackClass.toString()) {
                dude.sprites = [marioBackClass];
                dudeDir = "Up";
            }
        }
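
    A hedged sketch of the fix: sprites is an Array, so comparing it against a Class is what triggers the warning; compare its contents instead, as the single-class version effectively did with toString():

        // Sketch: compare array contents, not an Array against a Class
        if (Constant.dude.y < Constant._numY) {
            if (Constant.dude.sprites.toString() != [marioBackClass].toString()) {
                Constant.dude.sprites = [marioBackClass];
                Constant.dudeDir = "Up";
            }
        }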

    Read the article

  • How to determine if a 3D voxel-based room is sealed, efficiently

    - by NigelMan1010
    I've been having some issues with efficiently determining whether large rooms are sealed in a voxel-based 3D game. I'm at a point where I have tried my hardest to solve the problem without asking for help, but not hard enough to give up, so I'm asking for help. To clarify: sealed means there are no holes in the room. There are oxygen sealers, which check whether the room is sealed and seal it depending on the oxygen input level. Right now, this is how I'm doing it:

    - Starting at the block above the sealer tile (the vent is on the sealer's top face), recursively loop through all 6 adjacent directions.
    - If the adjacent tile is a full, non-vacuum tile, continue through the loop.
    - If the adjacent tile is not full, or is a vacuum tile, check its adjacent blocks recursively.
    - Each time a tile is checked, decrement a counter.
    - If the counter hits zero and the last block is adjacent to a vacuum tile, return that the area is unsealed.
    - If the counter hits zero and the last block is not a vacuum tile, or the recursive loop ends (no vacuum tiles left) before the counter reaches zero, the area is sealed.

    If the area is not sealed, run the loop again with some changes:

    - Check adjacent blocks for "breathable air" tiles instead of vacuum tiles.
    - Instead of using a decrementing counter, continue until no adjacent "breathable air" tiles are found.
    - Once the loop is finished, set each checked block to a vacuum tile.

    Here's the code I'm using: http://pastebin.com/NimyKncC The problem: I'm running this check every 3 seconds. Sometimes a sealer will have to loop through hundreds of blocks, and in a large world with many oxygen sealers, these multiple recursive loops every few seconds can be very hard on the CPU. I was wondering if anyone with more experience in optimization could give me a hand, or at least point me in the right direction. Thanks a bunch.
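
    One direction worth sketching (an addition; the World accessors and coordinate packing are assumptions, and the imports are java.util.ArrayDeque and java.util.HashSet): replace the recursion with an explicit breadth-first queue plus a visited set, bounded by a maximum room volume. This avoids stack overflows, never re-visits a tile, and lets the sealer bail out early on open areas.

        // Sketch: bounded BFS flood fill; returns false on a hole or an oversized room
        boolean isSealed(World w, int x, int y, int z, int maxVolume) {
            ArrayDeque<int[]> queue = new ArrayDeque<>();
            HashSet<Long> visited = new HashSet<>();
            queue.add(new int[]{x, y, z});
            int open = 0;
            while (!queue.isEmpty()) {
                int[] p = queue.poll();
                long key = ((long)(p[0] & 0x1FFFFF) << 42)      // pack coords, 21 bits each
                         | ((long)(p[1] & 0x1FFFFF) << 21)
                         |  (long)(p[2] & 0x1FFFFF);
                if (!visited.add(key)) continue;                // already seen
                if (w.isSolid(p[0], p[1], p[2])) continue;      // wall: bounds the room
                if (w.isVacuum(p[0], p[1], p[2])) return false; // reached the outside
                if (++open > maxVolume) return false;           // too big to seal
                queue.add(new int[]{p[0] + 1, p[1], p[2]}); queue.add(new int[]{p[0] - 1, p[1], p[2]});
                queue.add(new int[]{p[0], p[1] + 1, p[2]}); queue.add(new int[]{p[0], p[1] - 1, p[2]});
                queue.add(new int[]{p[0], p[1], p[2] + 1}); queue.add(new int[]{p[0], p[1], p[2] - 1});
            }
            return true;
        }

    Caching the result and re-checking only when a block inside the room's bounding box changes removes most of the every-3-seconds cost.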

    Read the article

  • GLSL, all in one or many shader programs?

    - by stjepano
    I am doing some 3D demos using OpenGL, and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway, I have many different types of materials. Some materials have ambient and diffuse colors, some have an ambient occlusion map, some have a specular map and a bump map, etc. Is it better to support everything in one vertex/fragment shader pair, or is it better to create many vertex/fragment shaders and select them based on the currently selected material? What is the usual shader strategy in OpenGL or D3D?
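
    The common middle ground (an addition, hedged; compileProgram and the source strings are assumed helpers): write one "uber" source and compile a small set of variants keyed by material feature bits, cached so each combination compiles once. You get the single shader's maintainability with the many-shaders' runtime efficiency.

        // Sketch: program variants cached by feature bits
        #include <cstdint>
        #include <string>
        #include <unordered_map>

        enum : uint32_t { HAS_BUMP = 1u << 0, HAS_SPECULAR = 1u << 1 };
        std::unordered_map<uint32_t, GLuint> programCache;

        GLuint programFor(uint32_t features) {
            auto it = programCache.find(features);
            if (it != programCache.end()) return it->second;
            std::string defines;
            if (features & HAS_BUMP)     defines += "#define HAS_BUMP\n";
            if (features & HAS_SPECULAR) defines += "#define HAS_SPECULAR\n";
            GLuint prog = compileProgram(defines + vertexSrc,
                                         defines + fragmentSrc);   // assumed helper
            return programCache[features] = prog;
        }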

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I am trying to run the 'Connector' example that comes bundled with the SmartFoxServer 2X downloads; it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server, but after being published as a SWF movie it doesn't work: it loads the configuration file but can't connect, and gives the error "connection failure... ERROR 2048". This is the example I am talking about: http://docs2x.smartfoxserver.com/ExamplesFlash/connector
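
    For context (an addition, hedged): Flash Error #2048 is a security-sandbox violation, which typically means the published SWF could not retrieve a socket policy allowing it to connect to the server's host and port. SmartFoxServer 2X can serve such a policy itself when configured; a standalone policy file looks roughly like this (the port is SFS2X's default and may differ in your setup):

        <?xml version="1.0"?>
        <!-- Sketch: socket policy permitting connections to the SFS2X port -->
        <cross-domain-policy>
            <allow-access-from domain="*" to-ports="9933"/>
        </cross-domain-policy>

    Content run inside the Flash IDE is trusted and bypasses these checks, which would explain why it only fails after publishing.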

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    Ok, I got some code where you select blocks on a grid. The selection works. I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic: it knows the correct one is selected, shows it in the correct place, and renders it correctly. When there is only one entity, it works properly. I have a video of the bug in action; note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }
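
    A hedged reading of the code above: u_mvpMatrix is uploaded before mEntityMatrix is rebuilt for the current entity, so each soldier is drawn with the previous iteration's transform while v_color is set for the current one, which would produce exactly the highlight/elevation mismatch in the video. Build the matrix first, then upload:

        // Sketch: set the matrix uniform after composing the entity transform
        renderer.testShader.begin();
        renderer.texture_soldier.bind(0);
        Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
        mEntityMatrix.set(renderer.mCamera.combined);
        if (mSelectedEntities.contains(e)) {
            mEntityMatrix.translate(pos.x, 1f, pos.y);
            renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
        } else {
            mEntityMatrix.translate(pos.x, 0f, pos.y);
            renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
        }
        mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
        renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix); // now current
        renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
        renderer.testShader.end();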

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've gone through that tutorial, but I can't find where the next one in the series is! There are no links from that page to either the next tutorial in the series or back up to a table of contents that lists all of the tutorials. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and from the MSDN Bing search box has turned up nothing; it's as if they've removed all links to these tutorials, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • Can DrawIndexedPrimitives() be used for drawing a loaded model mesh-wise?

    - by Afzal
    I am using DrawIndexedPrimitives() to draw a loaded 3D model by drawing each mesh part, but this process makes my application very slow. This is perhaps because of the very large number of vertex/index buffers created in video memory. That is why I am looking for a way to draw each model mesh as a whole instead. The problem is that I don't know how I would then set the textures of that mesh. Can anyone offer me some guidance?
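
    In XNA, each ModelMeshPart carries its own effect, which is where the texture travels; drawing part by part from the model's shared buffers keeps texture assignment straightforward. A hedged sketch (the matrix variables are assumed):

        // Sketch: per-part draw with the part's own texture via BasicEffect
        foreach (ModelMesh mesh in model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            BasicEffect fx = (BasicEffect)part.Effect;   // texture set at load time
            fx.World = world; fx.View = view; fx.Projection = projection;
            fx.TextureEnabled = true;

            device.SetVertexBuffer(part.VertexBuffer);
            device.Indices = part.IndexBuffer;
            foreach (EffectPass pass in fx.CurrentTechnique.Passes)
            {
                pass.Apply();                            // binds the part's texture
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    part.VertexOffset, 0, part.NumVertices,
                    part.StartIndex, part.PrimitiveCount);
            }
        }

    If many models share materials, sorting draws by texture/effect cuts state changes, which is usually the real cost rather than the buffers themselves.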

    Read the article

  • Cutting out smaller rectangles from a larger rectangle

    - by Mauro Destro
    The world is initially a rectangle. The player can move along the world border and then "cut" the world via orthogonal (not oblique) paths. When the player reaches the border again, I have a list of the path segments they just made. I'm trying to calculate and compare the two areas created by the cut, and select the smaller one to remove it from the world. After the first iteration, the world is no longer a rectangle, and the player must move on the border of this new shape. How can I do this? Is it possible to have a non-rectangular path? How can I move the player character only on the path? EDIT: Here is an example of what I'm trying to achieve (the original screenshots have not survived): the initial screen layout; the character moves inside the world and then reaches the border again; the segment of the border inside the smaller area is deleted, and the last path becomes part of the world border; the character moves again inside the world; segments of the border inside the smaller area are deleted; and so on. (A small area-comparison sketch follows below.)
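
    For the area comparison, a minimal sketch (an addition; Vector2 stands in for any 2D point type): store each candidate piece as a closed vertex loop and apply the shoelace formula, then remove the loop with the smaller area.

        // Sketch: shoelace formula for the area of a simple polygon
        static double Area(IList<Vector2> poly)
        {
            double sum = 0;
            for (int i = 0; i < poly.Count; i++)
            {
                Vector2 a = poly[i], b = poly[(i + 1) % poly.Count];
                sum += a.X * b.Y - b.X * a.Y;   // signed cross product per edge
            }
            return Math.Abs(sum) * 0.5;
        }

    Keeping the world border itself as a vertex loop also answers the movement question: constrain the character to the loop's segments, advancing by a parameterized distance along the current segment.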

    Read the article
