Search Results

Search found 35343 results on 1414 pages for 'development tools'.

  • Restricted pathfinding Area

    - by SubZeron
    So I'm trying to create a little "XCOM: Enemy Unknown"-like game, using Aron Granberg's Pathfinding Tool (free version) to handle the "click to move" part. I want to add a little trap system where the hero gets stuck inside an area, so that he can only move within this trapped area. So far everything is fine; however, when I click outside the trapped area, the hero tries to reach the destination even though the wall prevents him from reaching it. So my question is: is there any way to dynamically restrict the area where the pathfinding system works to the trapped area? And which graph type is recommended in this situation or for this kind of game (Grid Graph / Navmesh Graph / Point Graph)? Thank you. Image link for explanation: https://dl.dropbox.com/u/77993668/exemple.jpg
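
    For what it's worth, one workaround is to validate the computed path before moving: if the returned path ends far from the clicked destination, the destination is unreachable from inside the trap and the click can be ignored. A minimal sketch, assuming the A* Pathfinding Project's Seeker callback API (exact member names vary between versions; StartMovingAlong is a hypothetical movement routine):

        // Sketch: reject a click whose computed path ends too far from the
        // requested destination (i.e. the target lies outside the trapped area).
        public void OnClick(Vector3 destination)
        {
            seeker.StartPath(transform.position, destination, path =>
            {
                if (path.error) return;
                Vector3 reached = path.vectorPath[path.vectorPath.Count - 1];
                if (Vector3.Distance(reached, destination) < 0.5f)   // tolerance in world units
                    StartMovingAlong(path.vectorPath);               // hypothetical movement routine
                // otherwise: unreachable from inside the trap, so ignore the click
            });
        }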

  • Blender 2.6: How to Merge the Pros of Meshes and Surfaces

    - by fridojet
    There are two interesting kinds of objects: meshes and surfaces, and each of them offers very cool features. Nice features of surfaces (for example): they're as scalable as vector graphics (really nice!), and you can build winding shapes really simply. Nice features of meshes (for example): you can build organic shapes really well using Sculpt Mode and a graphics tablet, and you can use special things like Physics. My question: there are things for which surfaces are better and things for which meshes are better, but how can I use both the best features of surfaces and the best features of meshes on one object at once? For example: how can I use Physics (as on meshes) on losslessly scalable objects (like surfaces)? Thanks.

  • How can I keep the correct alpha when rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have some problems with the alpha channel. Some artists told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas become lighter, or even totally white. I mainly focus on the additive and blend modes, that is: ADDITIVE: srcAlpha vs. 1; BLEND: srcAlpha vs. 1-srcAlpha. I tried a technique called premultiplied alpha. It gets you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not with ADDITIVE mode. As you can see from the parameters, BLEND mode always keeps the accumulated alpha within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it comes out too big or too small relative to the RGB. What can I do? Any help will be greatly appreciated. PS: If you don't understand what I am trying to do: there is a commercial tool called "Particle Illusion" where you can create various particles and then save the scene to a texture, choosing to remove the background of the particles.
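
    For reference, a minimal sketch of the unpremultiply step described above (plain C#; the clamp is an assumption that merely hides, rather than fixes, alpha that the additive mode has accumulated past 1):

        // Sketch: recover straight (unpremultiplied) color from a premultiplied
        // pixel. Dividing by a tiny alpha blows up highlights, which matches the
        // "lighter / totally white areas" symptom described above.
        static void Unpremultiply(ref float r, ref float g, ref float b, float a)
        {
            if (a <= 0f) { r = g = b = 0f; return; }   // fully transparent: RGB is undefined
            r = Math.Min(r / a, 1f);
            g = Math.Min(g / a, 1f);
            b = Math.Min(b / a, 1f);
        }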

  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OSX and Windows, using OGRE3D, but I've hit some problems along the way. I'm using CMake to create my platform-specific project files (a VS solution and an Xcode project). Some problems I found: OGRE3D source is distributed in two flavors, Windows sources and UNIX/OSX sources. On OSX, compiling the dependencies (freetype, FreeImage and especially OIS) is such a pain. I don't know how to handle precompiled dependencies (they exist for both Win & Mac). This may sound like a noob question, but I would appreciate some tips on it — resources, forum posts, anything. Does any "cross-platform base project for OGRE3D" exist on the net? It would be really helpful if someone who has already managed to do this could shed some light. Btw, I'm not basing the project on OGRE3D; it's just the biggest library I'm using, so I depend a lot on it. Thanks in advance!

  • The input doesn't recognize that I released the key?

    - by joapet99
    I'm creating a window (JOptionPane) in response to a collision. However, if the player is holding a key down when the window pops up, the input doesn't register a key release when the key is actually released. I don't think you can just check it with an isReleased function on the input, since the input state is effectively corrupted at that point. Can you help me? This is how I check whether the key is down:

        if (input.isKeyDown(Input.KEY_A) && TestLevel.isFighting == false) {
            if (owner.canMoveLeft) {
                position.x -= speed * delta;
            }
        }

    I am not handling the key release myself, but checking whether the key is down should be enough — yet it isn't.

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List<PolyLine> LineSegments. I have a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method, and I am testing it on a single line segment, from Vector2(200, 0) to Vector2(300, 200). The problem is that it detects a collision anywhere along the line's infinite extension, out into space. I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            /// <summary>Upper left-hand corner of the owner of this instance.</summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>Relative start point of the line segment.</summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>Relative end point of the line segment.</summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>Absolute position of the starting point of the line segment.</summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }

            /// <summary>Absolute position of the end point of the line segment.</summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }

            /// <summary>Unit normal pointing to the left of the segment's direction.</summary>
            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 p = AbsoluteEndPoint - AbsoluteStartPoint;
                    p.Normalize();
                    return new Vector2(-p.Y, p.X);
                }
            }

            /// <summary>Sole ctor.</summary>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeStartPoint = relStart;
                RelativeEndPoint = relEnd;
            }
        }

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                                       poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;
            return distance > 0;
        }
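
    For comparison, a common way to confine the test to the segment itself is to clamp the projection of the circle's center onto the segment before measuring the distance. A minimal sketch with XNA-style types (the method name is illustrative):

        // Sketch: circle vs. line *segment* test. Clamping t keeps the closest
        // point on the segment, so points beyond either endpoint no longer hit.
        public static bool CircleIntersectsSegment(Vector2 center, float radius, Vector2 a, Vector2 b)
        {
            Vector2 ab = b - a;
            float t = Vector2.Dot(center - a, ab) / ab.LengthSquared();
            t = MathHelper.Clamp(t, 0f, 1f);          // restrict to the segment
            Vector2 closest = a + ab * t;             // nearest point on the segment
            return (center - closest).LengthSquared() <= radius * radius;
        }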

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the "default" bilinear filtering on this input data. I can't move to GL_NEAREST either, since it would create "blocky" rendering. So I guess the better way to proceed is to take full charge of the interpolation myself. For this to work, I would need the input data at its "natural" resolution (which means 4 samples) and the relative position between the sampled points. Is that possible, and if yes, how? [EDIT] Since I started this question, I found this page, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me though: the dimensions of the texture must be provided as an argument. It seems there is no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, though, since it's not under my responsibility, and it translates into added complexity for others.

  • Can anyone suggest some Game Frameworks for GNU/Linux? [closed]

    - by dysoco
    So I've been developing a little bit with XNA + C# on Windows — not really much, just some 2D stuff — but I've found that XNA is a really good framework. I'm a GNU/Linux user, and I'm definitely migrating my desktop to Gentoo Linux (I've been using Arch on my laptop for a while now). But, of course, I need a C# + XNA alternative. I'm not really an expert in any language, so I can really pick up anything (except, maybe, functional ones). I prefer C-like languages such as Java or Ruby; I tried Python but found the whitespace syntax confusing. I would like to hear some of your suggestions. I'm not asking for "the best", but for "some suggestions", so I think this is objective enough. You're probably going to suggest C++ + SDL, but I would prefer something more "high level" like XNA — still, I'm open to discussing anything. So... any ideas? Note: I think this question meets the guidelines for this site; if it doesn't, please don't just downvote it, but comment on what I can do to improve it. Thanks. PS: 2D games, not 3D.

  • Is there a repository of game logic algorithms?

    - by New2This
    I'm writing my first 2D game, and I'm writing some tracking logic for the computer enemies. Basic follow-the-player tracking was easy, but ineffectual. Too easy to escape. So I'm trying to implement some more sophisticated flanking and other tactics, and (as expected) it's pretty tricky. This is a topic I know nothing about. I'm going to keep trying, but it'd be awesome to have some examples or tips to work off of. Is there any place that has a decent set of pseudocode AI algorithms, or tips or advice on the subject, e.g. for 2D tracking?
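
    As one concrete example of going beyond plain follow-the-player, here is a minimal lead-pursuit sketch (XNA-style Vector2 assumed; using a single rough time estimate instead of a true intercept solution is a deliberate simplification):

        // Sketch: aim at a predicted future position instead of the player's
        // current one, which makes straight-line escapes much harder.
        public static Vector2 InterceptDirection(Vector2 chaser, Vector2 target,
                                                 Vector2 targetVelocity, float chaserSpeed)
        {
            float timeToReach = Vector2.Distance(chaser, target) / chaserSpeed;  // rough estimate
            Vector2 predicted = target + targetVelocity * timeToReach;
            Vector2 direction = predicted - chaser;
            return direction == Vector2.Zero ? Vector2.Zero : Vector2.Normalize(direction);
        }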

  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2D RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, as an example), it's sent to the graphical component to create its UI equivalent, and here I'm having some problems. So far I've considered three ways to do it: (1) Compute the size of the image (in the example above it'll be a 3*2 rectangle), create an invisible rectangle, then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This is slightly problematic because of the number of large images I need to keep, but mostly because I have a lot of fine detail onscreen, and a continuous line obstructs the view. (2) Again compute the size of the image, but now create several (let's say 4) invisible images of that size; then, instead of a single continuous line, I switch between the four images, each showing only a fourth of the spots, which creates a path animation. This is nicer on the eye, but the memory demands and the time needed to compute each such image loop are significant. (3) Just create a list of single markers, each on a different spot on the path. This is very quick and easy on memory, but too sparse. Is there a simple or resource-light system for creating path animations?
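
    On the third option: the sparseness can be fixed without any precomputed images by reusing a single marker sprite and animating its phase along the path, so the markers appear to "march" toward the destination. A minimal sketch, assuming SFML.Net's Sprite/RenderWindow API (spacing and speed are invented tuning values):

        // using SFML.Graphics; using SFML.System; using System.Collections.Generic;
        // Sketch: draw one shared marker at every Nth waypoint, advancing the
        // starting offset over time -- no per-path images are precomputed.
        void DrawPath(RenderWindow window, IReadOnlyList<Vector2f> waypoints,
                      Sprite marker, float timeSeconds)
        {
            const int spacing = 4;                          // one marker every 4 cells
            int phase = (int)(timeSeconds * 8f) % spacing;  // 8 animation steps per second
            for (int i = phase; i < waypoints.Count; i += spacing)
            {
                marker.Position = waypoints[i];
                window.Draw(marker);
            }
        }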

  • How to display a projectile trajectory in C++? [on hold]

    - by sana
    I am trying to make a Gorillas-like game in C++ whose specification is roughly this: "In this game both players select their position on a level, scaled ground. The scale of the ground is from 0 to 20 divisions, each division corresponding to 10 meters. Each player enters an angle and an initial velocity (limits for both should be defined), and the player hurls a stone with this velocity at the given angle. The stone follows a projectile trajectory, and if it hits the other player, the shooting player wins. A random effect of air should also be incorporated: air will support one player and resist the other. The velocity of the air should be generated randomly, within some limits, and subtracted from or added to the horizontal velocity of the stone. An arrow of suitable length shall represent the air's direction and velocity. The player who hits first wins." How do I display the trajectory of the stone?
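
    For reference, the kinematics behind such a display (a sketch, taking up as positive y and g = 9.8 m/s^2): with launch speed v, launch angle a, and a wind speed w added to or subtracted from the horizontal component, the velocity components are vx = v*cos(a) +/- w and vy = v*sin(a). At time t the stone is then at x(t) = x0 + vx*t and y(t) = y0 + vy*t - g*t^2/2. Stepping t in small increments and drawing a point (or a small circle) at each (x, y) until y returns to ground level traces the arc on screen.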

  • How do I properly implement zooming in my game?

    - by Rudy_TM
    I'm trying to implement a zoom feature, but I have a problem. I zoom the camera in and out with a pinch gesture and update the camera on each render, but my sprites keep their original positions and don't change with the zoom in or zoom out. The libraries are from libgdx. What am I missing?

        private void zoomIn() {
            ((OrthographicCamera) this.stage.getCamera()).zoom += .01;
        }

        public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) {
            zoomIn();
            return false;
        }

        public void render(float arg0) {
            this.gl.glClear(GL10.GL_DEPTH_BUFFER_BIT | GL10.GL_COLOR_BUFFER_BIT);
            ((OrthographicCamera) this.stage.getCamera()).update();
            this.stage.draw();
        }

        public boolean touchDown(int arg0, int arg1, int arg2) {
            this.stage.toStageCoordinates(arg0, arg1, point);
            Actor actor = this.stage.hit(point.x, point.y);
            if (actor instanceof Group) {
                ((LevelSelect) ((Group) actor).getActors().get(0)).touched();
            }
            return true;
        }

    [Screenshots: Zoom In / Zoom Out]

  • How do I save files with libgdx so that users can't read them?

    - by Rudy_TM
    Writing my game in libgdx, I arrived at the point where I need to save the player's stats and the level info. However, libgdx doesn't allow writing files inside the application's folder; only external storage (on the SD card) is allowed. The point is that I don't want the file to be seen by anyone — or, if they can see it, how can I convert it to a binary format so it's not human-readable? I just want to hide the file.

  • State machine interpreters

    - by saadtaame
    I wrote my own state machine tool in C, and at this point I'm faced with two choices for specifying state machines: crafting a little language and writing an interpreter, or writing a compiler for that language. I know the advantages and disadvantages of each. I'd like to know what choices game programmers have made for their games. If you've used a state machine in your game in any form, I'd be interested in knowing how you did it.
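
    For illustration, the interpreter option in miniature: the machine is plain data, and a small loop "executes" it. (Sketched in C# rather than C for brevity; the states and events are invented.)

        // using System.Collections.Generic;
        // Sketch: a table-driven FSM interpreter. The transition table is the
        // "little language"; Handle() is the whole interpreter.
        class Fsm
        {
            readonly Dictionary<(string State, string Event), string> transitions =
                new Dictionary<(string, string), string>
                {
                    { ("idle",  "see_player"),  "chase" },
                    { ("chase", "lost_player"), "idle"  },
                    { ("chase", "low_health"),  "flee"  },
                };

            public string State { get; private set; } = "idle";

            public void Handle(string evt)
            {
                if (transitions.TryGetValue((State, evt), out var next))
                    State = next;   // unknown (state, event) pairs are ignored
            }
        }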

  • Inventory Item Existence Checker

    - by Annalyne
    I have a question about declaring my inventory. I made it an array of strings named game_inventory, with a constant as its maximum size. The thing is, I want the user to be able to use an item once he or she has gained it. The problem is that I don't know what syntax to use to determine whether the user has an item, and then use it. Here's the code I've started with — declaring the inventory:

        const int MAX_ITEMS = 15;
        string game_inventory[MAX_ITEMS];
        int itemnum = 0;

    I have items like potions, antidotes, gems and others, and I use

        game_inventory[itemnum++] = "Potion";

    to place items in my inventory. If I want to use a potion — if I have one — how can I write a function that checks whether I have a potion (or anything else) and uses it?

  • Doing ImageMagick-like stuff in Unity (using a mask to edit a texture)

    - by Codejoy
    There is this tutorial on masking in ImageMagick: http://www.imagemagick.org/Usage/masking/#masks I was wondering if there is some way to mimic that behavior in Unity (like cutting an image up based on a black image mask that turns image parts transparent) and then trim the image in-game. I'm trying to hack around with the webcam feature and reproduce some of the ImageMagick/OpenCV stuff in Unity, but I'm sadly unequipped with masks, shaders etc. in terms of Unity skill/knowledge. I'm not even sure where to start.
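
    As a starting point, a minimal CPU-side sketch of the masking step in Unity (standard Texture2D API; both textures must be imported as readable, and the mask is assumed to match the source's dimensions):

        // using UnityEngine;
        // Sketch: copy the source pixels and use the mask's grayscale value as
        // alpha, so black mask areas become fully transparent.
        Texture2D ApplyMask(Texture2D source, Texture2D mask)
        {
            var result = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
            Color[] src = source.GetPixels();
            Color[] msk = mask.GetPixels();
            for (int i = 0; i < src.Length; i++)
                src[i].a = msk[i].grayscale;   // black -> 0 (transparent), white -> 1
            result.SetPixels(src);
            result.Apply();
            return result;
        }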

  • Ideas on multiplayer games which include lots of collaboration?

    - by user494461
    For my master's thesis I want to design a small multiplayer game which includes more than one player (maybe 5), with 2-3 players collaborating at a time to achieve some task. The most important thing is realistic simulation of the movable objects in the scene, which more than one player should be able to interact with simultaneously. I would also prefer large virtual environments (VEs), like MMOs, where groups of players interact in different areas of the VE. Tasks should involve 2-3 players touching movable objects at the same time; e.g., a very basic task could be users lifting a cube together and pushing it through a hole. I am not new to designing virtual environments with OpenGL, but I have never designed games before and rarely play them, apart from a few favorites like FIFA. I wanted some ideas on what kinds of games I should look at for inspiration for tasks where users gain points and win. Are there any current indie games which might inspire me?

  • CLR Profiler Allocated Bytes and XNA ContentManager

    - by Vackup
    I've been fighting with XNA's ContentManager and memory allocations for some weeks, because I'm trying to port my game from XNA (Windows) to ExEn/MonoTouch (iPhone). The problem is that after playing a few levels, my game exits unexpectedly on a real iPhone device (not the simulator). Profiling memory usage on Windows with CLRProfiler, I found some useful things, but I also found something I don't understand. If I use two ContentManagers (one for shared assets and one for level assets), then when profiling, "Allocated Bytes" grows and grows from level to level, but memory consumption as measured by the Windows Task Manager stays constant (down when I unload the content manager and up again when I load content). Obviously, I call contentManager.Unload() when a level ends. After a few levels, the game exits unexpectedly on the iPhone. If I use one ContentManager, CLRProfiler's "Allocated Bytes" stays constant on Windows and on the iPhone, and I can play the game normally without unexpected exits. I use the same assets level after level. It seems like on iOS (iPhone), loading and unloading the same assets keeps allocating memory until all device memory is consumed, so iOS kills the app. Can anybody explain how this really works? I've read quite a bit, but I still don't understand what's going on.
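
    For context, a minimal sketch of the two-manager pattern described above (standard XNA ContentManager API; the asset path is invented):

        ContentManager sharedContent;   // lives for the whole game
        ContentManager levelContent;    // recreated for each level

        void LoadLevel(IServiceProvider services, string levelName)
        {
            levelContent = new ContentManager(services, "Content");
            var tiles = levelContent.Load<Texture2D>("Levels/" + levelName + "/tiles");
        }

        void UnloadLevel()
        {
            levelContent.Unload();      // frees every asset this manager loaded
            levelContent.Dispose();
        }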

  • Forward rendering and multiple shadow maps

    - by Irbis
    I have two light sources in my scene, and I created two FBOs which store the depth textures for these lights. My render loop looks like this:

        bind fbo1; save depth values for the first light; unbind fbo1
        bind fbo2; save depth values for the second light; unbind fbo2
        enable additive blending
        bind the first depth texture; render the scene
        bind the second depth texture; render the scene
        disable additive blending

    For one light source the program works fine. For many light sources I use additive blending to accumulate the lighting results, but then some objects become transparent (for example, when an object further from the camera is drawn before an object closer to the camera). How do I resolve that problem? How should I accumulate the lighting effects for many light sources (many shadow maps)? P.S. I use OpenGL/GLSL 3.3+.

  • Movement on the X and Z axes is combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left, and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X, it moves left and forwards!

        setPosition(getPosition().X + 2 /* times deltatime */, getPosition().Y, getPosition().Z);

    I was astonished that XNA doesn't have its own setPosition function, so I made a parent class for all objects with setPosition and Draw functions. setPosition simply edits a variable mPosition and passes it to the common draw function:

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[block.Bones.Count];
        block.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in block.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y))
                    * Matrix.CreateTranslation(mPosition);
                effect.View = game1.getView();
                effect.Projection = game1.getProjection();
            }

            // Draw the mesh, using the effects set above.
            mesh.Draw();
        }

    I tried to work it out by incrementing and decrementing the Z axis instead, but nothing happens! So changing X moves the object along both its X and Z axes, while changing Z does nothing. Great. How do I separate the X and Z axis movement?

  • Achieve anisotropic filtering

    - by fedab
    I want to apply anisotropic filtering to my scene. I use SharpDX (Direct3D 11) and C#. How do I set up anisotropic filtering in my shader? Currently I try this in the shader:

        Texture2D tex;

        sampler textureSampler = sampler_state
        {
            Texture = (tex);
            MipFilter = Anisotropic;
            MagFilter = Anisotropic;
            MinFilter = Anisotropic;
            MaxAnisotropy = 16;
        };

        float4 PShader(float4 position : SV_POSITION, float4 color : COLOR,
                       float2 tex0 : TEXCOORD0) : SV_TARGET
        {
            float4 textureColor = tex.Sample(textureSampler, tex0) * color;
            return textureColor;
        }

    The object renders textured, but it is not filtered anisotropically. I can write anything into the parameters — even invalid values — without getting any errors, and the result is the same: objects without anisotropic filtering applied. Do I have to set this in the shader, or can I also do it with a SamplerState? I tested that too, but didn't get a result either. Some steps on what I have to set would be helpful.
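
    On the SamplerState route, a minimal sketch of the Direct3D 11 way — the sampler is created and bound from C# rather than declared in a sampler_state block (SharpDX.Direct3D11 API assumed; `device` and `context` stand for the existing Device and DeviceContext):

        // Sketch: create an anisotropic sampler on the CPU side and bind it to
        // the pixel-shader sampler slot the HLSL code reads from.
        var description = new SamplerStateDescription
        {
            Filter = Filter.Anisotropic,
            AddressU = TextureAddressMode.Wrap,
            AddressV = TextureAddressMode.Wrap,
            AddressW = TextureAddressMode.Wrap,
            MaximumAnisotropy = 16,
            MinimumLod = 0f,
            MaximumLod = float.MaxValue,
        };
        var sampler = new SamplerState(device, description);
        context.PixelShader.SetSampler(0, sampler);   // slot 0 = register(s0) in HLSL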

  • OpenGL: sluggish performance in extracting textures from the GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it, which requires extracting it from GPU memory. For this operation I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that. For example, an 8 MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-boggling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed at so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: looking again at the 8 MB texture, it now requires 500 ms instead of 3 s. That's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timings remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage) I get a fixed 16 ms result, whatever the texture size or complexity — it really looks like the timing of one frame at 60 fps. The timing is measured for the full rendering + saving sequence; the call to glGetTexImage() alone does not really matter. That being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to read it back.

  • How to configure Bullet for LookAt?

    - by AllCoder
    I'm having problems positioning Bullet objects. I am doing:

        ToolVec3 origin = ToolVec3(obj_posx, obj_posy, obj_posz);
        ToolVec3 vmod = ToolVec3(object_sizex / 2.0f, object_sizey / 2.0f, object_sizez / 2.0f);

        btTransform shapeTransform = btTransform::getIdentity();
        shapeTransform.setOrigin(btVector3(origin.x + vmod.x, origin.y + vmod.y, origin.z + vmod.z));

        btDefaultMotionState* myMotionState = new btDefaultMotionState(shapeTransform);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, m_collisionShapes[2], localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);

    I then do:

        btCollisionObject* colObj = m_dynamicsWorld->getCollisionObjectArray()[i];
        btRigidBody* body = btRigidBody::upcast(colObj);
        if (body && body->getMotionState())
        {
            btDefaultMotionState* myMotionState = (btDefaultMotionState*)body->getMotionState();
            myMotionState->m_graphicsWorldTrans.getOpenGLMatrix(m);
        }
        else
        {
            colObj->getWorldTransform().getOpenGLMatrix(m);
        }

    After obtaining the matrix m, I use it as the model matrix. I am observing a few things: I must add a weird "size / 2" to the object's position to have it drawn normally, and I have the following "up" look-at vector defined: (0.0f, -1.0f, 0.0f) — basically, Y grows up and Z grows forward (into the monitor), BUT X grows LEFT, so I think there is some conflict with the X direction. I cannot obtain consistent positioning with the world set up like this. How do I configure this in Bullet? And why the weird + size/2 requirement?

  • pygame.Rect around circle

    - by geekkid
    I'm trying to make a Pong game in pygame, but I can't figure out how to put a ball circle — which I can create with pygame.draw.circle — into a pygame.Rect object, so that I can use the colliderect function and manipulate the ball's position. For example, with rectangles I can do something like this:

        rect = pygame.Rect(255, 255, 100, 100)
        pygame.draw.rect(screen, yellow, rect)

    and then when I change the pygame.Rect object's position, the drawing primitive's position also changes. How can the same effect be achieved when I want to draw a circle instead of a rectangle? Thank you.

  • XNA VertexBuffer.SetData performance suggestions

    - by CodeSpeaker
    I have a 3D world in a grid layout where each grid cell contains its own vertex and index buffer for the mesh/terrain of that cell. When the player moves outside the boundaries of his cell, I dynamically load more cells in his walking direction based on his viewing distance. This triggers some number of vertex and index buffer initializations, depending on how many cells need to be generated, and causes the framerate to drop annoyingly during this time. The generation of terrain data is handled in a separate thread and runs smoothly; the vertex and index buffers are added during the update cycle of the game loop. I've tried batching the number of cells to be processed, to avoid sending too much data into the buffers at once. This worked OK at a shorter viewing distance (about 9 cells to process), but not as well at greater distances with around 30 cells to process. Any idea how I can optimize this?
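
    Extending the batching idea, a minimal sketch of a fixed per-frame budget, so one burst of freshly generated cells cannot stall a single frame (standard XNA buffer API; `newCells`, the Cell type and its members are invented stand-ins for the worker thread's output):

        // Sketch: cap buffer creation per Update; leftover cells simply wait
        // for the next frame. newCells is assumed to be a thread-safe queue
        // (e.g. ConcurrentQueue<Cell>) filled by the terrain worker thread.
        const int BuffersPerFrame = 2;   // tune against the frame budget

        void CreatePendingBuffers()      // called from Update()
        {
            Cell cell;
            for (int i = 0; i < BuffersPerFrame && newCells.TryDequeue(out cell); i++)
            {
                var vb = new VertexBuffer(GraphicsDevice, typeof(VertexPositionNormalTexture),
                                          cell.Vertices.Length, BufferUsage.WriteOnly);
                vb.SetData(cell.Vertices);

                var ib = new IndexBuffer(GraphicsDevice, IndexElementSize.SixteenBits,
                                         cell.Indices.Length, BufferUsage.WriteOnly);
                ib.SetData(cell.Indices);

                cell.SetBuffers(vb, ib);   // hypothetical: hand the buffers to the cell
            }
        }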
