Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Push back rectangle where collision happens

    - by Tifa
    I have tile collision in a game I am creating, but the problem is that once a collision happens (for example, a collision on the right side), my sprite can't move up or down :( That's because I set the speed to 0, which I think is wrong. Here is my code: int startX, startY, endX, endY; float pushx = 0,pushy = 0; // move player if(Gdx.input.isKeyPressed(Input.Keys.LEFT)){ dx=-1; currentWalk = leftWalk; } if(Gdx.input.isKeyPressed(Input.Keys.RIGHT)){ dx=1; currentWalk = rightWalk; } if(Gdx.input.isKeyPressed(Input.Keys.DOWN)){ dy=-1; currentWalk = downWalk; } if(Gdx.input.isKeyPressed(Input.Keys.UP)){ dy=1; currentWalk = upWalk; } sr.setProjectionMatrix(camera.combined); sr.begin(ShapeRenderer.ShapeType.Line); Rectangle koalaRect = rectPool.obtain(); koalaRect.set(player.getX(), player.getY(), pw, ph /2 ); float oldX = player.getX(), oldY = player.getY(); // THIS LINE WAS ADDED player.setXY(player.getX() + dx * Gdx.graphics.getDeltaTime() * 4f, player.getY() + dy * Gdx.graphics.getDeltaTime() * 4f); // THIS LINE WAS MOVED HERE FROM DOWN BELOW if(dx> 0) { startX = endX = (int)(player.getX() + pw); } else { startX = endX = (int)(player.getX() ); } startY = (int)(player.getY()); endY = (int)(player.getY() + ph); getTiles(startX, startY, endX, endY, tiles); for(Rectangle tile: tiles) { sr.rect(tile.x,tile.y,tile.getWidth(),tile.getHeight()); if(koalaRect.overlaps(tile)) { //dx = 0; player.setX(oldX); // THIS LINE CHANGED Gdx.app.log("x","hit " + player.getX() + " " + oldX); break; } } if(dy > 0) { startY = endY = (int)(player.getY() + ph ); } else { startY = endY = (int)(player.getY() ); } startX = (int)(player.getX()); endX = (int)(player.getX() + pw); getTiles(startX, startY, endX, endY, tiles); for(Rectangle tile: tiles) { if(koalaRect.overlaps(tile)) { //dy = 0; player.setY(oldY); // THIS LINE CHANGED //Gdx.app.log("y","hit" + player.getY() + " " + oldY); break; } } sr.rect(koalaRect.x,koalaRect.y,koalaRect.getWidth(),koalaRect.getHeight() / 2); sr.setColor(Color.GREEN); sr.end(); I want to push the sprite back when a collision happens, but I have no idea how :D Please help.
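    A common way to get the push-back behaviour is to resolve movement one axis at a time: apply the X move, undo only X on overlap, then do the same for Y. Note that in the code above koalaRect is filled in before the player is moved, so the overlap tests run against the old position. A minimal sketch in Java/libGDX (the rectangle list stands in for the question's getTiles() result; all names are illustrative, not the asker's code):

    import com.badlogic.gdx.math.Rectangle;
    import java.util.List;

    public class AxisResolver {
        // Moves the player rectangle by (dx, dy), pushing it back out of any
        // overlapping tile, one axis at a time.
        public static void move(Rectangle player, float dx, float dy, List<Rectangle> tiles) {
            float oldX = player.x;
            player.x += dx;                      // apply the horizontal move first
            for (Rectangle tile : tiles) {
                if (player.overlaps(tile)) {     // hit something: undo only X
                    player.x = oldX;
                    break;
                }
            }
            float oldY = player.y;
            player.y += dy;                      // vertical move is resolved independently
            for (Rectangle tile : tiles) {
                if (player.overlaps(tile)) {     // undo only Y, X stays resolved
                    player.y = oldY;
                    break;
                }
            }
        }
    }

    Because each axis is reverted separately, a wall on the right no longer blocks vertical movement.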

    Read the article

  • How can I test if an oriented rectangle contains another oriented rectangle?

    - by gronzzz
    I have the following situation: to detect whether the red rectangle is inside the orange area I use this function: - (BOOL)isTile:(CGPoint)tile insideCustomAreaMin:(CGPoint)min max:(CGPoint)max { if ((tile.x < min.x) || (tile.x > max.x) || (tile.y < min.y) || (tile.y > max.y)) { NSLog(@" Object is out of custom area! "); return NO; } return YES; } But what if I need to detect whether the red tile is inside the blue rectangle? I wrote this function, which uses the world position: - (BOOL)isTileInsidePlayableArea:(CGPoint)tile { // get world positions from tiles CGPoint rt = [[CoordinateFunctions shared] worldFromTile:ccp(24, 0)]; CGPoint lb = [[CoordinateFunctions shared] worldFromTile:ccp(24, 48)]; CGPoint worldTile = [[CoordinateFunctions shared] worldFromTile:tile]; return [self isTile:worldTile insideCustomAreaMin:ccp(lb.x, lb.y) max:ccp(rt.x, rt.y)]; } How could I do this without converting the tiles to global positions?
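    One general way to test a point against an oriented (rotated) rectangle is to project the point onto the rectangle's two edge vectors and compare the projections with the edge lengths. A hedged sketch of that test (plain Java rather than the question's Objective-C; every name is illustrative):

    public final class OrientedRect {
        // Returns true if point (px, py) lies inside the oriented rectangle with
        // corner (ox, oy) and edge vectors u = (ux, uy) and v = (vx, vy).
        public static boolean contains(double px, double py,
                                       double ox, double oy,
                                       double ux, double uy,
                                       double vx, double vy) {
            double dx = px - ox, dy = py - oy;
            double du = dx * ux + dy * uy;        // projection onto edge u
            double dv = dx * vx + dy * vy;        // projection onto edge v
            double uLen2 = ux * ux + uy * uy;
            double vLen2 = vx * vx + vy * vy;
            return du >= 0 && du <= uLen2 && dv >= 0 && dv <= vLen2;
        }
    }

    The point is inside exactly when both projections fall between 0 and the squared length of the corresponding edge, so no conversion into a global axis-aligned space is needed.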

    Read the article

  • In a tower defense game, how should I implement buffs/debuffs?

    - by Gabe
    The question is at the very bottom. If you already understand buffs/debuffs in tower defense games, you can skip the bulk of this question and go straight to the bottom (separated by the long line). I plan on making an iPhone TD game. The fact that it's an iPhone game isn't really relevant, but I am coding in Objective-C with Cocos2D. I am relatively inexperienced in the field of game design, so I'm looking for advice from someone experienced in this field. In tower defense, there are two things relevant to my question: towers and enemies (both have their own classes/children). They each have stats like HP, damage, speed, etc. I want to add buffs/debuffs, for instance: Towers A, B and C each have 15 base damage. Tower D would be a buff tower with no damage of its own, with an AOE (area of effect) aura that gives 10% damage to all towers in range. Tower E might slow enemies in its AOE, a debuff. Stuff like that. The same could go for enemies: Enemy A is a boss with a slow aura that affects towers and reduces their base attack speed, or something along those lines. So the question is: what would be the most effective way to implement this? If it were just towers, I would just work within the tower classes, but since tower classes and enemy classes are both affected, should I make a buff class? TD games can consume quite a bit of memory with large numbers of creeps and towers, and I feel buffs would also consume quite a bit, so I'm trying to be as efficient as possible.
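    One memory-friendly option is a single small buff type shared by towers and enemies, applied and removed by aura towers as units enter and leave range, instead of duplicating buff logic in both class hierarchies. A design sketch (in Java for brevity; the idea ports to Objective-C, and every name here is illustrative):

    interface Buffable {
        Stats stats();
    }

    final class Stats {
        float damage, speed;
        Stats(float damage, float speed) { this.damage = damage; this.speed = speed; }
    }

    final class Buff {
        private final float damageMultiplier, speedMultiplier;
        private final float duration;   // seconds; <= 0 could mean "while inside the aura"
        Buff(float damageMultiplier, float speedMultiplier, float duration) {
            this.damageMultiplier = damageMultiplier;
            this.speedMultiplier = speedMultiplier;
            this.duration = duration;
        }
        void apply(Buffable target) {
            target.stats().damage *= damageMultiplier;
            target.stats().speed  *= speedMultiplier;
        }
        void remove(Buffable target) {
            target.stats().damage /= damageMultiplier;
            target.stats().speed  /= speedMultiplier;
        }
    }

    Because a buff only stores a couple of multipliers and a duration, even hundreds of active buffs cost very little memory; an aura tower just calls apply() on units entering its range and remove() on units leaving it.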

    Read the article

  • Which platform to choose for designing a game?

    - by Pramod
    I am new to game platforms and don't have any experience in game development either. I want to develop a small shooting game but have no idea where to start or which platform to use. I have some experience in Java and .NET. Can anyone help me get started? I don't mind if this question is voted down or closed, but please do help me. I've tried searching similar questions, but they assume the reader is already into game development and I can't follow the terminology. Please refer me to some books or tutorials.

    Read the article

  • How important is Programming for a Level Designer?

    - by WryGrin
    I'm currently attending school in a level design program, and I was wondering how important programming really is to being a level designer. I seem incapable of learning programming (despite my best efforts), yet tend to do very well in all the other courses: 3D modelling, story/character design, narrative and dialogue writing, environmental and conceptual design, etc. I'm wondering whether my strengths in the other areas are enough (with practice) to let me become a level designer, or whether I'm wasting my time if I can't program. I really want to be a designer, but I just can't seem to wrap my head around the "language" of programming in general (Java kicks my teeth in even with tutoring and additional work on my own).

    Read the article

  • I prefer C/C++ over Unity and other tools: is that such a big downside for a game developer?

    - by jokoon
    We have a big game project in Unity at school, with twelve of us working on it. My teacher seems convinced it's an important tool to teach students, since it lets them look from the high level down to the lower level. I can understand his view, and I'm wondering: is Unity such an important engine in game development companies? Are there a lot of companies using it because they can't afford to use something else? He talks as if Unity is a big player in game making, but I only see it fitting small indie studios that want to ship a game as fast as possible. Do you think Unity is that important in the industry? Does it endanger the value of C++ skills? It's not that I dislike Unity; it's just that I don't learn anything with it, and I prefer to make small steps with Ogre or SFML instead. We also have C++ practice exercises, but those are just theory practice, nothing more.

    Read the article

  • How to create a 3D world with 2D sprites, similar to Ragnarok Online?

    - by Romoku
    As far as I know, Ragnarok Online is a 3D game world with 2D sprites overlaid on it. I would like to use this style in a game I am making in Unity, and I would like the player to be able to select little square tiles on the terrain. There are a couple of routes I could take, such as using a bunch of cubic polygons and linking them together, or using one big map. The former approach doesn't seem to make sense if the world is not flat, since the polygons wouldn't be reused often. The goal is to break a 3D mesh down into tiles, which is hard to wrap my head around. I believe something like an interval tree or an array would be appropriate to store the rectangle grid, but how would I display a rectangle around the tile the player's mouse is over on the polygon terrain itself? Here is a screenshot. Here is a gameplay video. Here is the camera usage.

    Read the article

  • Game physics / 2D Collision detection AS3

    - by Jery
    I know there are some methods you can use, like hitTestPoint and so on, but I want to see where my movie clip collides with another movie clip. Are there any other methods I can use? Also, does anybody by any chance know a good introduction to game physics? I'm asking because I coded a small engine and pretty much the whole thing is spaghetti code, which is why I would like to know how to set something like this up properly.
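    If both movie clips can be approximated by axis-aligned boxes, the intersection of the two boxes tells you roughly where the collision happened, not just whether it did. A hedged sketch of that overlap test (written in Java rather than ActionScript; the same arithmetic works in AS3):

    public final class Overlap {
        // Returns {x, y, width, height} of the intersection of two boxes,
        // or null if they do not overlap.
        public static float[] intersection(float ax, float ay, float aw, float ah,
                                           float bx, float by, float bw, float bh) {
            float left   = Math.max(ax, bx);
            float top    = Math.max(ay, by);
            float right  = Math.min(ax + aw, bx + bw);
            float bottom = Math.min(ay + ah, by + bh);
            if (right <= left || bottom <= top) return null;
            return new float[] { left, top, right - left, bottom - top };
        }
    }

    For pixel-accurate results you could then compare only the pixels that fall inside that intersection rectangle.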

    Read the article

  • How Do I Search For Struct Items In A Vector? [migrated]

    - by Vladimir Marenus
    I'm attempting to create an inventory system using a vector implementation, but I seem to be having some troubles. I'm running into issues using a struct I made. NOTE: This isn't actually in a game code, this is a separate Solution I am using to test my knowledge of vectors and structs! struct aItem { string itemName; int damage; }; int main() { aItem healingPotion; healingPotion.itemName = "Healing Potion"; healingPotion.damage= 6; aItem fireballPotion; fireballPotion.itemName = "Potion of Fiery Balls"; fireballPotion.damage = -2; vector<aItem> inventory; inventory.push_back(healingPotion); inventory.push_back(healingPotion); inventory.push_back(healingPotion); inventory.push_back(fireballPotion); if(find(inventory.begin(), inventory.end(), fireballPotion) != inventory.end()) { cout << "Found"; } system("PAUSE"); return 0; } The preceeding code gives me the following error: 1c:\program files (x86)\microsoft visual studio 11.0\vc\include\xutility(3186): error C2678: binary '==' : no operator found which takes a left-hand operand of type 'aItem' (or there is no acceptable conversion) There is more to the error, if you need it please let me know. I bet it's something small and silly, but I've been thumping at it for over two hours. Thanks in advance!

    Read the article

  • Convert rotation from a right-handed system to a left-handed system

    - by Hector Llanos
    I have Euler angles from a right handed system that I am trying to convert to a left handed system. All the information that I have read online says that to convert it simply multiply the axis and the angle in the correct order and it should work. In other words, Z * Y * X. When I do this what I see in Maya, and in engine still do not match up. This is what I have so far: static Quaternion ConvertToRightHand(Vector3 Euler) { Quaternion x = Quaternion.AngleAxis(-Euler.x, Vector3.right); Quaternion y = Quaternion.AngleAxis(Euler.y, Vector3.up); Quaternion z = Quaternion.AngleAxis(Euler.z, Vector3.forward); return (z * y * x); } Keeping the -Euler.x helps keep the object pointing up correctly, but when I pass ( 0,0,0) to face in the -z, it faces in the +z. Help :/

    Read the article

  • Any good C++ Component/Entity frameworks?

    - by Pat
    (Skip to the bold text if you want to get straight to my question :) ) I've been dabbling in the different technologies available out there. I tried Unity and component-based design, managing to get a little guy up and running around a map with basic pathfinding. I really loved how easy it was to program using components, but I wanted a bit more control and something more 2D-friendly, so I went with LibGDX. I looked around and found two good frameworks for Java, Artemis and Apollo. I didn't like Artemis much, so I went with Apollo, which I loved. I managed to integrate it with Box2D and get a little guy running around bouncing balls. Great! But since I want to try out most of the options, there is still C++/SFML, which I haven't tried yet. Coming from a Java/C# background, I've always wanted to get my hands dirty with C++. But after some looking around, I noticed there aren't any component-based frameworks for me to use. There's a partially finished port of Artemis, but, aside from being incomplete, I didn't much like Artemis even in Java; I found Apollo's approach much more logical. So, my question is: are there any good component/entity frameworks for C++ that are similar to Artemis or, preferably, Apollo?

    Read the article

  • Java - 2d Array Tile Map Collision

    - by Corey
    How would I go about making certain tiles in my array collide with my player? Like say I want every number 2 in the array to collide. I am reading my array from a txt file if that matters and I am using the slick2d library. Here is my code if needed. public class Tiles { Image[] tiles = new Image[3]; int[][] map = new int[500][500]; Image grass, dirt, mound; SpriteSheet tileSheet; int tileWidth = 32; int tileHeight = 32; public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.txt")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); y=y+1; } //System.out.println(""); x=x+1; y = 0; } in.close(); } public void update() { } public void render(GameContainer gc) { for(int x = 0; x < 50; x++) { for(int y = 0; y < 50; y ++) { int textureIndex = map[y][x]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } } I tried something like this, but I it doesn't ever "collide". X and y are my player position. if (tiles.map[(int)x/32][(int)y/32] == 2) { System.out.println("Collided"); }
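    One thing worth checking in the snippet above: the map is filled as map[row][column] (row = line of map.txt), while the collision test indexes it as map[x/32][y/32], so the axes are swapped. A hedged sketch of a bounds-checked lookup (illustrative names, not part of the question's code):

    public final class TileCollision {
        // Returns true if the tile under world position (x, y) has id 2.
        // The map from the question is filled as map[row][column].
        public static boolean isSolid(int[][] map, float x, float y,
                                      int tileWidth, int tileHeight) {
            int col = (int) (x / tileWidth);
            int row = (int) (y / tileHeight);
            if (row < 0 || row >= map.length || col < 0 || col >= map[row].length) {
                return true;                  // treat out-of-bounds as solid
            }
            return map[row][col] == 2;
        }
    }

    Calling a check like this for the position the player is about to move into, before applying the move, keeps the player from ever entering a tile with id 2.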

    Read the article

  • multipass shadow mapping renderer in XNA

    - by Nick
    I am wanting to implement a multipass renderer in XNA (additive blending combines the contributions from each light). I have the renderer working without any shadows, but when I try to add shadow mapping support I run into an issue with switching render targets to draw the shadow maps. When I switch render targets, I lose the contents of the backbuffer which ruins the whole additive blending idea. For example: Draw() { DrawAmbientLighting() foreach (DirectionalLight) { DrawDirectionalShadowMap() // <-- I lose all previous lighting contributions when I switch to the shadow map render target here DrawDirectionalLighting() } } Is there any way around my issue? (I could render all the shadow maps first, but then I have to make and hold onto a render target for each light that casts a shadow--is this the only way?)

    Read the article

  • XNA GUI: Creating a 'scroll pane' widget

    - by Keith Myers
    I'm trying to create a simple GUI system and am currently stuck on how to implement a text area with a scrollbar; in other words, the text is too large to fit into the view area. I want to learn how to do this, so I'd rather not use an already-rolled API. I believe this could be done if the text were part of a texture, but if the game has a lot of unique dialogue, that seems expensive. I researched creating a texture on the fly and writing to it, but came up with nothing. Any suggested strategies would be appreciated. I believe it boils down to: put the text in a texture, but how? Or something I have not thought of...

    Read the article

  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I am working on optimizing the performance of the physics and collision, and as of now I am having some slowdowns on my other computers from my main. I have well over 3000 btSphereShape Rigidbodies and 2/3 of them do not move at all, but I am noticing(by the profile below) that collision is taking a bit of time to maneuver. Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 10.09 0.65 0.65 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 7.61 1.14 0.49 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.59 1.50 0.36 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 5.43 1.85 0.35 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 4.97 2.17 0.32 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 4.19 2.44 0.27 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 4.04 2.70 0.26 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 3.73 2.94 0.24 Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool) 3.42 3.16 0.22 btTriangleShape::getVertex(int, btVector3&) const 2.48 3.32 0.16 Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const 2.33 3.47 0.15 1246357 0.00 0.00 Gorilla::Layer::setVisible(bool) 2.33 3.62 0.15 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.86 3.74 0.12 btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*) 1.86 3.86 0.12 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.71 3.97 0.11 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.55 4.07 0.10 _Unwind_SjLj_Register 1.55 4.17 0.10 _Unwind_SjLj_Unregister 1.55 4.27 0.10 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.40 4.36 0.09 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.40 4.45 0.09 btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float) 1.24 4.53 0.08 btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&) 1.09 4.60 0.07 408760 0.00 0.00 Living::MapHide() 1.09 4.67 0.07 btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm() 1.09 4.74 0.07 inflate_fast EDIT: Updated to show current Profile. I have only listed the functions using over 1% time from the many functions that are being used. Another thing is that each monster has a certain area that they stay in and are only active when a player is in said area. 
I was wondering if maybe there is a way to deactivate the non-active monsters from bullet(reactivating once in the area again) or maybe theres a different broadphase interface that I should use. The current BPI is btDbvtBroadphase. EDIT: Here is the Profile on the other computer(the top one is my main) Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 12.18 1.19 1.19 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 6.76 1.85 0.66 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.83 2.42 0.57 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 5.12 2.92 0.50 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 4.61 3.37 0.45 btTriangleShape::getVertex(int, btVector3&) const 4.09 3.77 0.40 _Unwind_SjLj_Register 3.48 4.11 0.34 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 2.46 4.35 0.24 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 2.15 4.56 0.21 _Unwind_SjLj_Unregister 2.15 4.77 0.21 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.84 4.95 0.18 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.64 5.11 0.16 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 1.54 5.26 0.15 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.43 5.40 0.14 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.33 5.53 0.13 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.13 5.64 0.11 btRigidBody::predictIntegratedTransform(float, btTransform&) 1.13 5.75 0.11 btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const 1.02 5.85 0.10 btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*) 1.02 5.95 0.10 btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool) Edited same as other Profile.

    Read the article

  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now Im trying to do the same in directx. I'm using the same math library and all and I'm pretty much using the alghorithm straight off. I am using right-handed, column major matrices from GLM. The light is looking (-1, -1, -1). The problem I have is twofolds; For some reason, the ground floor is causing alot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this when I disabled the ground for the depth pass, but thats a hack more than anything else The shadows are inverted compared to the shadowmap. If you squint you can see the chairs shadows should be mirrored instead. This is the first cascade shadow map, in range of the alien and the chair: I can't figure out why this is. This is the depth pass: for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++) { mShadowmap.BindDepthView(context, cascadeIndex); CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix); lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir); mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]); lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex]; farDistArr[cascadeIndex] = -farDistArr[cascadeIndex]; } CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix) { CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; } And this is the pixel shader used for the lighting stage: #define DEPTH_BIAS 0.0005 #define NUM_CASCADES 4 cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL) { float4x4 gSplitVPMatrices[NUM_CASCADES]; float4x4 gCameraViewMatrix; float4 gSplitDistances; float4 gLightColor; float4 gLightDirection; }; Texture2D 
gPositionTexture : register(TEXTURE_REGISTER_POSITION); Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE); Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL); Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH); SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH); float4 ps_main(float4 position : SV_Position) : SV_Target0 { float4 worldPos = gPositionTexture[uint2(position.xy)]; float4 diffuse = gDiffuseTexture[uint2(position.xy)]; float4 normal = gNormalTexture[uint2(position.xy)]; float4 camPos = mul(gCameraViewMatrix, worldPos); uint index = 3; if (camPos.z > gSplitDistances.x) index = 0; else if (camPos.z > gSplitDistances.y) index = 1; else if (camPos.z > gSplitDistances.z) index = 2; float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos); float viewDepth = projCoords.z - DEPTH_BIAS; projCoords.z = float(index); float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth); float angleNormal = clamp(dot(normal, gLightDirection), 0, 1); return visibilty * diffuse * angleNormal * gLightColor; } As you can see I am using depth bias and a bias matrix. Any hints on why this behaves so wierdly?

    Read the article

  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class(a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. This bug happens when I'm looking through a hill from the map. Here is a picture of what happens: I was wondering if this is a common mistake for starters and if any of you ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method: public override void Draw() { Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState); Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D)); if (cam == null) throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera."); float triangleCount = indices.Length / 3f; basicEffect.Begin(); basicEffect.World = worldMatrix; basicEffect.View = cam.ViewMatrix; basicEffect.Projection = cam.ProjectionMatrix; basicEffect.VertexColorEnabled = true; Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration( Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements); foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Begin(); Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0, VertexPositionColor.SizeInBytes); Parent.Engine.GraphicsDevice.Indices = indexBuffer; Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, (int)triangleCount); pass.End(); } basicEffect.End(); Parent.Engine.SpriteBatch.End(); } Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code(like the initialization code), then just leave a comment and I will.

    Read the article

  • Cannot compute wNear and wFar from projection matrix

    - by DeadMG
    I've got the following error from Direct3D when attempting to render in 3D: Direct3D9: (WARN) :Cannot compute WNear and WFar from the supplied projection matrix Direct3D9: (WARN) :Setting wNear to 0.0 and wFar to 1.0 My projection matrix is as follows: D3DXMatrixPerspectiveFovLH( &Projection, D3DXToRadian(90), (float)GetDimensions().x / (float)GetDimensions().y, NearPlane, FarPlane ); D3DCALL(device->SetTransform( D3DTS_PROJECTION, &Projection )); The NearPlane is 0.1f, the FarPlane is 40.0f, and the dimensions are 1920x1018. This code was working earlier but I appear to have broken it, and I'm not sure where the fault is. Previously I've only encountered it if NearPlane was 0, and Google hasn't suggested any other causes either. Any suggestions?

    Read the article

  • Is it important for reflection-based serialization to maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When finishing my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) impose a specific field ordering (such as alphabetical), and if so, how?
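    Since getFields()/getDeclaredFields() make no ordering guarantee (and the order can differ between JVM versions or after recompilation), a wire format built by reflection usually should impose its own deterministic order. A minimal sketch, assuming alphabetical ordering by field name:

    import java.lang.reflect.Field;
    import java.util.Arrays;
    import java.util.Comparator;

    public final class FieldOrder {
        // Returns the declared fields of a class in a deterministic (alphabetical) order.
        public static Field[] orderedDataFields(Class<?> type) {
            Field[] fields = type.getDeclaredFields();
            Arrays.sort(fields, Comparator.comparing(Field::getName));
            return fields;
        }
    }

    The packet builder would then filter this ordered array for fields carrying the @data annotation and serialize them in that fixed order, so both ends of the connection agree on the layout.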

    Read the article

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
    I have just spent my first half hour playing with the shader language. I have modified my basic program, which renders the texture, to let me colour the texture. varying vec2 texCoord; uniform sampler2D texSampler; /* Given the texture coordinates, our pixel shader grabs the corresponding * color from the texture. */ void main() { //gl_FragColor = texture2D(texSampler, texCoord); gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).xyz,1); } I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel, which would explain why previously transparent areas now appear totally black. If I use the following line instead, I can see the transparent areas: gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).aaa,1); How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminous, but my tweaks with those and the .aaa property are not working XD

    Read the article

  • Increase animation speed according to the swipe speed in Unity for Android

    - by rohit
    I have the animation done through Maya and brought the FBX file to unity. Here is my code to calculate the speed of the swipe: Vector2 speedMeasuredInScreenWidthsPerSecond =(Input.touches[0].deltaPosition / Screen.width) * Input.touches[0].deltaTime; Now I wanted to take speedMeasuredInScreenWidthsPerSecond and use it to increase the animation speed accordingly like this: animation["gmeChaAnimMiddle"].speed=Mathf.Round(speedMeasuredInScreenWidthsPerSecond); However, this results in an error that I need to convert Vector2 to float. So how do I overcome it?

    Read the article

  • How should an object that uses composition set its composed components?

    - by Casey
    After struggling with various problems and reading up on component-based systems and reading Bob Nystrom's excellent book "Game Programming Patterns" and in particular the chapter on Components I determined that this is a horrible idea: //Class intended to be inherited by all objects. Engine uses Objects exclusively. class Object : public IUpdatable, public IDrawable { public: Object(); Object(const Object& other); Object& operator=(const Object& rhs); virtual ~Object() =0; virtual void SetBody(const RigidBodyDef& body); virtual const RigidBody* GetBody() const; virtual RigidBody* GetBody(); //Inherited from IUpdatable virtual void Update(double deltaTime); //Inherited from IDrawable virtual void Draw(BITMAP* dest); protected: private: }; I'm attempting to refactor it into a more manageable system. Mr. Nystrom uses the constructor to set the individual components; CHANGING these components at run-time is impossible. It's intended to be derived and be used in derivative classes or factory methods where their constructors do not change at run-time. i.e. his Bjorne object is just a call to a factory method with a specific call to the GameObject constructor. Is this a good idea? Should the object have a default constructor and setters to facilitate run-time changes or no default constructor without setters and instead use a factory method? Given: class Object { public: //...See below for constructor implementation concerns. Object(const Object& other); Object& operator=(const Object& rhs); virtual ~Object() =0; //See below for Setter concerns IUpdatable* GetUpdater(); IDrawable* GetRenderer(); protected: IUpdatable* _updater; IDrawable* _renderer; private: }; Should the components be read-only and passed in to the constructor via: class Object { public: //No default constructor. Object(IUpdatable* updater, IDrawable* renderer); //...remainder is same as above... }; or Should a default constructor be provided and then the components can be set at run-time? class Object { public: Object(); //... SetUpdater(IUpdater* updater); SetRenderer(IDrawable* renderer); //...remainder is same as above... }; or both? class Object { public: Object(); Object(IUpdater* updater, IDrawable* renderer); //... SetUpdater(IUpdater* updater); SetRenderer(IDrawable* renderer); //...remainder is same as above... };
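    Both options can coexist: require the components in the constructor so the object is never half-built, and expose setters only if components genuinely need to be swapped at run time. A hedged sketch of that middle ground (in Java for brevity; the trade-off is the same in C++, and all names are illustrative):

    final class GameObject {
        private Updater updater;
        private Renderer renderer;
        GameObject(Updater updater, Renderer renderer) {    // required at creation
            this.updater = updater;
            this.renderer = renderer;
        }
        void setUpdater(Updater updater) { this.updater = updater; }     // run-time swap
        void setRenderer(Renderer renderer) { this.renderer = renderer; }
        void update(double dt) { updater.update(this, dt); }
        void draw() { renderer.draw(this); }
    }

    interface Updater  { void update(GameObject owner, double dt); }
    interface Renderer { void draw(GameObject owner); }

    A factory method can still wrap the constructor for fixed archetypes, like the Bjorne example mentioned above, while the setters stay available for objects whose behaviour changes at run time.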

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map-maker tool, all in Java. The problem is that recently I've been having memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and such). I've already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than without RLE). Whenever I change a tile, that whole row would have to be re-checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Worst case, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the tile data is a 2D array of HashMaps that use the layer as the key and store the id of the tile at that position - like this: private HashMap< Integer, Integer [][]
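    Run-length encoding does not have to be all-or-nothing: if each 128-tile row is stored as its own list of runs, editing one tile only means re-encoding that single row, which is cheap enough to do on every change. A minimal sketch of per-row encoding (illustrative, not tied to the question's map classes):

    import java.util.ArrayList;
    import java.util.List;

    public final class RowRle {
        // Encodes one row of tile ids as (count, id) pairs.
        public static List<int[]> encode(int[] row) {
            List<int[]> runs = new ArrayList<>();
            int i = 0;
            while (i < row.length) {
                int id = row[i], count = 1;
                while (i + count < row.length && row[i + count] == id) count++;
                runs.add(new int[] { count, id });
                i += count;
            }
            return runs;
        }
    }

    Decoding is the mirror image, and a row that is mostly one tile id collapses to a handful of (count, id) pairs, which is where the memory saving comes from.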

    Read the article

  • Level creation help

    - by Brandon oubiub
    I am making a little 2D overhead RPG-type game just for fun. I have almost all the basic stuff set up, but I need a little help with level creation. I can already build a level by placing each tile how I want it, but placing every tile by hand gets tedious after a while. I've noticed that a lot of games, even extremely simple ones, have LOTS of levels with LOTS of tiles in each; creating all of that this way would take forever. So I guess my question is: as a game developer, am I supposed to do all that by hand, or should I make a little level editor so I can see things as I create them? What do game developers do? I'm using Java. EDIT: Okay, say I had an image for a map, made in MS Paint or Photoshop, where each pixel represents a tile value. Could I somehow detect in Java what color an individual pixel is? If so, that would be perfect. How?
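    Yes: javax.imageio.ImageIO can load the image and BufferedImage.getRGB(x, y) returns each pixel's colour, so a paint program effectively becomes the level editor. A minimal sketch (the file path and the colour-to-tile mapping are made up for illustration):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public final class ImageMapLoader {
        // Reads a map image and turns each pixel's colour into a tile id.
        public static int[][] load(String path) throws IOException {
            BufferedImage img = ImageIO.read(new File(path));
            int[][] tiles = new int[img.getHeight()][img.getWidth()];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y) & 0xFFFFFF;     // drop the alpha byte
                    if (rgb == 0x00FF00)      tiles[y][x] = 1; // green  -> grass (example)
                    else if (rgb == 0x8B4513) tiles[y][x] = 2; // brown  -> dirt  (example)
                    else                      tiles[y][x] = 0; // anything else -> empty
                }
            }
            return tiles;
        }
    }

    Each distinct colour in the source image then maps to one tile id, and the resulting array can be fed straight into the existing tile renderer.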

    Read the article

  • How does an engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine), game objects are divided into two types: the world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (in the Source engine). Every entity has a think function, which performs all the logic for that entity. So, if everything that needs to be processed derives from a base class with a think function, the game engine could store everything in a list and, every frame, loop through it and call that function. At first glance this idea seems reasonable, but it could take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) its game objects?
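    A rough sketch of the "everything has a think function" idea (in Java here, purely for illustration): the engine keeps a list of entities, and each entity reports when it next needs to think, so idle entities cost almost nothing per frame.

    import java.util.ArrayList;
    import java.util.List;

    interface Entity {
        double nextThinkTime();              // absolute game time of the next think
        void think(double now);
    }

    final class World {
        private final List<Entity> entities = new ArrayList<>();
        void add(Entity e) { entities.add(e); }
        // Called once per frame with the current game time.
        void update(double now) {
            for (Entity e : entities) {
                if (now >= e.nextThinkTime()) {
                    e.think(now);
                }
            }
        }
    }

    Engines in the Quake lineage also let entities schedule their next think time (so most entities skip most frames) and put far-away or dormant entities to sleep entirely, which keeps the per-frame cost proportional to the number of active entities rather than the total count.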

    Read the article
