Search Results

Search found 37616 results on 1505 pages for 'model driven development'.


  • Avoid if statements in DirectX 10 shaders?

    - by PolGraphic
    I have heard that if statements should be avoided in shaders, because both branches of the statement will be executed and then the wrong one will be discarded (which harms performance). Is this still a problem in DirectX 10? Somebody told me that in it only the correct branch will be executed. For illustration I have the code: float y1 = 5; float y2 = 6; float b1 = 2; float b2 = 3; if(x>0.5){ x = 10 * y1 + b1; }else{ x = 10 * y2 + b2; } Is there another way to make it faster? If so, how do I do it? Both branches look similar; the only difference is the values of the "constants" (y1, y2, b1, b2 are the same for all pixels in the pixel shader).
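
    A branch-free rewrite of the snippet above is possible because the condition can become a 0/1 selector that blends the two results; since y1, y2, b1, b2 are uniform across pixels, this costs only a few multiply-adds. A minimal sketch of the idea in C++ (on the GPU, the step and lerp intrinsics express the same thing without branching):

      // 1 when v > edge, else 0; maps to the step() intrinsic in HLSL.
      float select_step(float edge, float v) { return v > edge ? 1.0f : 0.0f; }

      // Blend the two candidate results instead of branching.
      // HLSL equivalent: x = 10.0 * lerp(y2, y1, s) + lerp(b2, b1, s);
      float shade(float x, float y1, float y2, float b1, float b2) {
          float s = select_step(0.5f, x);
          float y = y2 + s * (y1 - y2);  // lerp(y2, y1, s)
          float b = b2 + s * (b1 - b2);  // lerp(b2, b1, s)
          return 10.0f * y + b;
      }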

  • Convert image to spritesheet of tiles for isometric map?

    - by Paul
    Is there a way to convert an isometric image (like the first image) to a spritesheet of tiles (like the second image), in order to place each image on the isometric map in code? The map looks like the first image, but some buildings are bigger than just one tile, so I need several squares (let's say the first image is a building made of multiple tiles with different colors), and each square is placed with an offset of 64x32. The building is created in Blender and I save the image with the isometric perspective, but I have to split each square from this image in order to have the spritesheet. Maybe there is a smarter way, or Java software that would do the conversion for me?
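
    For the placement side, the usual diamond-map arithmetic with 64x32 tiles is sketched below (names are hypothetical); a multi-tile building rendered to one image can then be sliced by walking its footprint tile by tile and copying one 64-pixel-wide column per tile:

      // Screen position of tile (tx, ty) on a diamond isometric map with
      // 64x32 px tiles, relative to the map origin.
      struct ScreenPos { int x, y; };

      ScreenPos TileToScreen(int tx, int ty) {
          const int halfW = 32; // half of the 64 px tile width
          const int halfH = 16; // half of the 32 px tile height
          return { (tx - ty) * halfW, (tx + ty) * halfH };
      }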

  • How to position a sprite in a 2D animation skeleton?

    - by Paul Manta
    Given two joints that define a bone, I would like to know how to decide where, between those two joints, I should draw the sprite. This should be a fairly simple thing to solve, but there is one thing I am not sure about. After I've determined the rotation of the sprite (which is the absolute angle the joints form with the x-axis), I also need to determine the origin point from where I start drawing the transformed image. So how should I position the sprite between the two joints? Should I make the center of the image be the midpoint between the two joints, or should I make one of the joints be the origin? Do these things matter that much (could the wrong positioning make the sprite move oddly during the animation)?
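
    One workable convention, sketched below with hypothetical types: rotate by the joint-to-joint angle and center the sprite on the bone's midpoint, so the image stays glued to the bone however the joints move.

      #include <cmath>

      struct Vec2 { float x, y; };

      struct SpriteTransform {
          Vec2  origin;   // point at which to draw the sprite's center
          float rotation; // radians, relative to the x-axis
      };

      // Place a sprite on the bone defined by joints a and b.
      SpriteTransform BoneTransform(Vec2 a, Vec2 b) {
          SpriteTransform t;
          t.rotation = std::atan2(b.y - a.y, b.x - a.x);           // absolute bone angle
          t.origin   = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; // bone midpoint
          return t;
      }

    Anchoring on one joint also works, but then the sprite's pivot must sit exactly on that joint in the source image, or the sprite will sweep an arc as the bone rotates.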

  • Electronic circuit simulator four-way flood-filling issues

    - by AJ Weeks
    I've made an electronic circuit board simulator which has just 3 types of tiles: wires, power sources, and inverters. Wires connect to anything they touch, other than the sides of inverters; inverters have one input side and one output side; and finally power tiles connect in a similar manner to wires. In the case of an infinite loop, caused by the output of an inverter feeding into its input, I want inverters to oscillate (quickly turn on/off). I've attempted to implement a flood-fill algorithm to spread the power throughout the grid, but I seem to have gotten something wrong, as only the tiles above the power source get powered (as seen below). I've attempted to debug the program, but have had no luck thus far. My code concerning the updating of power can be seen here.
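
    For comparison, here is a minimal breadth-first flood fill over a tile grid (the grid layout is an assumption); the four-way spread comes from every powered tile enqueuing all four neighbours, not just the one above:

      #include <queue>
      #include <utility>
      #include <vector>

      // Spread power from source (sx, sy) to every reachable wire tile.
      // isWire and powered are row-major grids of width w and height h.
      void FloodPower(int sx, int sy, int w, int h,
                      const std::vector<bool>& isWire, std::vector<bool>& powered) {
          const int dx[4] = { 1, -1, 0, 0 };
          const int dy[4] = { 0, 0, 1, -1 };
          std::queue<std::pair<int, int>> open;
          open.push({ sx, sy });
          powered[sy * w + sx] = true;
          while (!open.empty()) {
              auto [x, y] = open.front();
              open.pop();
              for (int i = 0; i < 4; ++i) {          // visit all four sides
                  int nx = x + dx[i], ny = y + dy[i];
                  if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                  int n = ny * w + nx;
                  if (!isWire[n] || powered[n]) continue;
                  powered[n] = true;                 // mark, then keep spreading
                  open.push({ nx, ny });
              }
          }
      }

    Only the tiles above the source getting powered usually points at a single neighbour direction being enqueued, or at the loop breaking out after the first neighbour.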

  • How to make a ball fall faster on a ramp?

    - by Timothy Williams
    So, I'm making a ball game where you pick up the ball, drop it on a ramp, and it flies off into blocks. The only problem right now is that the ball falls at a normal speed and then rolls off gently, not nearly fast enough to get over the wall and hit the blocks. Is there any way to make the ball go faster down the ramp? Maybe even make it go faster depending on the height you dropped it from (e.g. if you hold it way above the ramp and drop it, it will drop faster than if you dropped it right above the ramp).
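
    The height-dependent part falls out of free-fall physics by itself; if the ball's release speed is set by hand, it can be seeded from the drop height. A sketch of the math:

      #include <cmath>

      // Speed gained by falling from rest through height h (metres): v = sqrt(2gh).
      float ImpactSpeed(float h, float g = 9.81f) {
          return std::sqrt(2.0f * g * h);
      }
      // e.g. a 2 m drop arrives at ~6.3 m/s, a 0.5 m drop at ~3.1 m/s.

    If the ball merely looks sluggish overall, increasing the gravity setting of the physics world is the usual first knob to turn.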

  • Creating a retro-style palette swapping effect in OpenGL

    - by Zack The Human
    I'm working on a Megaman-like game where I need to change the color of certain pixels at runtime. For reference: in Megaman, when you change your selected weapon, the main character's palette changes to reflect the selected weapon. Not all of the sprite's colors change, only certain ones do. This kind of effect was common and quite easy to do on the NES, since the programmer had access to the palette and the logical mapping between pixels and palette indices. On modern hardware, though, this is a bit more challenging because the concept of palettes is not the same. All of my textures are 32-bit and do not use palettes. There are two ways I know of to achieve the effect I want, but I'm curious whether there are better ways to achieve this effect easily. The two options I know of are: Use a shader and write some GLSL to perform the "palette swapping" behavior. If shaders are not available (say, because the graphics card doesn't support them), clone the "original" textures and generate different versions with the color changes pre-applied. Ideally I would like to use a shader, since it seems straightforward and requires little additional work compared to the duplicated-texture method. I worry that duplicating textures just to change a color in them is wasting VRAM -- should I not worry about that?
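
    For the fallback path (the second option above), the pre-applied variants can be generated once at load time from the original pixels; a sketch, assuming 32-bit RGBA pixel data and an exact-match color map:

      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      // Build a recolored copy of a texture: every pixel whose packed RGBA
      // value appears as a key in 'swap' is replaced by the mapped value.
      std::vector<uint32_t> SwapPalette(
          const std::vector<uint32_t>& pixels,
          const std::unordered_map<uint32_t, uint32_t>& swap) {
          std::vector<uint32_t> out(pixels.size());
          for (size_t i = 0; i < pixels.size(); ++i) {
              auto it = swap.find(pixels[i]);
              out[i] = (it != swap.end()) ? it->second : pixels[i]; // others pass through
          }
          return out;
      }

    The shader route replaces this lookup with a small palette texture indexed per pixel, which avoids duplicating whole textures in VRAM.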

  • Problem with drawing textures in OpenGL ES

    - by droidmachine
    I'm developing a 2D game for Android and I'm using the framework described in the book Beginning Android Games by Mario Zechner. My framework is well designed and uses OpenGL 1.1; it's similar to libgdx. When I put my textures adjacent to each other on my 2D surface, there are 1 px gaps between them, but this problem only occurs on my tablet; there is no such problem on my phone. It looks like this picture: What could be the problem? I haven't been able to fix it for a week.
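
    A frequent cause of device-dependent 1 px seams is bilinear filtering (or coordinate rounding) sampling just past a tile's edge in the texture atlas; one common mitigation, sketched here, is to inset each tile's UV rectangle by half a texel:

      // UV rectangle for a tile at pixel (px, py), size pw x ph, inside an
      // atlas of atlasW x atlasH pixels, inset by half a texel on each side
      // so filtering never reads the neighbouring tile.
      struct UVRect { float u0, v0, u1, v1; };

      UVRect TileUVs(int px, int py, int pw, int ph, int atlasW, int atlasH) {
          const float halfU = 0.5f / atlasW;
          const float halfV = 0.5f / atlasH;
          return { px / (float)atlasW + halfU,
                   py / (float)atlasH + halfV,
                   (px + pw) / (float)atlasW - halfU,
                   (py + ph) / (float)atlasH - halfV };
      }

    Snapping sprite positions to whole pixels at the tablet's resolution is the other usual suspect worth checking.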

  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class(a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. This bug happens when I'm looking through a hill from the map. Here is a picture of what happens: I was wondering if this is a common mistake for starters and if any of you ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method: public override void Draw() { Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState); Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D)); if (cam == null) throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera."); float triangleCount = indices.Length / 3f; basicEffect.Begin(); basicEffect.World = worldMatrix; basicEffect.View = cam.ViewMatrix; basicEffect.Projection = cam.ProjectionMatrix; basicEffect.VertexColorEnabled = true; Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration( Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements); foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Begin(); Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0, VertexPositionColor.SizeInBytes); Parent.Engine.GraphicsDevice.Indices = indexBuffer; Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, (int)triangleCount); pass.End(); } basicEffect.End(); Parent.Engine.SpriteBatch.End(); } Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code(like the initialization code), then just leave a comment and I will.

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    I've read lots of articles about Data Oriented Design (DOD) and I understand it, but I can't design an Object Oriented Programming (OOP) system with DOD in mind; I think my OOP education is blocking me. How should I think in order to mix the two? The objective is to have a nice OOP interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
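
    One common shape for this is to keep the data in flat per-field arrays owned by a system and hand out lightweight handles, so callers still see an object-style interface; a minimal sketch with hypothetical names:

      #include <cstddef>
      #include <vector>

      // Data-oriented storage: one contiguous array per field (SoA).
      class ParticleSystem {
      public:
          std::size_t Create(float x, float y) {
              xs.push_back(x); ys.push_back(y);
              vxs.push_back(0.0f); vys.push_back(0.0f);
              return xs.size() - 1;
          }
          void Update(float dt) {                      // hot loop, cache-friendly
              for (std::size_t i = 0; i < xs.size(); ++i) {
                  xs[i] += vxs[i] * dt;
                  ys[i] += vys[i] * dt;
              }
          }
          float& X(std::size_t id) { return xs[id]; }  // field access by handle
          float& Y(std::size_t id) { return ys[id]; }
      private:
          std::vector<float> xs, ys, vxs, vys;
      };

      // OOP-looking facade: a thin handle that owns no data of its own.
      class Particle {
      public:
          Particle(ParticleSystem& s, std::size_t id) : sys(s), id(id) {}
          float X() const { return sys.X(id); }
          void  MoveTo(float x, float y) { sys.X(id) = x; sys.Y(id) = y; }
      private:
          ParticleSystem& sys;
          std::size_t id;
      };

    The hot loops iterate the arrays directly and never see the facade, so the cache-friendly layout is preserved while user code keeps its object syntax.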

  • Strange mesh import problem with Assimp and OpenGL

    - by Morgan
    Using the assimp library for importing 3D data into an OpenGL application. I get some strange problems regarding indexing of the vertices: If I use the following code for importing vertex indices: for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace * face = &mesh->mFaces[t]; if (face->mNumIndices == 3) { indices->push_back(face->mIndices[0]); indices->push_back(face->mIndices[1]); indices->push_back(face->mIndices[2]); } } I get the following result: Instead, if I use the following code: for(int k = 0; k < 2 ; k++) { for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace * face = &mesh->mFaces[t]; if (face->mNumIndices == 3) { indices->push_back(face->mIndices[0]); indices->push_back(face->mIndices[1]); indices->push_back(face->mIndices[2]); } } } I get the correct result: Hence adding the indices twice, renders the correct result? The OpenGL buffer is populated, like so: glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size() * sizeof(unsigned int), indices->data(), GL_STATIC_DRAW); And rendered as follows: glDrawElements(GL_TRIANGLES, vertexCount*3, GL_UNSIGNED_INT, indices->data());
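
    One detail worth double-checking in the draw call above: once an index buffer is bound to GL_ELEMENT_ARRAY_BUFFER, the last argument of glDrawElements is a byte offset into that buffer, not a client-side pointer, and the count is the number of indices. That would also explain why duplicating the indices appears to help: the call asks for vertexCount * 3 indices, which may be more than a single copy provides. A hedged sketch of the buffered path, reusing the variables above:

      // With indexBuffer bound, the final parameter is an offset (0 here),
      // not indices->data(), and the count comes from the index list itself.
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
      glDrawElements(GL_TRIANGLES,
                     (GLsizei)indices->size(), // number of indices, not vertexCount * 3
                     GL_UNSIGNED_INT,
                     (const void*)0);          // byte offset into the bound buffer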

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and the sides remain roughly flat). These cubes are connected by their sides. The connected sides are traversable (doors or grids can be placed on these sides), while the unconnected sides are untraversable walls. So the game is played inside this complex. Descent was software-rendered and it had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, but these levels are still rendered reasonably fast. So I think they tried to minimize the number of cubes rendered somehow. How did they choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
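
    The classic portal traversal for such a segment graph looks like the sketch below (types are hypothetical): start in the cube containing the camera and recurse through connected sides, shrinking the view volume to each portal as you go, so distant segments are never visited:

      #include <vector>

      struct Frustum;                  // clipped view volume, details omitted
      struct Side { int neighbour; };  // -1 = solid wall, else adjacent cube index
      struct Cube { Side sides[6]; };

      void DrawCube(int cube);                                       // user-provided
      bool ClipFrustumToPortal(const Frustum& view, int cube,
                               const Side& side, Frustum& out);      // user-provided

      // Depth-first portal traversal: each cube is visited at most once per
      // frame, and recursion stops where a portal leaves the view volume.
      void RenderVisible(const std::vector<Cube>& cubes, int cube,
                         const Frustum& view, std::vector<bool>& seen) {
          if (seen[cube]) return;
          seen[cube] = true;
          DrawCube(cube);
          for (const Side& s : cubes[cube].sides) {
              if (s.neighbour < 0) continue;    // solid wall, nothing behind it
              Frustum clipped;
              if (!ClipFrustumToPortal(view, cube, s, clipped)) continue;
              RenderVisible(cubes, s.neighbour, clipped, seen);
          }
      }

    Whether this matches Descent's exact implementation I can't confirm, but it exploits exactly the property noted above: convexity guarantees a segment's own walls never occlude each other from a viewpoint inside it.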

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask. There are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works - not perfectly, but it only requires some tuning - however it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
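
    Since the grid is only 6x6 and the result does not need to be exact on every frame, one engine-agnostic option is to amortize the work: cast only a fixed budget of rays per frame, cycling through the grid, and let the transparency flags persist between updates. A sketch with hypothetical hooks:

      // Spread the visibility raycasts over several frames: with a budget of
      // 9 rays per frame, a 36-point grid refreshes fully every 4 frames.
      struct VisibilityGrid {
          static const int N = 36;  // 6x6 visibility points
          int cursor = 0;

          bool InPlayerLOS(int point);               // user-provided, cheap
          void CastCameraRayAndMarkWalls(int point); // user-provided, expensive

          void Update(int raysPerFrame) {
              for (int i = 0; i < raysPerFrame; ++i) {
                  int point = cursor;
                  cursor = (cursor + 1) % N;
                  if (InPlayerLOS(point))
                      CastCameraRayAndMarkWalls(point);
              }
          }
      };

    A lag of a few frames in a wall turning transparent is rarely noticeable, while the per-frame raycast cost drops by whatever factor the budget dictates.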

  • Box2D networking

    - by spacevillain
    I am trying to make a simple sync between two Box2D rooms, where you can drag boxes using the mouse. Every time a player clicks (and holds the mouse down on) a box, I send the joint parameters to the server, and the server sends them to the other clients. When mouseup occurs, I send a command to delete the joint. The problem is that the sync breaks too often. Is my approach radically wrong, or does it just need some tweaks? http://www.youtube.com/watch?v=eTN2Gwj6_Lc Source code: https://github.com/agentcooper/Box2d-networking

  • Increase animation speed according to the swipe speed in Unity for Android

    - by rohit
    I have an animation done in Maya and brought the FBX file into Unity. Here is my code to calculate the speed of the swipe: Vector2 speedMeasuredInScreenWidthsPerSecond = (Input.touches[0].deltaPosition / Screen.width) * Input.touches[0].deltaTime; Now I want to take speedMeasuredInScreenWidthsPerSecond and use it to increase the animation speed accordingly, like this: animation["gmeChaAnimMiddle"].speed = Mathf.Round(speedMeasuredInScreenWidthsPerSecond); However, this results in an error saying that I need to convert Vector2 to float. How do I overcome it?
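
    The compile error comes from assigning a Vector2 where a float is expected; collapsing the swipe delta to its length first yields the scalar the animation speed wants (in Unity that is the vector's magnitude). The underlying math, sketched in C++:

      #include <cmath>

      // Scalar swipe speed from a 2D position delta across one frame.
      // Note that a per-second rate divides by the frame time; multiplying
      // by it (as in the snippet above) gives a per-frame distance instead.
      float SwipeSpeed(float dx, float dy, float frameTime) {
          float length = std::sqrt(dx * dx + dy * dy); // |deltaPosition|
          return frameTime > 0.0f ? length / frameTime : 0.0f;
      }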

  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now I'm trying to do the same in DirectX. I'm using the same math library and all, and I'm pretty much using the algorithm straight off. I am using right-handed, column-major matrices from GLM. The light is looking along (-1, -1, -1). The problem I have is twofold: For some reason, the ground floor is causing a lot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this when I disabled the ground for the depth pass, but that's a hack more than anything else. The shadows are inverted compared to the shadow map. If you squint you can see the chair's shadows should be mirrored instead. This is the first cascade shadow map, in range of the alien and the chair: I can't figure out why this is. This is the depth pass: for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++) { mShadowmap.BindDepthView(context, cascadeIndex); CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix); lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir); mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]); lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex]; farDistArr[cascadeIndex] = -farDistArr[cascadeIndex]; } CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix) { CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; } And this is the pixel shader used for the lighting stage: #define DEPTH_BIAS 0.0005 #define NUM_CASCADES 4 cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL) { float4x4 gSplitVPMatrices[NUM_CASCADES]; float4x4 gCameraViewMatrix; float4 gSplitDistances; float4 gLightColor; float4 gLightDirection; }; Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION); Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE); Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL); Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH); SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH); float4 ps_main(float4 position : SV_Position) : SV_Target0 { float4 worldPos = gPositionTexture[uint2(position.xy)]; float4 diffuse = gDiffuseTexture[uint2(position.xy)]; float4 normal = gNormalTexture[uint2(position.xy)]; float4 camPos = mul(gCameraViewMatrix, worldPos); uint index = 3; if (camPos.z > gSplitDistances.x) index = 0; else if (camPos.z > gSplitDistances.y) index = 1; else if (camPos.z > gSplitDistances.z) index = 2; float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos); float viewDepth = projCoords.z - DEPTH_BIAS; projCoords.z = float(index); float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth); float angleNormal = clamp(dot(normal, gLightDirection), 0, 1); return visibilty * diffuse * angleNormal * gLightColor; } As you can see I am using a depth bias and a bias matrix. Any hints on why this behaves so weirdly?
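
    One porting detail worth ruling out here, given the GL-to-DX move: OpenGL clip-space z spans [-1, 1] while Direct3D's spans [0, 1], and texture v runs top-down in Direct3D, so a bias matrix carried over unchanged from the GL build both mis-scales the stored depth and flips the shadow map vertically, which would line up with the inverted shadows and the false occlusion on the floor. A hedged sketch of the D3D-style bias in column-major GLM:

      #include <glm/glm.hpp>

      // Maps D3D clip space (x, y in [-1, 1], z in [0, 1]) into shadow-map
      // texture space: u, v in [0, 1] with v flipped, z passed through.
      const glm::mat4 gBiasMatrixD3D(
          0.5f,  0.0f, 0.0f, 0.0f,   // column 0
          0.0f, -0.5f, 0.0f, 0.0f,   // column 1: negated to flip v
          0.0f,  0.0f, 1.0f, 0.0f,   // column 2: z range already [0, 1]
          0.5f,  0.5f, 0.0f, 1.0f);  // column 3: translate x, y into [0, 1]

    The up vector of (0, -1, 0) in the lookAt call is the other mirror-prone spot worth testing with (0, 1, 0).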

  • How can I test if an oriented rectangle contains another oriented rectangle?

    - by gronzzz
    I have the following situation: to detect whether the red rectangle is inside the orange area, I use this function: - (BOOL)isTile:(CGPoint)tile insideCustomAreaMin:(CGPoint)min max:(CGPoint)max { if ((tile.x < min.x) || (tile.x > max.x) || (tile.y < min.y) || (tile.y > max.y)) { NSLog(@" Object is out of custom area! "); return NO; } return YES; } But what if I need to detect whether the red tile is inside the blue rectangle? I wrote this function, which uses world positions: - (BOOL)isTileInsidePlayableArea:(CGPoint)tile { // get world positions from tiles CGPoint rt = [[CoordinateFunctions shared] worldFromTile:ccp(24, 0)]; CGPoint lb = [[CoordinateFunctions shared] worldFromTile:ccp(24, 48)]; CGPoint worldTile = [[CoordinateFunctions shared] worldFromTile:tile]; return [self isTile:worldTile insideCustomAreaMin:ccp(lb.x, lb.y) max:ccp(rt.x, rt.y)]; } How could I do this without converting the tiles to world positions?
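
    A containment test that never leaves tile space is the half-plane check: a point lies inside a convex quad exactly when it is on the same side of all four directed edges. A sketch in plain C-style math (names hypothetical):

      struct Pt { float x, y; };

      // > 0 if c lies to the left of the directed edge a -> b, < 0 if right.
      static float Cross(Pt a, Pt b, Pt c) {
          return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
      }

      // quad[0..3] given in consistent winding order (all CW or all CCW).
      bool InsideQuad(const Pt quad[4], Pt p) {
          bool neg = false, pos = false;
          for (int i = 0; i < 4; ++i) {
              float s = Cross(quad[i], quad[(i + 1) % 4], p);
              neg = neg || (s < 0.0f);
              pos = pos || (s > 0.0f);
          }
          return !(neg && pos); // mixed signs means the point is outside
      }

    The blue rectangle's corners can be written directly in tile coordinates, so no worldFromTile conversion is needed at all.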

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.
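
    That particular complaint usually appears when a draw executes with no vertex shader bound: a multi-stream declaration like the one above has no FVF equivalent, and only the fixed-function pipeline cares about that conversion. A hedged sanity check (the handles are hypothetical) is to confirm the shader bindings survive into the line path:

      // If either shader is NULL at this point, D3D9 falls back to the
      // fixed-function pipeline and tries (and fails) to express the
      // two-stream declaration as an FVF.
      D3DCALL(device->SetVertexDeclaration(LineVertexDeclaration));
      D3DCALL(device->SetVertexShader(LineVertexShader)); // must not be NULL
      D3DCALL(device->SetPixelShader(LinePixelShader));
      D3DCALL(device->DrawIndexedPrimitive(D3DPT_LINELIST, 0, 0, 2, 0, 1));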

  • Networking Client Server Packet logic (How they communicate)

    - by Trixmix
    I want to know the logic behind server-client communication through packets for a real-time game. For example, the server sends x packets, then the client receives x packets and processes them. Basically, what is the process for keeping the client and server in sync and able to receive and send packets? A more in-depth example of what I want to know: client: step 1, wait for a packet; step 2, read x packets; step 3, process x packets; step 4, send x packets; and so on... I need to know the very basic outline of the communication. The big questions are: 1) Do I send and read packets all at one time, i.e. loop through the incoming packet list and read them all, or one per server loop, or what? 2) In what order should I do things, i.e. first receive, then read, then process, then send, etc.? 3) As asked above, a step-by-step outline of what the server/client should do. Thanks!
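
    A common per-tick shape for both sides, sketched below with hypothetical names, is: drain everything that has already arrived (non-blocking), simulate one fixed step, then send one batched update, rather than stopping to wait for individual packets.

      struct Packet { /* payload omitted */ };
      struct Network { bool Poll(Packet& out); }; // non-blocking; false when empty
      Network network;
      void ApplyInput(const Packet&);   // buffer a client's input, user-provided
      void Simulate(double dt);         // advance the authoritative state
      void BroadcastSnapshot();         // one batched state packet to all clients

      // One server tick, run at a fixed rate (commonly 10-60 Hz).
      void ServerTick(double dt) {
          Packet p;
          while (network.Poll(p)) // 1) read ALL pending packets this tick
              ApplyInput(p);      // 2) process them as they are read
          Simulate(dt);           // 3) step the simulation once
          BroadcastSnapshot();    // 4) send the results in one packet
      }

    The client's tick mirrors this: drain all pending snapshots, advance/predict its local state, then send its input once. In this shape, everything available is read each loop, and the order is receive, process, simulate, send.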

  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I am working on optimizing the performance of the physics and collision, and as of now I am having some slowdowns on my other computers from my main. I have well over 3000 btSphereShape Rigidbodies and 2/3 of them do not move at all, but I am noticing(by the profile below) that collision is taking a bit of time to maneuver. Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 10.09 0.65 0.65 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 7.61 1.14 0.49 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.59 1.50 0.36 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 5.43 1.85 0.35 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 4.97 2.17 0.32 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 4.19 2.44 0.27 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 4.04 2.70 0.26 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 3.73 2.94 0.24 Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool) 3.42 3.16 0.22 btTriangleShape::getVertex(int, btVector3&) const 2.48 3.32 0.16 Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const 2.33 3.47 0.15 1246357 0.00 0.00 Gorilla::Layer::setVisible(bool) 2.33 3.62 0.15 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.86 3.74 0.12 btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*) 1.86 3.86 0.12 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.71 3.97 0.11 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.55 4.07 0.10 _Unwind_SjLj_Register 1.55 4.17 0.10 _Unwind_SjLj_Unregister 1.55 4.27 0.10 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.40 4.36 0.09 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.40 4.45 0.09 btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float) 1.24 4.53 0.08 btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&) 1.09 4.60 0.07 408760 0.00 0.00 Living::MapHide() 1.09 4.67 0.07 btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm() 1.09 4.74 0.07 inflate_fast EDIT: Updated to show current Profile. I have only listed the functions using over 1% time from the many functions that are being used. Another thing is that each monster has a certain area that they stay in and are only active when a player is in said area. 
I was wondering if maybe there is a way to deactivate the non-active monsters from bullet(reactivating once in the area again) or maybe theres a different broadphase interface that I should use. The current BPI is btDbvtBroadphase. EDIT: Here is the Profile on the other computer(the top one is my main) Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 12.18 1.19 1.19 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 6.76 1.85 0.66 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.83 2.42 0.57 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 5.12 2.92 0.50 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 4.61 3.37 0.45 btTriangleShape::getVertex(int, btVector3&) const 4.09 3.77 0.40 _Unwind_SjLj_Register 3.48 4.11 0.34 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 2.46 4.35 0.24 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 2.15 4.56 0.21 _Unwind_SjLj_Unregister 2.15 4.77 0.21 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.84 4.95 0.18 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.64 5.11 0.16 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 1.54 5.26 0.15 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.43 5.40 0.14 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.33 5.53 0.13 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.13 5.64 0.11 btRigidBody::predictIntegratedTransform(float, btTransform&) 1.13 5.75 0.11 btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const 1.02 5.85 0.10 btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*) 1.02 5.95 0.10 btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool) Edited same as other Profile.
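
    Bullet can take bodies out of the simulation without removing them from the world; setting the activation state is the usual mechanism, and it pairs naturally with the per-area activation already described. A sketch, assuming each monster holds a pointer to its btRigidBody:

      #include <btBulletDynamicsCommon.h>

      // Freeze a monster's body while no player is inside its area: the
      // dynamics world skips bodies in DISABLE_SIMULATION.
      void FreezeBody(btRigidBody* body) {
          body->setActivationState(DISABLE_SIMULATION);
      }

      // Wake the body again when a player enters the area.
      void WakeBody(btRigidBody* body) {
          body->forceActivationState(ACTIVE_TAG);
          body->activate(true);
      }

    As for the broadphase itself, btDbvtBroadphase is generally a good default for large numbers of moving objects; the profiles above are dominated by sphere-vs-triangle narrowphase work, so cutting down the number of live bodies should help more than swapping the broadphase.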

  • Cannot compute wNear and wFar from projection matrix

    - by DeadMG
    I've got the following error from Direct3D when attempting to render in 3D: Direct3D9: (WARN) :Cannot compute WNear and WFar from the supplied projection matrix Direct3D9: (WARN) :Setting wNear to 0.0 and wFar to 1.0 My projection matrix is as follows: D3DXMatrixPerspectiveFovLH( &Projection, D3DXToRadian(90), (float)GetDimensions().x / (float)GetDimensions().y, NearPlane, FarPlane ); D3DCALL(device->SetTransform( D3DTS_PROJECTION, &Projection )); The NearPlane is 0.1f, the FarPlane is 40.0f, and the dimensions are 1920x1018. This code was working earlier, but I appear to have broken it and I'm not sure where the fault is. Previously I've only encountered this warning when NearPlane was 0, and Google hasn't suggested any other causes either. Any suggestions?
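
    Since the warning means the runtime could not derive a valid W range from the matrix, it is worth asserting the inputs at the point the matrix is built; a zero or swapped near/far (or a zeroed dimension producing a degenerate aspect ratio) sneaking in at runtime produces exactly this kind of failure. A small hedged guard:

      #include <cassert>

      // Guard against degenerate projection input at runtime.
      void ValidateProjectionInput(float nearPlane, float farPlane,
                                   float width, float height) {
          assert(nearPlane > 0.0f);               // near plane of 0 breaks W derivation
          assert(farPlane > nearPlane);           // swapped planes are degenerate too
          assert(width > 0.0f && height > 0.0f);  // avoids a 0 or NaN aspect ratio
      }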

  • Vector Graphics in DirectX

    - by Doug
    I'm curious about people's thoughts on the best way to use vector graphics in a DirectX game instead of rasterized textures (think Super Meat Boy). I want to remain resolution-independent and don't want to downscale/upscale rasterized graphics. The idea would be for all assets to be vector graphics (again, think Super Meat Boy). I've looked at Valve's paper "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" and also looked at using shaders: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html. Wondering if anyone has done something similar or has an alternate approach. Cheers
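
    For reference, the Valve technique named above boils down to storing a signed distance field in the texture's alpha channel and thresholding it at magnification time, which keeps silhouettes crisp at any resolution. The per-pixel decision, sketched in C++ (the shader form is noted in a comment):

      // Edge coverage from a sampled distance value in [0, 1], where 0.5
      // represents the shape outline; smoothWidth controls anti-aliasing.
      // Shader form: smoothstep(0.5 - w, 0.5 + w, distanceSample).
      float Coverage(float distanceSample, float smoothWidth) {
          float t = (distanceSample - (0.5f - smoothWidth)) / (2.0f * smoothWidth);
          if (t < 0.0f) t = 0.0f;
          if (t > 1.0f) t = 1.0f;
          return t * t * (3.0f - 2.0f * t); // smooth Hermite ramp
      }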

  • Game programming course materials: What should it include?

    - by Esa
    I am tasked with creating the course materials for a game programming class, and I'd like your opinion on what aspects and areas of game programming, such as game state management, game object storage, or simple AI, I should include. The course is intended to be a first step into game programming for students with novice programming skills. There will be mathematics as well, but I found that there are already multiple questions, with good answers, on that subject.

  • General visual effects to meshes/entities

    - by Pacha
    I am trying to add some visual effects to some entities, meshes, or whatever you want to call them, as they are looking pretty dull in my game right now. What I want to achieve is this: http://youtu.be/zox8935PLw0?t=36s (the "texture" gets disintegrated and then goes back to normal, covering the whole mesh). I would also like to know the best way to add effects like the one in the video to my game (for example, thunder effects, shattering, etc.). I know that I can do some things with shaders, but I haven't learned them very well and I am still at a beginner level. I am using Ogre3D, and GLSL for shaders. Thanks! Note: this is a screenshot of my game; I want to apply the effect in the video to my main character:
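
    The disintegrate-and-reform effect in the clip is commonly built as a dissolve: sample a noise texture per fragment and discard fragments whose noise value falls below a threshold that animates from 0 to 1 and back. The per-fragment logic, sketched in C++ (in GLSL it is a one-line discard):

      // Dissolve test for one fragment: 'noise' comes from a noise texture,
      // 'threshold' animates 0 -> 1 to disintegrate and back to reform.
      bool FragmentVisible(float noise, float threshold) {
          return noise >= threshold; // below the threshold, discard the fragment
      }

    Adding an emissive color to fragments just above the threshold gives the glowing edge most dissolve effects have.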

  • Making game constants/tables available to game logic classes/routines in a modular manner

    - by Extrakun
    Suppose I have a game where there are several predefined constants and charts (an XP chart, costs of goods, and so on). These could be defined at runtime or loaded from files at start-up. The question is: how should the logic routines access these constants and charts? For example, I could use global variables, but that causes every class relying on them to be tightly coupled to them.
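
    A middle ground between globals and threading every value through call chains is a single read-only configuration object injected where needed; a minimal sketch with hypothetical names:

      #include <string>
      #include <unordered_map>

      // Filled once at start-up (hard-coded or loaded from a file), then
      // passed by const reference to the systems that need it.
      class GameConfig {
      public:
          void Set(const std::string& key, float value) { values[key] = value; }
          float Get(const std::string& key, float fallback = 0.0f) const {
              auto it = values.find(key);
              return it != values.end() ? it->second : fallback;
          }
      private:
          std::unordered_map<std::string, float> values;
      };

      // Logic routines depend on one narrow interface instead of globals:
      int XpForLevel(const GameConfig& cfg, int level) {
          return static_cast<int>(cfg.Get("xp_base", 100.0f) * level * level);
      }

    Classes now couple to one small interface rather than a bag of globals, and tests can build a GameConfig holding whatever values they need.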

  • Examples of interesting implementations of character stats?

    - by Tchalvak
    I've got this BBG going (http://ninjawars.net), and the character stats are currently simplistic. I'm looking to add a few stats to the current 1-2 (strength and maximum hitpoints, essentially). I've come up with: strength (unchanged), speed, stamina, and some others that are somewhat interesting wildcard stats. However, I'm not satisfied with how boring the effects of some of these stats are, because they're very linear. Better stat, better effects of the stat, but the stats don't interact with each other, there's no Rock-Paper-Scissors interaction, and having more is always better all the time. So what I'd really like is to see examples of interesting character stats or effects of stats. Examples that I can think of offhand: Call of Cthulhu's Insanity stat (things get really weird/chaotic if you start losing sanity); White Wolf stats, to a certain extent (the stats themselves have some basic effects, and all skills base their effectiveness off of stats as well). What are some other ways people have used stats that I should check out?
