Search Results

  • What program should I use for Ludum Dare?

    - by mFontoura
    I want to participate for the first time in Ludum Dare, but I'm not yet comfortable enough with any language to pick one for making a game in a weekend. So I was looking for a GameMaker-style program, just to make something for LD. I was going to go with Construct 2, but I use Linux and they don't have a Linux version. So the alternative I use is Stencyl, which is great and probably what I'm going to use. However, I wanted to know if there is something similar and better for Linux. Also, if I get a computer with Win8, is Construct 2 worth the trouble?

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV atlas tool ( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas; however, when I render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a UV atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;
            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v; vNorm = n; vUV = uv;
            }
            ~sVertexPosNormTex() {}
        };

        // create a light map texture to fill programmatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0, &pMesh,
                                &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code that fills the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh,
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];
                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }
                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0,
                        v1 = t2 - t0,
                        v2 = p - t0;
            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    (Screenshot and lightmap images were attached to the original post.) I believe the problem comes from the difference between the lightmap UV coordinates and the pixel center coordinates. For example, here are the lightmap UV coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh UV coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5 / 128 = 0.00390625;
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the UV coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
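    One common practice is to rasterize each chart slightly conservatively and then dilate chart borders, so that bilinear filtering near a seam never reads an unwritten texel. A minimal sketch of a relaxed variant of the test above, assuming D3D9-style addressing where texel centers sit at (i + 0.5) / size; the epsilon value is an assumption to tune:

        // Map an integer texel to the UV at its center.
        D3DXVECTOR2 TexelCenterUV( UINT x, UINT y, UINT w, UINT h )
        {
            return D3DXVECTOR2( ( x + 0.5f ) / w, ( y + 0.5f ) / h );
        }

        // Relaxed point-in-triangle test: widening the triangle by roughly half
        // a texel fills border texels on both sides of a seam; a later dilation
        // pass can bleed chart colors outward another texel or two.
        bool TexcoordIsWithinBoundsEps( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                        const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p,
                                        float eps )   // e.g. 0.5f / 128.0f
        {
            D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0;
            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );
            float invDenom = 1.0f / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;
            return fU >= -eps && fV >= -eps && fU + fV < 1.0f + eps;
        }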

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them, either by drawing one on top of another or by passing the 3 textures to a shader and finding a better way to combine them. It's working almost as planned, but I have a question about one detail. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                          SamplerState.LinearClamp, DepthStencilState.None,
                          RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                          SamplerState.LinearClamp, DepthStencilState.None,
                          RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, color, range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and the parameters. Concluding: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?
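    In spirit, yes: if the pixel shader writes every pixel of the target, neither the clear nor the alpha blend is needed, since the shader's output fully defines the layer. A minimal Direct3D 9-style sketch of that idea (helper names hypothetical; in XNA terms this corresponds to BlendState.Opaque plus a quad covering the whole target):

        // Render the layer with blending off: every pixel comes from the
        // shader, so no Clear() and no transparent source texture required.
        pd3dDevice->SetRenderTarget( 0, pLayerSurface );
        pd3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, FALSE );
        pd3dDevice->SetPixelShader( pLightingShader );
        DrawFullscreenQuad( pd3dDevice );   // hypothetical two-triangle quad helper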

  • What tool should I use for drawing 2D OpenGL shapes?

    - by Kenny Winker
    I'm working on a very simple OpenGL ES 2.0 game, and I'm not sure what tool to use to create the vertex data I need. My first thought was Adobe Illustrator, but I can't seem to google up any info on how to convert an .ai file to vertices. I'm only going to be using very simple 2D shapes, so I wonder if I need to use a 3D modelling program? How is this typically done when you are working with 2D non-sprite shapes?

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have the animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework in place for the models, plus the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supply the transformations that take the bind pose to the pose in the animated frame. As a small example:

        Animation {
            {Armature_001_Bone}
            AnimationKey {
                2;      // Position
                121;    // number of frames
                0;3;  0.000000, 0.000000, 0.000000;;,
                1;3;  0.000000, 0.000000, 0.005524;;,
                2;3;  0.000000, 0.000000, 0.022217;;,
                ...
            }
            AnimationKey {
                0;      // Quaternion Rotation
                121;
                0;4;  -0.707107, 0.707107, 0.000000, 0.000000;;,
                1;4;  -0.697332, 0.697332, 0.015710, 0.015710;;,
                2;4;  -0.684805, 0.684805, 0.035442, 0.035442;;,
                ...
            }
            AnimationKey {
                1;      // Scale
                121;
                0;3;  1.000000, 1.000000, 1.000000;;,
                1;3;  1.000000, 1.000000, 1.000000;;,
                2;3;  1.000000, 1.000000, 1.000000;;,
                ...
            }
        }

    So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix from them (call it TransformA) and apply that matrix to the vertices controlled by Armature_001_Bone at their weights. So I'd pass TransformA into my shader and transform the vertex, something like:

        vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x;

    where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is that when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies; still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in OpenGL, so any matrix transposes that might need to be applied should be considered. What data do I need, and how do I combine it, to apply .x animations to models?
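    For reference, a sketch of the usual combination, assuming row-vector D3DX-style math (transpose for OpenGL's column-vector convention): build each bone's local matrix as scale * rotation * translation from the keys, walk it up the parent chain, and multiply by the bone's inverse bind-pose (offset) matrix before handing it to the shader. Struct layout and names here are assumptions:

        // Hypothetical per-bone data; offset is the inverse of the bone's
        // bind-pose world transform, captured once at load time.
        struct Bone {
            int        parent;          // -1 for the root
            D3DXMATRIX offset;          // inverse bind pose
            D3DXMATRIX local;           // from this frame's S/R/T keys
            D3DXMATRIX world, final;
        };

        // local = S * R * T with row vectors
        D3DXMATRIX LocalFromKeys( const D3DXVECTOR3& s, const D3DXQUATERNION& q,
                                  const D3DXVECTOR3& t )
        {
            D3DXMATRIX mS, mR, mT;
            D3DXMatrixScaling( &mS, s.x, s.y, s.z );
            D3DXMatrixRotationQuaternion( &mR, &q );
            D3DXMatrixTranslation( &mT, t.x, t.y, t.z );
            return mS * mR * mT;
        }

        void ComputePalette( std::vector<Bone>& bones )   // parents stored before children
        {
            for( size_t i = 0; i < bones.size(); ++i )
            {
                Bone& b = bones[ i ];
                b.world = ( b.parent >= 0 ) ? b.local * bones[ b.parent ].world
                                            : b.local;
                b.final = b.offset * b.world;             // what the shader receives
            }
        }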

  • Fixed timestep and interpolation question

    - by Eric
    I'm following Glenn Fiedler's excellent Fix Your Timestep! tutorial to step my 2D game. The problem I'm facing is in the interpolation phase at the end. My game has a Tween function which lets me tween properties of my game entities: properties such as scale, shear, position, color, rotation, etc. I'm curious how I'd interpolate these values, since there are a lot of them. My first thought is to keep a previous value for every property (colorPrev, scalePrev, etc.) and interpolate between those. Is this the correct method? To interpolate my characters I use their velocity, renderPosition = position + (velocity * interpolation), but I cannot apply that to color, for example. So what is the desired method to interpolate the various properties of an entity? Is there any rule of thumb to use?
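    Keeping a previous copy of each property is exactly the state-interpolation approach from the article, and it generalizes beyond velocity. A C++-style sketch, where Vec2/Color and the helpers are assumptions standing in for the game's own types:

        // Keep the last two simulated states and blend for rendering.
        struct EntityState {
            Vec2  position;
            Vec2  scale;
            float rotation;
            Color color;
        };
        EntityState previous, current;

        // After each fixed update:
        //   previous = current;
        //   current  = Simulate(current, dt);

        // When rendering, with alpha = accumulator / dt:
        EntityState Lerp(const EntityState& a, const EntityState& b, float t) {
            EntityState out;
            out.position = a.position + (b.position - a.position) * t;
            out.scale    = a.scale    + (b.scale    - a.scale)    * t;
            out.rotation = a.rotation + (b.rotation - a.rotation) * t; // watch angle wrap-around
            out.color    = LerpColor(a.color, b.color, t);             // hypothetical per-channel lerp
            return out;
        }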

  • Direct3d - Code structure

    - by marcg11
    I'm learning DirectX in a master's degree, and they taught us to have a GraphicsLayer class which is the one connecting with the Direct3D library. That way this class is completely independent from the other classes (my game classes), meaning that changing the renderer to OpenGL wouldn't require much effort, only changing the GraphicsLayer. This class has its LoadAssets and Paint methods, but I have a question: they told us to load all the assets inside this class. This means all these calls will be in the LoadAssets method:

        D3DXCreateTextureFromFileEx(g_pD3DDevice, "tiles.png", 0, 0, 1, 0,
                                    D3DFMT_UNKNOWN, D3DPOOL_DEFAULT,
                                    D3DX_FILTER_NONE, D3DX_FILTER_NONE,
                                    NULL, NULL, NULL, &texTiles);
        // And more resources to load
        // ...

    texTiles, as you see, is a LPDIRECT3DTEXTURE9 instance which is declared in graphicLayer.h. So my question is: how do you manage all the resources? Do I have to declare in the .h all my game textures, even if I'm not using them? How would you load only those resources a scene actually uses, and draw them in a well-structured way?
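    One common answer is to keep textures behind a cache inside GraphicsLayer, keyed by filename, so a scene loads only what it asks for and nothing has to be declared per-texture in the header. A minimal sketch, assuming D3D9 and reusing the g_pD3DDevice from above (error handling elided):

        #include <map>
        #include <string>

        class GraphicsLayer {
        public:
            LPDIRECT3DTEXTURE9 GetTexture(const std::string& file) {
                std::map<std::string, LPDIRECT3DTEXTURE9>::iterator it = m_textures.find(file);
                if (it != m_textures.end())
                    return it->second;                       // already loaded
                LPDIRECT3DTEXTURE9 tex = NULL;
                D3DXCreateTextureFromFileExA(g_pD3DDevice, file.c_str(), 0, 0, 1, 0,
                                             D3DFMT_UNKNOWN, D3DPOOL_DEFAULT,
                                             D3DX_FILTER_NONE, D3DX_FILTER_NONE,
                                             0, NULL, NULL, &tex);
                m_textures[file] = tex;                      // cache for later scenes
                return tex;
            }
            void ReleaseAll();                               // release every cached texture
        private:
            std::map<std::string, LPDIRECT3DTEXTURE9> m_textures;
        };

    A scene then just asks for "tiles.png" when it loads; unloading a scene can release whatever the next scene doesn't request again.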

  • Looking for 2D Cross platform suggestions based on requirements specified

    - by MannyG
    I am an intermediate developer with minor experience in enterprise mobile applications for iPhone, Android and BlackBerry, looking to build my first ever mobile game. I did a Google search for some game dev forums and this popped up, so I thought I would try posting here since I've had no luck elsewhere. If you have ever heard of the game for the iPhone and Android platforms entitled Avatar Fight, then you will have an idea of the graphics capabilities I require: basically automated battles, one sprite attacking another doing cool animations, but all in 2D. My buddy and I have two motivations. One is to jump into mobile dev, as my experience is limited, as is his, so we would like some trending knowledge (HTML5 would be nice to learn). The other is to make some money on the side; we don't expect much, but polishing the game and putting our all into it will hopefully reward us a bit. We have looked into the Corona engine; however, a lot of people say it is limited in the graphics department. We are open to learning new languages like Lua, C++, Python, etc. Others we have looked at include PhoneGap, Rhomobile, Unity, and the list goes on. I really have no idea what the pros and cons of these are, but for a basic battle sequence and some mini games we want to choose the right one. Some more things that we will be doing include card games, side-scrolling flying-object-based games, maybe fishing stuff. We want to start small with these mini games and work our way up to the idea we would like to implement in the future. We only want to work in 2D. So with these requirements, please help me choose a platform to work on (cross-platform is what we are ideally leaning towards). Please feel free to throw in some pieces of advice you may have for newbie game developers like myself too. Thank you for reading!

  • How can I test if my rotated rectangle intersects a corner?

    - by Raven Dreamer
    I have a square, tile-based collision map. To check if one of my (square) entities is colliding, I get the vertices of the 4 corners and test those 4 points against my collision map. If none of those points are intersecting, I know I'm good to move to the new position. I'd like to allow entities to rotate. I can still calculate the 4 corners of the square, but once you factor in rotation, those 4 corners alone don't seem to be enough information to determine if the entity is trying to move to a valid state. For example: in the picture below, black is unwalkable terrain and red is the player's hitbox. The left scenario is allowed because the 4 corners of the red square are not in the black terrain. The right scenario would also be (incorrectly) allowed, because the player, cleverly turned at a 45° angle, has its corners in valid spaces, even if it is (quite literally) cutting the corner. How can I detect scenarios similar to the situation on the right?
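    One straightforward fix is to sample along each edge of the rotated rectangle, not just at the corners, with a step smaller than a tile so a solid tile can no longer slip between samples. A sketch, where Vec2, Length and the map interface are assumptions standing in for the game's own types:

        // Returns true if any sampled point along the rectangle's edges lands
        // in a solid tile. Half-tile spacing guarantees no tile is skipped.
        bool RectCollides(const Vec2 corners[4], const CollisionMap& map, float tileSize)
        {
            for (int i = 0; i < 4; ++i) {
                Vec2 a = corners[i];
                Vec2 b = corners[(i + 1) % 4];
                int steps = (int)(Length(b - a) / (tileSize * 0.5f)) + 1;
                for (int s = 0; s <= steps; ++s) {
                    Vec2 p = a + (b - a) * ((float)s / steps);
                    if (!map.IsWalkable(p))      // hypothetical tile lookup
                        return true;
                }
            }
            return false;
        }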

  • Arrive steering behavior

    - by dbostream
    I bought a book called Programming Game AI by Example and I am trying to implement the arrive steering behavior. The problem I am having is that my objects oscillate around the target position; after oscillating less and less for a while, they finally come to a stop at the target position. Does anyone have any idea why this oscillating behavior occurs? Since the examples accompanying the book are written in C++, I had to rewrite the code in C#. Below are the relevant parts of the steering behavior:

        private enum Deceleration
        {
            Fast = 1,
            Normal = 2,
            Slow = 3
        }

        public MovingEntity Entity { get; private set; }
        public Vector2 SteeringForce { get; private set; }
        public Vector2 Target { get; set; }

        public Vector2 Calculate()
        {
            SteeringForce.Zero();
            SteeringForce = SumForces();
            SteeringForce.Truncate(Entity.MaxForce);
            return SteeringForce;
        }

        private Vector2 SumForces()
        {
            Vector2 force = new Vector2();
            if (Activated(BehaviorTypes.Arrive))
            {
                force += Arrive(Target, Deceleration.Slow);
                if (!AccumulateForce(force))
                    return SteeringForce;
            }
            return SteeringForce;
        }

        private Vector2 Arrive(Vector2 target, Deceleration deceleration)
        {
            Vector2 toTarget = target - Entity.Position;
            double distance = toTarget.Length();
            if (distance > 0)
            {
                // because Deceleration is enumerated as an int, this value is
                // required to provide fine tweaking of the deceleration..
                double decelerationTweaker = 0.3;
                double speed = distance / ((double)deceleration * decelerationTweaker);
                speed = Math.Min(speed, Entity.MaxSpeed);
                Vector2 desiredVelocity = toTarget * speed / distance;
                return desiredVelocity - Entity.Velocity;
            }
            return new Vector2();
        }

        private bool AccumulateForce(Vector2 forceToAdd)
        {
            double magnitudeRemaining = Entity.MaxForce - SteeringForce.Length();
            if (magnitudeRemaining <= 0)
                return false;
            double magnitudeToAdd = forceToAdd.Length();
            if (magnitudeToAdd > magnitudeRemaining)
                magnitudeToAdd = magnitudeRemaining;
            SteeringForce += Vector2.NormalizeRet(forceToAdd) * magnitudeToAdd;
            return true;
        }

    This is the update method of my objects:

        public void Update(double deltaTime)
        {
            Vector2 steeringForce = Steering.Calculate();
            Vector2 acceleration = steeringForce / Mass;
            Velocity = Velocity + acceleration * deltaTime;
            Velocity.Truncate(MaxSpeed);
            Position = Position + Velocity * deltaTime;
        }

    If you want to see the problem with your own eyes, you can download a minimal example here. Thanks in advance.
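    One frequent cause of this pattern is the force cap: once Truncate(MaxForce) limits how fast the entity can decelerate, it overshoots the target and has to turn back, which reads as oscillation. A common guard, sketched in C++-style pseudocode (Vec2 and Length assumed), is to snap when a frame's displacement would carry the entity past the target:

        // In the per-frame update, after computing the new velocity:
        Vec2  toTarget     = target - position;
        float dist         = Length(toTarget);
        Vec2  displacement = velocity * dt;
        if (Length(displacement) >= dist) {
            position = target;        // would overshoot this frame: snap
            velocity = Vec2(0, 0);    // and kill the residual velocity
        } else {
            position += displacement;
        }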

  • Friction not working for Vehicle in BulletPhysics

    - by Manmohan Bishnoi
    I am creating a vehicle using the Bullet physics engine (v 2.82). I created a ground ( btBoxShape ), a box, and a vehicle (following the demo), but friction between the ground and the vehicle wheels does not seem to be working: as soon as the vehicle is placed in the 3D world, it starts moving forward. Steering works for the vehicle, but engineForce and brakingForce do not (i.e. I cannot speed up or stop the vehicle). I create the physics world like this:

        void initPhysics()
        {
            broadphase = new btDbvtBroadphase();
            collisionConfiguration = new btDefaultCollisionConfiguration();
            dispatcher = new btCollisionDispatcher(collisionConfiguration);
            solver = new btSequentialImpulseConstraintSolver();
            dynamicsWorld = new btDiscreteDynamicsWorld(dispatcher, broadphase,
                                                        solver, collisionConfiguration);
            dynamicsWorld->setGravity(btVector3(0, -9.81, 0));

            // Debug Drawer
            bulletDebugugger.setDebugMode(btIDebugDraw::DBG_DrawWireframe);
            dynamicsWorld->setDebugDrawer(&bulletDebugugger);

            //groundShape = new btStaticPlaneShape(btVector3(0, 1, 0), 1);
            groundShape = new btBoxShape(btVector3(50, 3, 50));
            fallShape = new btBoxShape(btVector3(1, 1, 1));

            // Orientation and Position of Ground
            groundMotionState = new btDefaultMotionState(
                btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, -3, 0)));
            btRigidBody::btRigidBodyConstructionInfo groundRigidBodyCI(
                0, groundMotionState, groundShape, btVector3(0, 0, 0));
            groundRigidBody = new btRigidBody(groundRigidBodyCI);
            dynamicsWorld->addRigidBody(groundRigidBody);

            ///////////////////////////////////////////////////////////////////
            // Vehicle Setup
            ///////////////////////////////////////////////////////////////////

            vehicleChassisShape = new btBoxShape(btVector3(1.f, 0.5f, 2.f));
            vehicleBody = new btCompoundShape();
            localTrans.setIdentity();
            localTrans.setOrigin(btVector3(0, 1, 0));
            vehicleBody->addChildShape(localTrans, vehicleChassisShape);
            localTrans.setOrigin(btVector3(3, 0.f, 0));
            vehicleMotionState = new btDefaultMotionState(localTrans);
            //vehicleMotionState = new btDefaultMotionState(
            //    btTransform(btQuaternion(0, 0, 0, 1), btVector3(3, 0, 0)));
            btVector3 vehicleInertia(0, 0, 0);
            vehicleBody->calculateLocalInertia(vehicleMass, vehicleInertia);
            btRigidBody::btRigidBodyConstructionInfo vehicleRigidBodyCI(
                vehicleMass, vehicleMotionState, vehicleBody, vehicleInertia);
            vehicleRigidBody = new btRigidBody(vehicleRigidBodyCI);
            dynamicsWorld->addRigidBody(vehicleRigidBody);

            wheelShape = new btCylinderShapeX(btVector3(wheelWidth, wheelRadius, wheelRadius));

            {
                vehicleRayCaster = new btDefaultVehicleRaycaster(dynamicsWorld);
                vehicle = new btRaycastVehicle(vehicleTuning, vehicleRigidBody, vehicleRayCaster);

                // never deactivate vehicle
                vehicleRigidBody->setActivationState(DISABLE_DEACTIVATION);
                dynamicsWorld->addVehicle(vehicle);

                float connectionHeight = 1.2f;
                bool isFrontWheel = true;
                vehicle->setCoordinateSystem(rightIndex, upIndex, forwardIndex); // 0, 1, 2

                // add wheels
                // front left
                btVector3 connectionPointCS0(CUBE_HALF_EXTENT - (0.3 * wheelWidth),
                                             connectionHeight,
                                             2 * CUBE_HALF_EXTENT - wheelRadius);
                vehicle->addWheel(connectionPointCS0, wheelDirectionCS0, wheelAxleCS,
                                  suspensionRestLength, wheelRadius, vehicleTuning, isFrontWheel);

                // front right
                connectionPointCS0 = btVector3(-CUBE_HALF_EXTENT + (0.3 * wheelWidth),
                                               connectionHeight,
                                               2 * CUBE_HALF_EXTENT - wheelRadius);
                vehicle->addWheel(connectionPointCS0, wheelDirectionCS0, wheelAxleCS,
                                  suspensionRestLength, wheelRadius, vehicleTuning, isFrontWheel);

                isFrontWheel = false;

                // rear right
                connectionPointCS0 = btVector3(-CUBE_HALF_EXTENT + (0.3 * wheelWidth),
                                               connectionHeight,
                                               -2 * CUBE_HALF_EXTENT + wheelRadius);
                vehicle->addWheel(connectionPointCS0, wheelDirectionCS0, wheelAxleCS,
                                  suspensionRestLength, wheelRadius, vehicleTuning, isFrontWheel);

                // rear left
                connectionPointCS0 = btVector3(CUBE_HALF_EXTENT - (0.3 * wheelWidth),
                                               connectionHeight,
                                               -2 * CUBE_HALF_EXTENT + wheelRadius);
                vehicle->addWheel(connectionPointCS0, wheelDirectionCS0, wheelAxleCS,
                                  suspensionRestLength, wheelRadius, vehicleTuning, isFrontWheel);

                for (int i = 0; i < vehicle->getNumWheels(); i++)
                {
                    btWheelInfo& wheel = vehicle->getWheelInfo(i);
                    wheel.m_suspensionStiffness = suspensionStiffness;
                    wheel.m_wheelsDampingRelaxation = suspensionDamping;
                    wheel.m_wheelsDampingCompression = suspensionCompression;
                    wheel.m_frictionSlip = wheelFriction;
                    wheel.m_rollInfluence = rollInfluence;
                }
            }

            ///////////////////////////////////////////////////////////////////

            // Orientation and Position of Falling body
            fallMotionState = new btDefaultMotionState(
                btTransform(btQuaternion(0, 0, 0, 1), btVector3(-1, 5, 0)));
            btScalar mass = 1;
            btVector3 fallInertia(0, 0, 0);
            fallShape->calculateLocalInertia(mass, fallInertia);
            btRigidBody::btRigidBodyConstructionInfo fallRigidBodyCI(
                mass, fallMotionState, fallShape, fallInertia);
            fallRigidBody = new btRigidBody(fallRigidBodyCI);
            dynamicsWorld->addRigidBody(fallRigidBody);
        }

    I step the physics world like this:

        // does not work
        vehicle->applyEngineForce(maxEngineForce, WHEEL_REARLEFT);
        vehicle->applyEngineForce(maxEngineForce, WHEEL_REARRIGHT);

        // these also do not work
        vehicle->setBrake(gBreakingForce, WHEEL_REARLEFT);
        vehicle->setBrake(gBreakingForce, WHEEL_REARRIGHT);

        // this works
        vehicle->setSteeringValue(gVehicleSteering, WHEEL_FRONTLEFT);
        vehicle->setSteeringValue(gVehicleSteering, WHEEL_FRONTRIGHT);

        dynamicsWorld->stepSimulation(1 / 60.0f, 10);

    However, if I apply brakingForce to all 4 wheels (i.e. including WHEEL_FRONTLEFT and WHEEL_FRONTRIGHT), then my vehicle stops but keeps sliding/moving forward very, very slowly. How do I fix this?
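    For what it's worth, btRaycastVehicle numbers its wheels in the order addWheel() was called, which in the setup above is front left = 0, front right = 1, rear right = 2, rear left = 3, so the WHEEL_* constants are worth checking against that order; engine and brake forces also need to be re-applied every frame before stepSimulation(). A sketch of one frame, reusing the vehicle and dynamicsWorld above (the idle brake used as rolling friction is an assumption to tune):

        const int FL = 0, FR = 1, RR = 2, RL = 3;   // matches addWheel() order above

        void stepVehicle(float engineForce, float brakeForce, float steering)
        {
            vehicle->applyEngineForce(engineForce, RR);
            vehicle->applyEngineForce(engineForce, RL);

            // a small permanent brake acts as rolling friction so the car
            // doesn't creep forward when no input is applied
            float brake = (brakeForce > 0.0f) ? brakeForce : 2.0f;
            vehicle->setBrake(brake, RR);
            vehicle->setBrake(brake, RL);

            vehicle->setSteeringValue(steering, FL);
            vehicle->setSteeringValue(steering, FR);

            dynamicsWorld->stepSimulation(1 / 60.0f, 10);
        }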

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly, some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination, everything worked perfectly, but since moving to include a hull and domain shader, I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader that does not work correctly, because I think the way I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex      : TEXCOORD0;
            float3 normal   : NORMAL;
            float3 tangent  : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2     : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside   : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position      : POSITION;
            float2 tex           : TEXCOORD0;
            float3 normal        : NORMAL;
            float3 tangent       : TANGENT;
            float3 binormal      : BINORMAL;
            float2 tex2          : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch,
                                                      uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch,
                                       uint pointId : SV_OutputControlPointID,
                                       uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex      = patch[pointId].tex;
            output.tex2     = patch[pointId].tex2;
            output.normal   = patch[pointId].normal;
            output.tangent  = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input,
                                         float3 uvwCoord : SV_DomainLocation,
                                         const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position +
                             uvwCoord.y * patch[1].position +
                             uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex      = patch[0].tex;
            output.tex2     = patch[0].tex2;
            output.normal   = patch[0].normal;
            output.tangent  = patch[0].tangent;
            output.binormal = patch[0].binormal;

            return output;
        }
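    The hull shader looks fine; the telltale is in the domain shader, which copies every attribute from patch[0] instead of blending with uvwCoord the way vertexPosition is blended, so each patch ends up with a single texture coordinate and a single solid colour. A sketch of the needed blend (written as C++-style pseudocode; the same three multiply-adds apply to tex, tex2, normal, tangent and binormal inside the domain shader, with the direction vectors renormalized afterwards):

        // Barycentric interpolation of a per-vertex attribute at uvwCoord,
        // mirroring the vertexPosition computation already in the shader.
        Vec2 InterpolateAttr(const Vec2 a[3], const Vec3& uvw)
        {
            return a[0] * uvw.x + a[1] * uvw.y + a[2] * uvw.z;
        }
        // i.e. output.tex = uvwCoord.x * patch[0].tex
        //                 + uvwCoord.y * patch[1].tex
        //                 + uvwCoord.z * patch[2].tex;   and likewise for the rest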

  • Voice artist for a game for kids

    - by devmiles.com
    We're making a game for kids which should include about 50 spoken phrases. I'm asking for help in finding the right voice artist / studio for this. I've tried searching the web but couldn't find anything that convinced me it would work for us, or for games in general. So I'm looking for references from those of you who have had a successful collaboration with artists or studios. Any help would be appreciated.

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working with a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites. However, from what I can determine, the draw order is really the only thing that will determine which textures (sprites) appear above or below their neighbors. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate on the vertex array being drawn. So: I have a texture atlas to avoid binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once, which kinda defeats the purpose. Does anyone have any clues as to what the best approach is here? Currently I'm running under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
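    That fixed-depth plan maps directly onto a two-key sort: order sprites by discrete layer first and by atlas within a layer, so draw order is preserved across layers while binds are minimized inside each one. A C++-style sketch (BindAtlas/QueueQuad/Flush are hypothetical batcher helpers):

        #include <algorithm>
        #include <vector>

        struct Sprite { int layer; int atlas; /* quad data */ };

        void DrawScene(std::vector<Sprite>& sprites)
        {
            std::sort(sprites.begin(), sprites.end(),
                [](const Sprite& a, const Sprite& b) {
                    return a.layer != b.layer ? a.layer < b.layer
                                              : a.atlas < b.atlas;
                });
            int bound = -1;
            for (const Sprite& s : sprites) {
                if (s.atlas != bound) { BindAtlas(s.atlas); bound = s.atlas; }
                QueueQuad(s);
            }
            Flush();
        }

    Worst case this binds each atlas once per layer, which is still far fewer binds than one per sprite.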

  • Importing a windows project into android using cocos2d-x

    - by Ef Es
    What I am trying to do today is to import a full project to Android, but I have not seen any tutorials for that. My approach was to create a new Android project, copy all the classes and resources into the folders and call ./build_native.sh, but I get an error because most of the files are not being included in the project. I tried opening the Android.mk and I can see why: "LOCAL_SRC_FILES := AppDelegate.cpp \ HelloWorldScene.cpp" are the only files linked. Should I manually modify the make file, or can it be automated in some way I don't know about? Thank you. UPDATE: I manually added all files and headers to the make file, and now I get errors linking the Box2D and CocosDenshion libraries.
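    One way this is commonly automated, sketched under the assumption of the stock cocos2d-x 2.x project layout (paths and module names should be checked against your cocos2d-x version): build LOCAL_SRC_FILES with a wildcard, and pull in Box2D/CocosDenshion through their static-library modules instead of linking their sources by hand:

        # pick up every .cpp in Classes automatically (assumed relative path)
        FILE_LIST := $(wildcard $(LOCAL_PATH)/../../Classes/*.cpp)
        LOCAL_SRC_FILES := main.cpp \
                           $(FILE_LIST:$(LOCAL_PATH)/%=%)

        # link the cocos2d-x modules rather than raw sources
        LOCAL_WHOLE_STATIC_LIBRARIES := cocos2dx_static \
                                        cocosdenshion_static \
                                        box2d_static

        $(call import-module,cocos2dx)
        $(call import-module,CocosDenshion/android)
        $(call import-module,external/Box2D)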

  • Wikipedia A* pathfinding algorithm takes a lot of time

    - by Vee
    I've successfully implemented A* pathfinding in C#, but it is very slow and I don't understand why. I even tried not sorting the openNodes list, but it's still the same. The map is 80x80, and there are 10-11 nodes. I took the pseudocode from the Wikipedia article, and this is my implementation:

        public static List<PGNode> Pathfind(PGMap mMap, PGNode mStart, PGNode mEnd)
        {
            mMap.ClearNodes();
            mMap.GetTile(mStart.X, mStart.Y).Value = 0;
            mMap.GetTile(mEnd.X, mEnd.Y).Value = 0;

            List<PGNode> openNodes = new List<PGNode>();
            List<PGNode> closedNodes = new List<PGNode>();
            List<PGNode> solutionNodes = new List<PGNode>();

            mStart.G = 0;
            mStart.H = GetManhattanHeuristic(mStart, mEnd);

            solutionNodes.Add(mStart);
            solutionNodes.Add(mEnd);

            openNodes.Add(mStart); // 1) Add the starting square (or node) to the open list.
            while (openNodes.Count > 0) // 2) Repeat the following:
            {
                openNodes.Sort((p1, p2) => p1.F.CompareTo(p2.F));
                PGNode current = openNodes[0]; // a) We refer to this as the current square.
                if (current == mEnd)
                {
                    while (current != null)
                    {
                        solutionNodes.Add(current);
                        current = current.Parent;
                    }
                    return solutionNodes;
                }

                openNodes.Remove(current);
                closedNodes.Add(current); // b) Switch it to the closed list.

                List<PGNode> neighborNodes = current.GetNeighborNodes();
                double cost = 0;
                bool isCostBetter = false;

                for (int i = 0; i < neighborNodes.Count; i++)
                {
                    PGNode neighbor = neighborNodes[i];
                    cost = current.G + 10;
                    isCostBetter = false;

                    if (neighbor.Passable == false || closedNodes.Contains(neighbor))
                        continue; // If it is not walkable or if it is on the closed list, ignore it.

                    if (openNodes.Contains(neighbor) == false)
                    {
                        openNodes.Add(neighbor); // If it isn't on the open list, add it to the open list.
                        isCostBetter = true;
                    }
                    else if (cost < neighbor.G)
                    {
                        isCostBetter = true;
                    }

                    if (isCostBetter)
                    {
                        neighbor.Parent = current; // Make the current square the parent of this square.
                        neighbor.G = cost;
                        neighbor.H = GetManhattanHeuristic(current, neighbor);
                    }
                }
            }
            return null;
        }

    Here's the heuristic I'm using:

        private static double GetManhattanHeuristic(PGNode mStart, PGNode mEnd)
        {
            return Math.Abs(mStart.X - mEnd.X) + Math.Abs(mStart.Y - mEnd.Y);
        }

    What am I doing wrong? I've spent an entire day looking at the same code.
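    Whatever else is going on, two costs stand out in that loop: the open list is re-sorted on every iteration, and List.Contains on both lists is a linear scan, so each expansion costs roughly O(n log n) where a binary heap plus hash-based sets makes it O(log n). A C++-style sketch of that structure (Node is a stand-in for PGNode):

        #include <queue>
        #include <unordered_set>
        #include <vector>

        struct Node { float g, h; /* parent, coords, ... */ };

        struct QueueEntry { float f; Node* node; };
        struct Cmp {
            bool operator()(const QueueEntry& a, const QueueEntry& b) const {
                return a.f > b.f;   // min-heap on f = g + h
            }
        };

        std::priority_queue<QueueEntry, std::vector<QueueEntry>, Cmp> open;
        std::unordered_set<Node*> openSet;    // O(1) "is it on the open list?"
        std::unordered_set<Node*> closed;     // O(1) "is it on the closed list?"

        // push:  open.push({node->g + node->h, node}); openSet.insert(node);
        // pop:   QueueEntry e = open.top(); open.pop();
        //        if (closed.count(e.node)) continue;   // skip stale duplicates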

  • Do Apple and Google ask for a share if custom payment is done in a free app?

    - by user1590354
    I have a multiplatform game (web/iOS/Android) in the making. In the free version the core game is still fully playable but people who choose to pay will get more social features (and no ads, of course). I was thinking that rather than having a free and a paid version for all the platforms I may release the apps just for free and if the users want more, they have to register and pay a one-time fee (through a payment gateway or PayPal). The extra content would then be available in all the clients they have access to. Theoretically, this means a better value for the players and less maintenance and headache for me (obviously I have to handle all the payment troubles myself). Does it fit into the business model of Apple/Google? Or will they still claim their share of the registration fee?

  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having stems from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on, for Android using Java. At this stage, when the user touches the screen, if they touched a point at which there is an enemy, the enemy's health is decreased and it becomes immobile (for the current implementation at least). The issue I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch-down event, and it always seems to collide with the enemy object. Yet the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemy's bounding box. I realise this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement. First off, the MainGamePanel class registers the touch event and informs the levelManager class (which is what I set up to monitor/handle enemies):

        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                levelManager.handleActionDown((int)event.getX(), (int)event.getY());
                targetX = event.getX();
                targetY = event.getY();
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                // the gestures
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                // touch was released
            }
            return true;
        }

    From there, in the levelManager class the touch event is passed on to all of the enemies within a list array:

        public static void handleActionDown(int eventX, int eventY) {
            hit = false;
            for (enemy1 en : enemy1array) {
                en.handleActionDown(eventX, eventY);
            }
        }

    The rest of the collision code is handled within the enemy's handleActionDown function:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX > this.x - enemy1bitmap.getWidth() &&
                eventX < this.x + enemy1bitmap.getWidth() &&
                eventY > this.y - enemy1bitmap.getHeight() &&
                eventY < this.x + enemy1bitmap.getHeight()) {
                takeDamage(1);
                levelmanager.setHit();
            }
        }

    I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this, although I've noticed no improvement. At this stage, the game's detection of whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite being set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris
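    One thing worth checking in the last comparison above: it tests eventY against this.x + getHeight() rather than this.y + getHeight(), i.e. it mixes the x coordinate into the y test, which by itself would explain intermittent misses and false hits. A centered hit test with the half-extents the asker mentions, sketched in C++-style pseudocode:

        // true when (px, py) lies inside a box of size w x h centered at (cx, cy)
        bool hitTest(int px, int py, int cx, int cy, int w, int h)
        {
            return px > cx - w / 2 && px < cx + w / 2 &&
                   py > cy - h / 2 && py < cy + h / 2;
        }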

  • How do I get the source code from a Google Code game project?

    - by BluFire
    I'm trying to get the Hedgewars source code. When I went to the Downloads tab, it doesn't specify which download is the actual game. I tried downloading it using SVN Checkout in Tortoise, but it seems like that doesn't work on the browse section of Source. (Hgproject_filesAndroid_buildSDL-android-project) I then proceeded to the wiki, but I got stuck at step two because I don't know anything about Mercurial. Some other things I don't know from the wiki are "FreePascal", the "Android NDK" and "Tar" files. They are new to me, so I am really confused. So my question is: how can I download the Hedgewars for Android source code without having to browse the source code inside the Source tab?

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap font; however, I am not sure how to generate the UVs and handle glyph placement. I had a thread like this once before, but it was too loosely worded as to what I was looking to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (which is all multiples of 64). However, I am not sure how to properly use this information and bitmap to render. This falls into two different categories: UV generation and kerning. Now, I believe I know how to do both of these; however, when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". While ignoring the texture quality (I will be increasing the texture size to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png

    Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, then I am getting the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as AngelCode's bitmap font tool or FreeType. I also want to stick to the way the bitmap font sheet is generated so I can do extra effects in the future.

    Rendering algorithm:

        for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
        {
            ObtainCharInformation(fontName, m_Text[c]);
            letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;

            if(c != 0)
            {
                DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
                U8 * glyphImg = new U8[BytesReq];
                DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);
                for (int k = 0; k < nKerningPairs; k++)
                {
                    if ((kerningpairs[k].wFirst == previousCharIndex) &&
                        (kerningpairs[k].wSecond == m_Text[c]))
                    {
                        letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                        break;
                    }
                }
                letterBottomLeftX -= (gm.gmCellIncX * scale);
            }

            SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
            SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
            SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
            SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
            zFight -= 0.001f;

            float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
            float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
            float TopLeftX = BottomLeftX;
            float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
            float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
            float TopRightY = TopLeftY;
            float BottomRightX = TopRightX;
            float BottomRightY = BottomLeftY;

            SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
            SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
            SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
            SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);

            /// index setting

            letterBottomLeftX += letterWidth;
            previousCharIndex = m_Text[c];
        }
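    For comparison, the usual pen-advance model keeps placement and advance separate: each glyph quad starts at the current pen position plus its A spacing and covers only the inked B part, while the pen advances by A + B + C plus any kerning with the next character. A sketch, with the quad/UV helpers and cached metric lookups as assumptions:

        float penX = 0.0f;
        for (size_t i = 0; i < text.size(); ++i)
        {
            const ABC& abc = abcFor(text[i]);           // cached GetABCWidth result
            float left  = penX + abc.abcA * scale;      // quad begins after the A gap
            float right = left + abc.abcB * scale;      // quad spans only the inked B part
            EmitQuad(left, right, uvFor(text[i]));      // hypothetical quad/UV helpers

            penX += (abc.abcA + abc.abcB + abc.abcC) * scale;
            if (i + 1 < text.size())
                penX += kerningFor(text[i], text[i + 1]) * scale;  // from GetKerningPairs
        }

    Note this differs from the loop above in two ways: the kerning adjustment moves the pen rather than the quad's left edge, and the quad width is B alone rather than A + B + C.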

  • Is there a way to legally create a game mod?

    - by Rodrigo Guedes
    Some questions about it: If I create a funny version of a copyrighted game and sell it (crediting the original developers), would it be considered a parody, or would I need to pay royalties? If I create a game mod for my own personal use, would it be legal? What if I gave it away for free to a friend? Is there a general rule about it, or does it depend on the developer's will? P.S.: I'm not talking about cloning games like this question. It's all about a game clearly based on another, something like "GTA Gotham City" ;) EDIT: This picture that I found on the internet illustrates what I'm talking about. Just in case I was not clear: I have never created a mod. I was just wondering if it would be legally possible before trying to do it. I'm not defending piracy. I pay dearly for my games (you guys have no idea how expensive games are in Brazil due to taxes). Once more I say that the question is not about cloning. Cloning is copying something and trying to make your version look like a brand-new product. Mods are intended to make reference to one or more of their sources. I'm not sure whether it can be done legally (if I knew, I wouldn't be asking), but I'm sure this question is not a duplicate. Even so, I trust the moderators, and if they close my question I will not be offended - at least I had an opportunity to explain myself and got 1 good answer (by the time I write this; maybe some more will be given later).

  • How to design good & continuous tiles

    - by Mikalichov
    I have trouble designing tiles so that when assembled they don't look like tiles, but look like one homogeneous thing. For example, in the image below, even though the main part of the grass is only one tile, you don't "see" the grid; you know where it is if you look a bit carefully, but it is not obvious. Whereas when I design tiles, you can only see "oh, jeez, 64 times the same tile", a bit like in that image (taken from a gamedev.stackexchange question, sorry; no criticism of the game, but it proves my point, and actually has better tile design than what I manage). I think the main problem is that I design the tiles to be independent: there is no junction between two tiles when they are put close to each other. I think having the tiles more "continuous" would have a smoother effect, but I can't manage to do it; it seems overly complex to me. It is probably simpler than I think once you know how to do it, but I couldn't find a tutorial on that specific point. Is there a known method to design continuous / homogeneous tiles? (My terminology might be totally wrong; don't hesitate to correct me.)

  • Beginning Android Games, 2nd edition engine

    - by Benjamin Stephen
    I recently bought the book from Apress and have worked my way through it. Unfortunately, it seems to deal only with side-scrolling games, not Zelda-like top-down games. I was wondering if anyone out there can tell me whether it's possible to use their engine to create such a game? The book doesn't go into how to build a top-down tile map. Using the engine in their book, how can I easily make a tile map that has walls and things like that? Any help would be appreciated. Thanks in advance.

  • Creating a newspaper that affects the game's economy?

    - by zardon
    I am writing a game in Objective-C/cocos2d where a newspaper is a central part of what controls, or rather affects, the game's world economy as well as what a city might do (such as increase X, reduce Y). The newspaper is a bit like a "Chance" card in Monopoly: it has an effect on something. My question is: what is the best way to write a newspaper that has both a random and a specific effect within the game? Would the best strategy be to write out all the things a newspaper can affect as a PLIST of headlines (with placeholders)? I think Tiny Tower uses a PLIST of events and randomly picks one, but I'm not sure how it actually parses them, because certain events do different things. But then how do I parse all the scenarios that a newspaper can deliver? A big switch statement seems very long and complicated. I am wondering if there is a simpler way to handle this kind of thing. Related to this is that there might be no news on a given day, and I'm not sure what the newspaper should display then; should it just show the last headline? So, in summary: 1) A newspaper generates a headline that affects different things, such as the world economy, prices, or how a city reacts. 2) I need the newspaper to generate headlines (although there may be days with no headlines at all), but I am not sure how to parse them without using a big switch statement. Thanks in advance.
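    One way to avoid the switch entirely is to make the headline records fully data-driven: each entry names a target stat and a magnitude, and one generic applier handles every scenario. A C++-style sketch of the shape of the data (in Objective-C the record maps naturally onto a dictionary loaded from the PLIST; names here are assumptions):

        #include <map>
        #include <string>
        #include <vector>

        struct HeadlineEffect { std::string stat; float delta; };  // e.g. {"oilPrice", +0.15f}
        struct Headline {
            std::string text;                      // headline with placeholders filled in
            std::vector<HeadlineEffect> effects;   // zero or more effects per headline
        };

        void ApplyHeadline(const Headline& h, std::map<std::string, float>& worldStats)
        {
            for (const HeadlineEffect& e : h.effects)
                worldStats[e.stat] += e.delta;     // stats keyed by name; no switch needed
        }

    New headline types then only require new data entries, and a no-news day is just a day on which nothing is drawn from the list.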

  • Multiple volumetric lights

    - by notabene
    I recently read this GPU GEMS 3 article Volumetric Light Scattering as a Post-Process. I like the idea to add volumetric light property to realtime render i'm working on. Question is will it work for multiple lights? Our renderer uses one render pass per light and uses additive blending to sum incoming light. I'm mostly convinced that it have to work nice. Do you agree? Maybe there can be problem where light rays crosses each other.
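    Since the scattering effect is computed per light, it composes the same way the existing lighting passes do: one radial-blur pass per light, summed additively into a single scattering buffer that is composited over the scene. A sketch with hypothetical helper names:

        SetRenderTarget(scatterBuffer);
        Clear(scatterBuffer, BLACK);
        EnableAdditiveBlend();
        for (const Light& light : visibleLights) {
            Vec2 lightScreenPos = ProjectToScreen(light.position, viewProj);
            SetShaderParam(scatterShader, "ScreenLightPos", lightScreenPos);
            DrawFullscreenQuad(scatterShader);   // radial samples toward lightScreenPos
        }

    Crossing rays simply sum, which is consistent with how overlapping scattering contributions behave physically.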
