Search Results

Search found 35343 results on 1414 pages for 'development tools'.

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and feed it into the second pass. In my example I want to draw to the R channel, then to the G channel, and produce a simple Venn diagram in the shader, but I need to detect the overlap. I can currently detect one circle or the other, but not the overlap. A red and a green circle overlap, and I want to put a dynamic texture map in the overlap region; currently I can put it in either one, but not the intersection. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0     : TEXCOORD0;
            float2 tex1     : TEXCOORD1;
            float4 color    : COLOR;
        };

        //////////////////
        // Pixel Shader //
        //////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            // Sample the pixel color from the texture using the sampler
            // at this texture coordinate location.
            float4 textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            float4 textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
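
    A ping-pong setup generally needs two offscreen render targets that swap roles between passes, rather than a single draw to the back buffer. Below is a minimal, hedged sketch of the D3D11 calling side; the resource names (m_rtvA, m_rtvB, m_srvA, the per-pass shaders) are hypothetical, not part of the code above, and target/view creation is omitted:

        // Pass 1: draw the red circle into offscreen target A.
        m_d3dContext->OMSetRenderTargets(1, m_rtvA.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShader(m_pixelShaderPass1.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

        // Unbind A as a render target so it can be sampled as a texture.
        ID3D11RenderTargetView* nullRTV = nullptr;
        m_d3dContext->OMSetRenderTargets(1, &nullRTV, nullptr);

        // Pass 2: sample A while drawing the green circle into target B;
        // the shader sees red already written where the circles overlap.
        m_d3dContext->OMSetRenderTargets(1, m_rtvB.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, m_srvA.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShaderPass2.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

        // Unbind the SRV before A is rendered to again next frame.
        ID3D11ShaderResourceView* nullSRV = nullptr;
        m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);

    In the second pass the overlap test then becomes a texture read (is R already 1 at this pixel?) instead of a per-vertex color check, which is what makes the intersection detectable.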

  • Box2D physics editor for complex bodies

    - by Paul Manta
    Is there any editor out there that would allow me to define complex entities, with joints connecting their multiple bodies, instead of regular single-body entities? For example, an editor that would allow me to 'define' a car as having a main body with two circles as wheels, connected through joints. Clarification: I realize I haven't been clear enough about what I need. I'd like to make my engine data-driven, so all entities (and therefore their Box2D bodies) should be defined externally, not in code. I'm looking for a program like Code 'N' Web's PhysicsEditor, except that one only handles single-body entities, with no joints or anything like that. Like PhysicsEditor, the program should be configurable so that I can save the data in whatever format I want. Does anyone know of any such software?

  • Recommended method for making custom maps for a 2D game?

    - by Qasim
    I am planning on making a 2D game, but unlike my last personal projects I want this one to have enhanced graphics, with custom-designed levels. My previous 2D platformers were tile-based, and I made a map editor for creating levels. Now, however, I am wondering about the best way to implement custom-designed maps: some grass a little higher than the rest, flowers here and there, interesting drawings and structures along the way, and so on, instead of just the same old tiles over and over again. I just can't grasp how to implement it. I have seen it done in other games and am interested in how they accomplish it, but I can't get my hands on any source code. :(

  • Server-side Input

    - by Thomas
    Currently in my game, the client is nothing but a renderer. When the input state changes, the client sends a packet to the server and moves the player as if it were processing the input, but the server has the final say on the position. This generally works really well, except for one big problem: falling off edges. Basically, if a player is walking towards an edge, say a cliff, and stops right before going off it, sometimes a second later he'll be teleported off the edge anyway. This is because the "I stopped pressing W" packet arrives after the server has already processed further movement. Here's a lag diagram to help you understand what I mean: http://i.imgur.com/Prr8K.png I could just send a "W pressed" packet each frame for the server to process, but that seems a bandwidth-costly solution. Any help is appreciated!
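
    One bandwidth-friendly pattern (sketched below with hypothetical names; RewindTo and SimulateForward stand in for whatever history/simulation code the engine has) is to keep sending input only on state changes, but stamp each change with the client tick at which it happened, so the server can apply the release retroactively instead of at the later tick on which the packet arrives:

        #include <cstdint>

        // Sent only when the key state changes, not every frame.
        struct InputPacket {
            uint32_t sequence;    // detects loss and reordering
            uint32_t clientTick;  // simulation tick when the state changed
            uint8_t  keysDown;    // bitmask: W/A/S/D, ...
        };

        struct Player {
            uint32_t lastSequence = 0;
            uint8_t  keysDown = 0;
            // position, velocity, recent state history, ...
        };

        void RewindTo(Player&, uint32_t tick);           // assumed helpers
        void SimulateForward(Player&, uint32_t ticks);

        // Server side: apply the change at the tick it really occurred,
        // then re-simulate up to the present.
        void OnInputPacket(Player& p, const InputPacket& in, uint32_t serverTick) {
            if (in.sequence <= p.lastSequence) return;   // stale or duplicate
            p.lastSequence = in.sequence;
            RewindTo(p, in.clientTick);
            p.keysDown = in.keysDown;
            SimulateForward(p, serverTick - in.clientTick);
        }

    This keeps traffic at one packet per key change while removing the "teleported off the edge" artifact, at the cost of the server keeping a short history of recent player states.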

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    Through help on this site I was able to draw the triangles of an unrotated, unscaled, untransformed mesh created in Blender, exported to OBJ, imported through Assimp and rendered in XNA Graphics. However, after applying rotation on a single axis (Z) in Blender and adding materials (I wanted to test loading of materials through Assimp), the same mesh appears incomplete. Is something wrong with my view matrix, or is it something else?

    This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html

    Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html

    Camera, view and projection are defined as follows:

        cameraPos = new Vector3(0, 5, 9);
        viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0));
        projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f);

    Rendering is done through this code:

        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);

        effect = new BasicEffect(GraphicsDevice);
        effect.VertexColorEnabled = true;
        effect.View = viewMatrix;
        effect.Projection = projectionMatrix;
        effect.World = Matrix.Identity;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(vertexBuffer);
            device.Indices = indexBuffer;
            device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList,
                0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount);
        }
        base.Draw(gameTime);

  • Board Game Design in Cocos2d

    - by object2.0
    Hi folks, I am going to start a chess-like board game, and I have reviewed a number of available options. One is http://www.mapeditor.org/, which you can use to create grid-based games. Another option is GeekGameBoard for iPhone, available at http://mooseyard.lighthouseapp.com/projects/23201-geekgameboard. Now I want your expert opinion: would it be better to make the game in cocos2d using the first option or the second? Both look promising to me and give good control over board design. PS: Sorry for the duplicate; I found out about http://gamedev.stackexchange.com/ only after posting on Stack Exchange, so I am posting it here again as I feel it's the more relevant board.

  • Simplest way to use Steam Leaderboards from C# [on hold]

    - by Miau
    We are about to integrate Steamworks leaderboards and achievements into our game. I see there are many open and closed source libraries for using Steamworks from C#. Rolling our own wrapper can be done, but if the other libraries are reliable it would be better to use one, and perhaps contribute back if we see any obvious gaps. Have you used any, and if so, what was your experience with the different libraries, specifically for leaderboards and achievements? The ones I found are:

    - SteamWorks.net
    - Steam4Net
    - Ludosity (can apparently be used outside of Unity)

  • Engine Rendering pipeline: Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the application layer in Objective-C and a separate, self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer, and it is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do:

        void RendererGLES20::render(Model * model)
        {
            // Set a bunch of uniforms
            glUniformMatrix4fv(.......);

            // Enable specific attributes, can be many
            glEnableVertexAttribArray(......);

            // Set a bunch of vertex attribute pointers:
            glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords);

            // Now actually draw the geometry
            glDrawArrays(GL_TRIANGLES, 0, m->vertexCount);

            // After drawing, disable any vertex attributes:
            glDisableVertexAttribArray(.......);
        }

    As you can see, this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attributes, etc. In other words, I would have to change the RendererGLES20 source code just to incorporate the new shader.

    Is there any way to make the shader object totally generic? What if I just want to change the shader object and not worry about recompiling the game source? Any way to make the renderer agnostic of uniforms and attributes? Even though we need to pass data to uniforms, what is the best place to do that? The Model class? Is the model class aware of shader-specific uniforms and attributes?

    The following shows the Actor class:

        class Actor : public ISceneNode
        {
            ModelController * model;
            AIController * AI;
        };

    Model controller class:

        class ModelController
        {
            class IShader * shader;
            int textureId;
            vec4 tint;
            float alpha;
            struct Vertex * vertexArray;
        };

    The Shader class just contains the shader object, compiling and linking sub-routines, etc. In the GameLogic class I actually render the object:

        void GameLogic::update(float dt)
        {
            IRenderer * renderer = g_application->GetRenderer();
            Actor * a = GetActor(id);
            renderer->render(a->model);
        }

    Please note that even though Actor extends ISceneNode, I haven't started implementing the scene graph yet. I will do that as soon as I resolve this issue. Any ideas how to improve this? Related design patterns, etc.? Thank you for reading the question.
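
    One widely used way to decouple the renderer from individual shaders (a sketch under assumptions, not the original engine's API) is to make shader inputs data-driven: a material stores name/value pairs, and the renderer resolves locations by name with glGetUniformLocation instead of hard-coding slots, so adding a ripple shader means adding table entries, not editing RendererGLES20:

        #include <string>
        #include <unordered_map>
        #include <OpenGLES/ES2/gl.h>   // iOS; use <GLES2/gl2.h> elsewhere

        // Values a material can carry; extend with vec2/vec3/vec4 as needed.
        struct UniformValue {
            enum Type { Float, Int, Mat4 } type;
            float data[16];
        };

        struct Material {
            GLuint program;  // compiled/linked by the IShader wrapper
            std::unordered_map<std::string, UniformValue> uniforms;
        };

        // The renderer stays shader-agnostic: it walks the table, binds by
        // name, and skips values the current shader doesn't declare.
        void ApplyMaterial(const Material& m) {
            glUseProgram(m.program);
            for (const auto& entry : m.uniforms) {
                GLint loc = glGetUniformLocation(m.program, entry.first.c_str());
                if (loc < 0) continue;
                const UniformValue& v = entry.second;
                switch (v.type) {
                    case UniformValue::Float: glUniform1f(loc, v.data[0]); break;
                    case UniformValue::Int:   glUniform1i(loc, (GLint)v.data[0]); break;
                    case UniformValue::Mat4:  glUniformMatrix4fv(loc, 1, GL_FALSE, v.data); break;
                }
            }
        }

    Attributes can be handled the same way (glGetAttribLocation plus a small vertex-format description stored with the model), and looked-up locations can be cached per program to avoid the per-draw string queries.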

  • How does one escape the GPL?

    - by tehtros
    DISCLAIMER: I don't pretend to know anything about licensing. In fact, everything I say below may be completely false! Backstory: Recently I've been looking for a decent game engine, and I think I've found one that I really like, Cafu Engine. However, they have a dual licensing plan, where everything you make with the engine is forced under the GPL unless you pay for a commercial license. I'm not saying that it's a bad engine; they even say that they are very relaxed about the licensing fees. However, the fact that it even involves the GPL scares me. So my question is basically: how does one escape the GPL? Here's an example: the id Tech engine, also known as the Quake engine or the Doom engine, was the base for the popular Source engine. However, the id Tech engine has been released under the GPL, and the Source engine is proprietary. Did Valve get a different license, or did they do something to escape the GPL? Is there a way to escape the GPL? Or, if you use GPL'd source code as a base for another project, are you forced to use the GPL and make your source code available to the world? Could some random person take the id Tech engine, modify it past the point of recognition, then use it as a proprietary engine for commercial products? Or are they required to make it open source? One last thing: I generally have no problem whatsoever with open source, but I feel that open source has its place, and that is not in the business world.

  • Moving camera, or camera with discrete "screens"?

    - by Jacob Millward
    I'm making a game with a friend, but we're having trouble deciding on a camera style. The basic idea for the game is a randomly generated 2-dimensional world with settlements in it. These settlements would have access to different resources, and it would be the job of the player to create bridges, ladders and other links between the villages so they can trade. The player would advance personally by getting better gear, fighting monsters and looking for materials in the world, in order to craft and trade them at the settlements. My friend wants to use an old-style camera, where the world is split into a discrete number of screens that the player moves between, similar to early Zelda dungeons or Knytt Stories. I'm of the opposite opinion: I want a standard camera that follows the player around, as I feel the split-screen style camera limits the game. Can anyone argue the case either way? We've hit a massive roadblock here and can't seem to get past it.

  • Need ideas for an algorithm to draw irregular blotchy shapes

    - by Yttermayn
    I'm looking to draw irregular shapes on an x,y grid, and I'd like to come up with a simple, fast method if possible. My only idea so far is to draw a bunch of circles of random sizes very near each other, but at a random distance apart from a more or less central coordinate, then fill in any blank spaces. I realize this is a clunky, inelegant method; hopefully it gives you a rough idea of the kind of rounded, random blotchy shapes I'm shooting for. Please suggest methods to accomplish this. I'm not so much interested in code; I can noodle that part out myself. Thanks!
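
    For concreteness only (the poster asked for ideas rather than code), the circle-union approach can be sketched in a few lines; stamping a handful of jittered discs into a grid already produces rounded, irregular blobs, and the tuning constants here are arbitrary:

        #include <cstdlib>
        #include <vector>

        // Union of random discs near (cx, cy) on a w x h boolean grid.
        std::vector<std::vector<bool>> MakeBlob(int w, int h, int cx, int cy) {
            std::vector<std::vector<bool>> grid(h, std::vector<bool>(w, false));
            for (int i = 0; i < 8; ++i) {                 // 8 discs per blob
                int r  = 4 + std::rand() % 6;             // random radius
                int ox = cx + std::rand() % 11 - 5;       // jittered center
                int oy = cy + std::rand() % 11 - 5;
                for (int y = 0; y < h; ++y)
                    for (int x = 0; x < w; ++x)
                        if ((x - ox) * (x - ox) + (y - oy) * (y - oy) <= r * r)
                            grid[y][x] = true;
            }
            return grid;
        }

    Because the discs overlap, the "fill in any blank spaces" step largely disappears; for smoother, more organic outlines the same idea extends to metaballs (threshold the summed falloff of the discs instead of taking a hard union).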

  • Creating a newspaper that affects the game's economy?

    - by zardon
    I am writing a game in Objective-C/cocos2d in which a newspaper is a central part of what controls, or rather affects, the game's world economy, as well as what a city might do (such as increase X or reduce Y). The newspaper is a bit like a "Chance card" in Monopoly: it has an effect on something. My question is: what is the best way to write a newspaper that has both a random and a specific effect within the game? Would the best strategy be to write out all the things a newspaper can affect in a PLIST of headlines (with placeholders)? I think Tiny Tower uses a PLIST of events and randomly picks one, but I'm not sure how it actually parses it, because different events do different things. But then how do I parse all the scenarios that a newspaper can deliver? A big switch statement seems very long and complicated, and I am wondering if there is a simpler way to handle this kind of thing. Related to this: there might be no news on a given day, and I'm not sure what the newspaper should display then; should it just show the last headline? So, in summary: 1) A newspaper generates a headline that affects different things, such as the world economy, prices, or how a city reacts. 2) I need the newspaper to generate headlines (although there may be days with no headlines at all), but I am not sure how to parse them without a big switch statement. Thanks in advance.
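
    One way to avoid the big switch statement (a sketch with hypothetical names, assuming each PLIST entry carries an effect key and a magnitude) is a table of effect handlers: the headline data stays declarative, and adding a new effect means adding one table entry:

        #include <functional>
        #include <string>
        #include <unordered_map>

        struct World { float economy = 1.0f; float prices = 1.0f; };

        // Each headline names an effect and a magnitude; "no news" days
        // are simply days on which no Headline is generated.
        struct Headline { std::string text, effect; float magnitude; };

        using Effect = std::function<void(World&, float)>;

        std::unordered_map<std::string, Effect> effects = {
            { "economy", [](World& w, float m) { w.economy *= m; } },
            { "prices",  [](World& w, float m) { w.prices  *= m; } },
        };

        void ApplyHeadline(World& w, const Headline& h) {
            auto it = effects.find(h.effect);
            if (it != effects.end()) it->second(w, h.magnitude);  // unknown keys ignored
        }

    The same table works in Objective-C as an NSDictionary of blocks keyed by the effect string read from the PLIST.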

  • How to draw a global day/night curve

    - by Lumis
    I see many applications that have a world-clock map, and I would like to make my own to enhance some of my mobile apps. I wonder if anybody has any knowledge of where to start, and how to draw a curved shadow representing the dawn and the sunset on the globe. See the example: http://aa.usno.navy.mil/imagery/earth/map?year=2012&month=6&day=19&hour=14&minute=47 I think this curve moves up and down over the year, creating an arctic day/night and so on. Perhaps there is some acceptable approximation formula that avoids loading data for each hour and each global parallel and meridian...
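
    There is a reasonably simple closed-form approximation: the terminator is the set of points where the sun's altitude is zero, which for solar declination delta and hour angle h (longitude measured from the subsolar point) gives latitude = atan(-cos(h) / tan(delta)). A hedged sketch, using a standard approximation for the declination:

        #include <cmath>

        const double kPi = 3.14159265358979;

        // Approximate solar declination (radians) for a day of the year.
        double Declination(int dayOfYear) {
            return (-23.44 * kPi / 180.0) * std::cos(2.0 * kPi * (dayOfYear + 10) / 365.0);
        }

        // Latitude (radians) of the day/night boundary at longitude 'lon',
        // where 'subsolarLon' is the longitude with the sun overhead
        // (it moves 15 degrees west per hour of UTC, approximately).
        double TerminatorLatitude(double lon, double subsolarLon, double decl) {
            double h = lon - subsolarLon;  // hour angle
            return std::atan(-std::cos(h) / std::tan(decl));
        }

    Sampling TerminatorLatitude across all longitudes traces the curve, and which side is dark follows from the sign of the declination. Near the equinoxes tan(delta) approaches zero and the curve degenerates into a straight meridian, so that case needs clamping or special handling. This ignores refraction and the equation of time, which is usually acceptable for a world-clock map.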

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supply the transformations that turn the bind pose into the pose of the animated frame. As a small example:

        Animation {
            {Armature_001_Bone}
            AnimationKey {
                2;    // Position
                121;  // number of frames
                0;3; 0.000000, 0.000000, 0.000000;;,
                1;3; 0.000000, 0.000000, 0.005524;;,
                2;3; 0.000000, 0.000000, 0.022217;;,
                ...
            }
            AnimationKey {
                0;    // Quaternion Rotation
                121;
                0;4; -0.707107, 0.707107, 0.000000, 0.000000;;,
                1;4; -0.697332, 0.697332, 0.015710, 0.015710;;,
                2;4; -0.684805, 0.684805, 0.035442, 0.035442;;,
                ...
            }
            AnimationKey {
                1;    // Scale
                121;
                0;3; 1.000000, 1.000000, 1.000000;;,
                1;3; 1.000000, 1.000000, 1.000000;;,
                2;3; 1.000000, 1.000000, 1.000000;;,
                ...
            }
        }

    So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix from them (call it TransformA) and apply that matrix to the vertices controlled by Armature_001_Bone at their weights. So I'd pass TransformA into my shader and transform the vertex. Something like:

        vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x;

    where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is that when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies; still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in OpenGL, so any matrix transposes that might need to be applied should be considered. What data do I need, and how do I combine it, to apply .x animations to models?
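
    For what it's worth, the usual missing piece in exactly this situation is the bind-pose inverse: the per-frame keys give each bone's local transform relative to its parent, so they must be composed down the hierarchy and then multiplied by the bone's inverse bind-pose (offset) matrix before being uploaded to the shader. A hedged sketch of that composition, with hypothetical types:

        #include <cstddef>
        #include <vector>

        struct Matrix4 { float m[16] = {}; };  // column-major

        Matrix4 Mul(const Matrix4& a, const Matrix4& b) {
            Matrix4 r;
            for (int c = 0; c < 4; ++c)
                for (int row = 0; row < 4; ++row)
                    for (int k = 0; k < 4; ++k)
                        r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
            return r;
        }

        struct Bone {
            int     parent;       // -1 for the root
            Matrix4 localAnim;    // built from this frame's pos/rot/scale keys
            Matrix4 inverseBind;  // inverse of the bone's bind-pose world matrix
        };

        // Bones assumed sorted so parents precede children.
        std::vector<Matrix4> BuildSkinningMatrices(const std::vector<Bone>& bones) {
            std::vector<Matrix4> world(bones.size()), skin(bones.size());
            for (std::size_t i = 0; i < bones.size(); ++i) {
                world[i] = (bones[i].parent < 0)
                    ? bones[i].localAnim
                    : Mul(world[bones[i].parent], bones[i].localAnim);
                // inverseBind moves a bind-pose vertex into bone space;
                // the animated world matrix then moves it back out.
                skin[i] = Mul(world[i], bones[i].inverseBind);
            }
            return skin;
        }

    Applying the raw frame matrix alone, without the inverse bind and the parent chain, produces exactly the kind of twisted mesh described, since vertices are then rotated about the model origin rather than about their bones.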

  • Push or Pull Input Data In the Game Logic?

    - by Qua
    In the process of preparing my game for networking, I'm adding a layer of separation between the physical input (mouse/keyboard) and the actual game "engine"/logic. All input that has any relation to the game logic is wrapped inside action objects such as BuildBuildingAction. I was thinking of having an action-processing layer that would determine what to do with the input. This layer could then be set up either to pass the actions locally to the game engine or to send them via sockets to the network server, depending on whether the game is single- or multiplayer. In network games it makes sense for the player's actions to be sent to the server, but should the game logic be pulling (polling) the data through some sort of interface, or should the action-processing layer be pushing the actions onto an input queue in the game logic code?
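
    Either direction can work, but the common pattern is the push model: the action-processing layer appends to a queue owned by the game logic, and the logic drains the queue at one fixed point in its tick. That behaves identically whether the actions came from local input or from a socket. A minimal sketch (names hypothetical):

        #include <memory>
        #include <mutex>
        #include <queue>
        #include <utility>
        #include <vector>

        struct Action { virtual ~Action() = default; };  // e.g. BuildBuildingAction

        class ActionQueue {
        public:
            void Push(std::unique_ptr<Action> a) {          // input layer / network thread
                std::lock_guard<std::mutex> lock(mutex_);
                pending_.push(std::move(a));
            }
            std::vector<std::unique_ptr<Action>> Drain() {  // game logic, once per tick
                std::lock_guard<std::mutex> lock(mutex_);
                std::vector<std::unique_ptr<Action>> out;
                while (!pending_.empty()) {
                    out.push_back(std::move(pending_.front()));
                    pending_.pop();
                }
                return out;
            }
        private:
            std::mutex mutex_;
            std::queue<std::unique_ptr<Action>> pending_;
        };

    The mutex only matters if input or networking runs on another thread; in a single-threaded loop the same queue works without it. Draining at one point in the tick also gives the server a natural place to validate actions before they touch game state.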

  • Why can we recognize game engines?

    - by Bart van Heukelom
    About many games you can say "oh, that's the Unreal engine for sure" or "this was made by upgrading GTA 4". We can often recognize the engine used for a game just by looking at its graphics (disregarding menus and such). I'm wondering: why is this? All game engines use the same underlying 3D rendering technology, and different games usually have a distinct art style, so what's left to recognize?

  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using libgdx, and took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled map editor, and when my player hits one, I set that cell to null. Someone on this site once advised me not to do that (never use null), and I agreed. What other way is there? If I use layer.setCell(x, y) to set the cell to any other cell, even a transparent one, my player is stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try {
                    type = layer.getCell((int) tile.x, (int) tile.y)
                                .getTile().getProperties().get("tileType").toString();
                } catch (Exception e) {
                    System.out.print("Exception in Tiles Property" + e);
                    type = "nonbreakable";
                }
                // Let us destroy this cell
                if (("award".equals(type))) {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }
                // DOING THIS GIVES A BAD EFFECT
                if (("killer".equals(type))) {
                    // player.health--;
                    // layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20, 0));
                }
                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0) {
                    player.position.y = (tile.y - Player.height);
                }
            }
        }

    Is this the right approach, or should I create a separate Sprite class called Coin?

  • Updating scene graph in multithreaded game

    - by user782220
    In a game with a render thread and a game-logic thread, the game-logic thread needs to update the scene graph used by the render thread. I've read about ideas such as a queue of updates. Can someone describe, to a newbie at scene graphs, what kind of interface a scene graph exports? Presumably it is rather complicated. How, then, does a queue of updates get implemented in C++ in a way that can handle the complexity of the scene graph's interface while also being type-safe and efficient? Again, I'm a newbie at scene graphs and C++.
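
    One concrete, type-safe version of the update queue (a sketch, not the only design): instead of exporting the scene graph's whole interface across threads, the logic thread records closures against it, and the render thread applies the batch at a safe point in its frame, so the scene graph itself is only ever touched from the render thread:

        #include <functional>
        #include <mutex>
        #include <utility>
        #include <vector>

        class SceneGraph { /* nodes, transforms, attach/detach, ... */ };

        class UpdateQueue {
        public:
            using Update = std::function<void(SceneGraph&)>;

            void Push(Update u) {                    // called by the logic thread
                std::lock_guard<std::mutex> lock(mutex_);
                writes_.push_back(std::move(u));
            }
            void Apply(SceneGraph& graph) {          // called by the render thread
                std::vector<Update> batch;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    batch.swap(writes_);             // take the whole batch at once
                }
                for (auto& u : batch) u(graph);
            }
        private:
            std::mutex mutex_;
            std::vector<Update> writes_;
        };

    Usage looks like queue.Push([id, pos](SceneGraph& g) { g.SetNodePosition(id, pos); }); where SetNodePosition stands for whatever the scene graph actually exposes. The closures keep full type safety, and the queue API stays tiny no matter how rich the scene graph interface is.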

  • Why did the old 3D games have "jittery" graphics?

    - by dreta
    I've been playing MediEvil lately, and it got me wondering: what causes some of the old 3D games to have "flowing" graphics when moving? It's present in games like Final Fantasy VII and MediEvil, and I remember Dungeon Keeper 2 having the same thing in zoom mode; however, Quake 2 didn't have this "issue" and it's just as old. The resolution doesn't seem to be the problem; everything is rendered perfectly fine when you stand still. So is the game refreshing slowly, or does it have something to do with buffering?

  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time; just tell me in the comments if it's not clear enough. I'm making a "Defend Your Castle" type 2D game, where two players own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game (image not shown). The distance between both castles is much bigger in a real game, though; bigger than the screen's width, actually. Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure this camera is within the field area). I then move all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), the unit's position is moved by 5 units to the left, making it (0, 0) on the screen. At first, I simply used a static background with mountains and clouds (yeah, those are supposed to be mountains and clouds). Obviously, this looked awful: when you moved the camera, the background stayed immobile. Instead, I'd like to make a moving, "scrolling" background. But rather than making a background with the same width as the distance between the castles, I'd like to make one that is a little bit smaller (but still bigger than the screen's width). I thought this would create an effect of "distance" with the background (but it might just look awful, too). Here's the background I'm testing with (image not shown). I tried different ways, but none of them seems to work. I tried this:

        // find the ratio between the background and the field
        float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth;
        // move the background to the left
        float backgroundPositionX = -cam.Location.X * backgroundFieldRatio;

    When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500, while looking at the rightmost area the background is offset to the left by too big an amount, and we can see the black clear color behind it (image not shown). I hope I explained properly what I'm trying to achieve. Thank you for your time. Note: I didn't know what to look for on Google, so I thought I'd ask here.
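
    One likely fix, stated as an assumption about the intended effect rather than the original code: the background should scroll only over its surplus width relative to the screen, while the camera scrolls over the field's surplus width, so both right edges line up exactly when the camera reaches the far end:

        // Map camera range [0, field - screen] onto background range
        // [0, background - screen]; the result is the background draw offset.
        float ParallaxX(float camX, float fieldWidth, float bgWidth, float screenWidth)
        {
            float ratio = (bgWidth - screenWidth) / (fieldWidth - screenWidth);
            return -camX * ratio;
        }

    With fieldWidth = 1600, BackgroundTexture.Width = 1500 and, say, an 800-pixel screen, the ratio becomes 700/800 instead of 1500/1600: the background moves slightly slower than the camera, and the black clear color is never exposed at the right edge.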

  • C# creating a simple snake game

    - by Guy David
    I was thinking about creating a snake game with C#, so I ran some ideas through my head, and a few problems came up. How can I track, and draw in the correct location, the blocks that trail after the snake's head? If the snake is built of five blocks and the user starts going in a circle, how can I print the snake's body in the right locations? Also, how can I create an action that runs in the background and moves the snake forward no matter what the user does? What structure should my code have (code design structure)? This should be a console application, since it's the only framework I am familiar with. I am not looking for finished code; I want to really understand how it should work.
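
    On the "how do the body blocks follow the head" question, the standard trick is to store the snake as a double-ended queue of cells: each tick, push the new head position onto the front and pop the tail off the back, and every segment automatically occupies the cell the one ahead of it just left, circles included. A sketch in C++ (the same structure ports directly to C#, e.g. with a LinkedList):

        #include <deque>
        #include <utility>

        using Cell = std::pair<int, int>;  // (x, y) on the console grid

        std::deque<Cell> snake = { {5, 5}, {4, 5}, {3, 5} };  // head is front()

        void Step(int dx, int dy, bool ate) {
            Cell head = snake.front();
            snake.push_front({ head.first + dx, head.second + dy });
            if (!ate) snake.pop_back();   // growing = simply keep the tail
        }

    For the "runs no matter what the user does" part, a console game typically uses a fixed-timestep loop: every N milliseconds call Step() with the current direction, and poll the keyboard non-blockingly (Console.KeyAvailable in C#) to change direction between steps. That loop, the snake deque, a food spawner and a renderer are a reasonable first code structure.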

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool ( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube (images not shown). Here is the code I use to generate a uv atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;
            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v;
                vNorm = n;
                vUV = uv;
            }
            ~sVertexPosNormTex() {}
        };

        // create a light map texture to fill programatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code that fills the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture;
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh,
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];

                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face;
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red (for demo)
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }
                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0,
                        v1 = t2 - t0,
                        v2 = p  - t0;

            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    (Screenshot and lightmap images not shown.)

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel-center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel-center coordinates are:

        float halfPixel = 0.5 / 128 = 0.00390625;
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?

  • Long delays in Unity3D substance generation

    - by Josh Buhler
    We're currently working on an iOS/Android project in Unity3D, and we're seeing incredibly long times for generating substances between testing runs. We can run the game, but once we shut down playback, Unity begins to re-import all of the substances built using Substance Designer. As we've got a lot of these in our game, it's starting to lead to five-minute delays between testing runs just to test a small change. Any suggestions or parameters we should check that could prevent Unity from regenerating these substances every time? Shouldn't it be caching these things somewhere?

  • Should I always be checking every neighbor when building voxel meshes?

    - by Raven Dreamer
    I've been playing around with Unity3D, seeing if I can make a voxel-based engine out of it (a la Castle Story, or Minecraft). I've dynamically built a mesh from a volume of cubes, and now I'm looking into reducing the number of vertices built into each mesh, as right now I'm "rendering" vertices and triangles for cubes that are fully hidden within the larger voxel volume. The simple solution is to check each of the 6 directions for each cube, and only add the face to the mesh if the neighboring voxel in that direction is "empty". Parsing a voxel volume is O(N^3), and checking the 6 neighbors keeps it at O(7*N^3) = O(N^3). The one thing this results in is a lot of redundant calls, as the same voxel will be polled up to 7 times just to build the mesh. My question, then, is: is there a way to parse a cubic volume (and find which faces have neighbors) with fewer redundant calls? And perhaps more importantly, does it matter (as the big-O complexity is the same in both cases)?
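
    One standard trimming, offered as a sketch rather than the only answer: every interior face is shared by exactly two cells, so it suffices to test only the +X, +Y and +Z neighbor of each cell and emit a face whenever exactly one of the pair is solid. Each pair is then examined once instead of twice, roughly halving the polls; the complexity stays O(N^3), which is unavoidable since every voxel must be visited at least once.

        #include <vector>

        // voxels: flat N*N*N array, true = solid. Sketch only; real code
        // would also emit the faces on the volume's outer boundary.
        void EmitSharedFaces(const std::vector<bool>& voxels, int N) {
            auto at = [&](int x, int y, int z) { return voxels[(z * N + y) * N + x]; };
            for (int z = 0; z < N; ++z)
                for (int y = 0; y < N; ++y)
                    for (int x = 0; x < N; ++x) {
                        bool solid = at(x, y, z);
                        // Look "forward" only; which of the two cells gets the
                        // face depends on which one is the solid one.
                        if (x + 1 < N && solid != at(x + 1, y, z)) { /* emit X face */ }
                        if (y + 1 < N && solid != at(x, y + 1, z)) { /* emit Y face */ }
                        if (z + 1 < N && solid != at(x, y, z + 1)) { /* emit Z face */ }
                    }
        }

    As to whether it matters: in practice the win usually comes less from halving the neighbor tests and more from memory layout (a flat array with linear index math, as above, instead of per-voxel object lookups), since the pass is memory-bound.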

  • Importing a Windows project into Android using cocos2d-x

    - by Ef Es
    What I am trying to do today is to import a full project to Android, but I have seen no tutorials available for that. My approach was to create a new Android project, copy all the classes and resources into the folders and call ./build_native.sh, but I get errors because most of the files are not included in the build. Opening Android.mk, I can see why:

        LOCAL_SRC_FILES := AppDelegate.cpp \
                           HelloWorldScene.cpp

    are the only files linked. Should I manually modify the makefile, or can it be automated in some way I don't know about? Thank you. UPDATE: I manually added all files and headers to the makefile, and now I get errors linking the Box2D and CocosDenshion libraries.
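
    A common way to automate the file list in cocos2d-x templates (a sketch, assuming the default layout where Android.mk lives in proj.android/jni and the sources in Classes/) is to glob the Classes directory with GNU Make's wildcard function:

        # Collect every .cpp under Classes/, then strip the LOCAL_PATH prefix,
        # since LOCAL_SRC_FILES expects paths relative to it.
        FILE_LIST := $(wildcard $(LOCAL_PATH)/../../Classes/*.cpp)
        LOCAL_SRC_FILES := hellocpp/main.cpp
        LOCAL_SRC_FILES += $(FILE_LIST:$(LOCAL_PATH)/%=%)

    New files are then picked up on every build without touching the makefile. For the Box2D/CocosDenshion link errors, cocos2d-x 2.x templates typically also need the extra static libraries declared and their modules imported at the bottom of Android.mk, along the lines of LOCAL_WHOLE_STATIC_LIBRARIES += box2d_static cocosdenshion_static plus $(call import-module,Box2D) and $(call import-module,CocosDenshion/android); exact module names vary with the cocos2d-x version, so treat these as a starting point.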
