Search Results

Search found 216 results on 9 pages for 'meshes'.

Page 5/9 | < Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >

  • a flexible data structure for geometries

    - by AkiRoss
    What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices and edges), and that have to be "studied" in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles "visible" from a given point in space)? I need to consider multiple aspects of the mesh: geometry, topology and spatial information. The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays of vertices and arrays of indices, but I do not love adding and removing vertices from them. Also, using arrays totally ignores spatial and topological information, which I may need when studying the mesh. So, I thought about using custom doubly-linked list data structures, but I believe doing so will require me to copy the data to array buffers before going to the GPU. I also thought about using a BST, but I am not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask.
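
    A half-edge structure for the topology, paired with a spatial index (BVH or octree) for the ray and visibility queries, is the classic answer to this combination of requirements: local edits stay O(1), adjacency walks are cheap, and the structure can still be flattened into plain vertex/index arrays when it is time to upload to the GPU. A minimal C# sketch of the half-edge record (type names are illustrative, not from the post):

        // Half-edge record: each directed edge knows its twin, its successor
        // around the face, and the vertex/face it touches.
        class HalfEdge
        {
            public int VertexIndex;   // vertex this half-edge points to
            public HalfEdge Twin;     // the oppositely oriented half-edge
            public HalfEdge Next;     // next half-edge around the same face
            public int FaceIndex;     // face lying on this half-edge's left side
        }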

    Read the article

  • Movement on the X and Z axes is combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left, and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X, left and forwards! setPosition(getPosition().X + 2/*times deltatime*/, getPosition().Y, getPosition().Z); I was astonished that XNA doesn't have its own setPosition function, so I made a parent class for all objects with a setPosition and Draw function. SetPosition simply edits a variable "mPosition" and passes it to the common draw function: // Copy any parent transforms. Matrix[] transforms = new Matrix[block.Bones.Count]; block.CopyAbsoluteBoneTransformsTo(transforms); // Draw the model. A model can have multiple meshes, so loop. foreach (ModelMesh mesh in block.Meshes) { // This is where the mesh orientation is set, as well // as our camera and projection. foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y)) * Matrix.CreateTranslation(mPosition); effect.View = game1.getView(); effect.Projection = game1.getProjection(); } // Draw the mesh, using the effects set above. mesh.Draw(); } I tried to work it out by attempting to increment and decrement the Z axis, but nothing happens?! So using the X axis changes the object's X and Z axes, but changing the Z does nothing. Great. So how do I separate the X and Z axis movement?
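
    A hedged guess at the cause: if the camera (or a parent bone transform) is rotated, world-space X is no longer screen-space "right", so moving along raw X drifts diagonally. One way to rule that out is to move along the camera's own axes, derived from the post's game1.getView() (deltaTime handling assumed):

        // Invert the view matrix to get the camera's world transform, then use
        // its Right/Up vectors as the movement axes instead of raw world X/Y.
        Matrix cameraWorld = Matrix.Invert(game1.getView());
        Vector3 right = cameraWorld.Right;   // screen-space "right" in world terms
        Vector3 up    = cameraWorld.Up;
        mPosition += right * 2f /* * deltaTime */;   // instead of setPosition(X + 2, ...)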

    Read the article

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code would be fantastic. Right now the code only renders a teapot which I have assigned after creating it in 3DS Max. 3dsloader.cpp #include "3dsloader.h" Object::Object(std::string filename) { m_TotalFaces = 0; m_model = lib3ds_file_load(filename.c_str()); // If loading the model failed, we throw an exception if(!m_model) { throw strcat("Unable to load ", filename.c_str()); } // set properties of texture coordinate generation for both x and y coordinates glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); // if not already enabled, enable texture generation if(! glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S); if(! glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T); } Object::~Object() { if(m_model) // if the file isn't freed yet lib3ds_file_free(m_model); //free up memory glDisable(GL_TEXTURE_GEN_S); glDisable(GL_TEXTURE_GEN_T); } void Object::GetFaces() { m_TotalFaces = 0; Lib3dsMesh * mesh; // Loop through every mesh. for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { // Add the number of faces this mesh has to the total number of faces. m_TotalFaces += mesh->faces; } } void Object::CreateVBO() { assert(m_model != NULL); // Calculate the number of faces we have in total GetFaces(); // Allocate memory for our vertices and normals Lib3dsVector * vertices = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsVector * normals = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3]; Lib3dsMesh * mesh; unsigned int FinishedFaces = 0; // Loop through all the meshes for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces*3]); // Loop through every face for(unsigned int cur_face = 0; cur_face < mesh->faces;cur_face++) { Lib3dsFace * face = &mesh->faceL[cur_face]; for(unsigned int i = 0;i < 3;i++) { memcpy(&texCoords[FinishedFaces*3 + i], mesh->texelL[face->points[ i ]], sizeof(Lib3dsTexel)); memcpy(&vertices[FinishedFaces*3 + i], mesh->pointL[face->points[ i ]].pos, sizeof(Lib3dsVector)); } FinishedFaces++; } } // Generate a Vertex Buffer Object and store it with our vertices glGenBuffers(1, &m_VertexVBO); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW); // Generate another Vertex Buffer Object and store the normals in it glGenBuffers(1, &m_NormalVBO); glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW); // Generate a third VBO and store the texture coordinates in it. 
glGenBuffers(1, &m_TexCoordVBO); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW); // Clean up our allocated memory delete vertices; delete normals; delete texCoords; // We no longer need lib3ds lib3ds_file_free(m_model); m_model = NULL; } void Object::applyTexture(const char*texfilename) { float imageWidth; float imageHeight; glGenTextures(1, & textureObject); // allocate memory for one texture textureObject = SOIL_load_OGL_texture(texfilename,SOIL_LOAD_AUTO,SOIL_CREATE_NEW_ID,SOIL_FLAG_MIPMAPS); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&imageWidth); glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&imageHeight); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //give the best result for texture minification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,imageWidth,imageHeight,0,GL_RGB,GL_UNSIGNED_BYTE,& textureObject); } void Object::Draw() const { // Enable vertex, normal and texture-coordinate arrays. glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Bind the VBO with the normals. glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); // The pointer for the normals is NULL which means that OpenGL will use the currently bound VBO. glNormalPointer(GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glTexCoordPointer(2, GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glVertexPointer(3, GL_FLOAT, 0, NULL); // Render the triangles. glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } 3dsloader.h #include "main.h" #include "lib3ds/file.h" #include "lib3ds/mesh.h" #include "lib3ds/material.h" class Object { public: Object(std::string filename); virtual ~Object(); virtual void Draw() const; virtual void CreateVBO(); void applyTexture(const char*texfilename); protected: void GetFaces(); unsigned int m_TotalFaces; Lib3dsFile * m_model; Lib3dsMesh* Mesh; GLuint textureObject; GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO; }; Called in the main cpp file with: CreateVBO, applyTexture and Draw (pretty simple, how ironic) and that's it. Please help me, forum :)

    Read the article

  • Component based game engine issue

    - by Mathias Hölzl
    We are just switching from a hierarchy-based game engine to a component-based game engine. My problem is that when I load a model it has a hierarchy of meshes, and the way I understand it, an entity in a component-based system cannot have multiple components of the same type; but I need a "meshComponent" for each mesh in a model. How could I solve this problem? On this site they implemented a component-based game engine: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/
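
    One hedged way to keep the one-component-per-type rule is to let the component own the whole list rather than a single mesh (the type names below are illustrative):

        // A single MeshComponent wrapping every mesh in the model's hierarchy.
        class MeshComponent : Component
        {
            public List<Mesh> Meshes = new List<Mesh>();   // one entry per mesh in the model
        }

    The usual alternative is to give each mesh its own child entity, each with exactly one MeshComponent, parented to the model's root entity through its transform.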

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: they were sometimes only partially imported. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only part of its meshes. "Sometimes" means that you get this error only for some files. The results of my research are below. For example, I have a 10Mb binary FBX file with a model that looks like: And when I load it, the resulting Model instance contains only part of the meshes and looks like: Because models from other files imported normally, I think it's a "bad format" file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in FBX 2012 format. OK, I need to convert it to the 2006 format. That can be done using Autodesk FBX Converter 2012.1. I tried to convert it to other versions of the FBX format, but without success. I also tried to import my FBX file into 3D MAX, and it imported correctly. Then I exported the model from 3D MAX, which generated another FBX, which I added to my XNA project. After that I got the full model, and it rendered well! So, the internal data structure of an FBX file matters more for correct XNA import than its version! Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX, you should use the Autodesk FBX SDK. This way you can manually read the content of an FBX file and use it any way you like. Then I tried to convert my source FBX file to DAE Collada, and the resulting DAE file back to FBX, using FBX Converter (FBX –> DAE –> FBX). The resulting FBX file can be imported normally.   Conclusion: whether the XNA FbxImporter works correctly doesn't depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure is much more important. To make an FBX "readable" for the XNA Importer you can use a double conversion like FBX -> Collada -> FBX. You can also use the FBX SDK to manually load data from FBX. P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools like: FBX Explorer, which shows you the structure of an FBX file; FBX Viewer, which renders the content of an FBX and provides basic interaction like model movement and zoom; and FBX Take Manager, which allows you to work with embedded animations.

    Read the article

  • My grid based collision detection is slow

    - by Fibericon
    Something about my implementation of a basic 2x4 grid for collision detection is slow - so slow in fact, that it's actually faster to simply check every bullet from every enemy to see if the BoundingSphere intersects with that of my ship. It becomes noticeably slow when I have approximately 1000 bullets on the screen (36 enemies shooting 3 bullets every .5 seconds). By commenting it out bit by bit, I've determined that the code used to add them to the grid is what's slowest. Here's how I add them to the grid: for (int i = 0; i < enemy[x].gun.NumBullets; i++) { if (enemy[x].gun.bulletList[i].isActive) { enemy[x].gun.bulletList[i].Update(timeDelta); int bulletPosition = 0; if (enemy[x].gun.bulletList[i].position.Y < 0) { bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450); } else { bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450) + 4; } GridItem bulletItem = new GridItem(); bulletItem.index = i; bulletItem.type = 5; bulletItem.parentIndex = x; if (bulletPosition > -1 && bulletPosition < 8) { if (!grid[bulletPosition].Contains(bulletItem)) { for (int j = 0; j < grid.Length; j++) { grid[j].Remove(bulletItem); } grid[bulletPosition].Add(bulletItem); } } } } And here's how I check if it collides with the ship: if (ship.isActive && !ship.invincible) { BoundingSphere shipSphere = new BoundingSphere( ship.Position, ship.Model.Meshes[0].BoundingSphere.Radius * 9.0f); for (int i = 0; i < grid.Length; i++) { if (grid[i].Contains(shipItem)) { for (int j = 0; j < grid[i].Count; j++) { //Other collision types omitted else if (grid[i][j].type == 5) { if (enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive) { BoundingSphere bulletSphere = new BoundingSphere(enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].position, enemy[grid[i][j].parentIndex].gun.bulletModel.Meshes[0].BoundingSphere.Radius); if (shipSphere.Intersects(bulletSphere)) { ship.health -= enemy[grid[i][j].parentIndex].gun.damage; enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive = false; grid[i].RemoveAt(j); break; //no need to check other bullets } } else { grid[i].RemoveAt(j); } } What am I doing wrong here? I thought a grid implementation would be faster than checking each one.
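
    A hedged sketch of the usual fix: rebuild the grid from scratch each frame instead of searching for stale items, so the per-bullet Contains and Remove calls (each a linear scan over a List) disappear, along with the per-frame GridItem allocations if the items are pooled:

        // Clear every cell once per frame, then re-insert only active bullets.
        for (int j = 0; j < grid.Length; j++)
            grid[j].Clear();                         // O(cells), once per frame

        for (int i = 0; i < enemy[x].gun.NumBullets; i++)
        {
            if (!enemy[x].gun.bulletList[i].isActive) continue;
            // same cell math as in the post; CellOf is a hypothetical helper
            int bulletPosition = CellOf(enemy[x].gun.bulletList[i].position);
            if (bulletPosition > -1 && bulletPosition < 8)
                grid[bulletPosition].Add(bulletItems[x][i]);  // pre-allocated items, no scans
        }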

    Read the article

  • loading 3d model data into buffers

    - by mulletdevil
    I am using assimp to load 3D model data. I have noticed that each loaded model is made up of different meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? From looking through the index data that is loaded, it seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX9. Thank you, Mark
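
    Either works; a common middle ground is one vertex/index buffer pair for the whole model plus a per-mesh draw range, since DirectX9's DrawIndexedPrimitive takes a BaseVertexIndex that keeps each mesh's indices valid without rewriting them. A hedged C# sketch of the bookkeeping (DrawRange and the count properties are illustrative):

        int baseVertex = 0, baseIndex = 0;
        var drawCalls = new List<DrawRange>();
        foreach (var mesh in importedMeshes)        // meshes as loaded by Assimp
        {
            drawCalls.Add(new DrawRange(baseVertex, baseIndex, mesh.IndexCount));
            baseVertex += mesh.VertexCount;         // pass as BaseVertexIndex at draw time
            baseIndex  += mesh.IndexCount;
        }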

    Read the article

  • XNA ModelMesh.Draw vs GraphicsDevice.DrawIndexedPrimitives

    - by cubrman
    I am using XNA 4.0 and I wonder whether drawing models with multiple meshes is better done by filling the vertex and index buffers first and calling GraphicsDevice.DrawIndexedPrimitives(), or by simply using good ol' foreach(...) {ModelMesh.Draw()}. Is it possible to add data to vertex/index buffers at all, in order to pack all the models in the scene into them and then call Draw only once per frame? I would appreciate a link to an in-depth explanation. Thanks.

    Read the article

  • Weird appearance for a 3D XNA ground

    - by Belos
    I wanted to add a ground so I can know the position of a helicopter in the world. But the ground appeared in a weird way: http://i.stack.imgur.com/yTSuW.jpg The ground had the following texture: http://i.stack.imgur.com/pdpxB.png EDIT: Sorry, I forgot to post the code: public class ImportModel { public Vector3 Position { get; set; } public Vector3 Rotation { get; set; } public Vector3 Scale { get; set; } Model Model; Matrix[] modeltransforms; GraphicsDevice GraphicDevice; ContentManager Content; BoundingSphere sphere; bool boundingimplemented = false; public ImportModel(string model, GraphicsDevice gd, ContentManager cm, Vector3 position, Vector3 rot, Vector3 sca) { GraphicDevice = gd; Content = cm; Position = position; Rotation = rot; Scale = sca; Model = Content.Load<Model>(model); modeltransforms = new Matrix[Model.Bones.Count]; Model.CopyAbsoluteBoneTransformsTo(modeltransforms); } public void Draw(Camera camera) { Matrix baseworld = Matrix.CreateScale(Scale) * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) * Matrix.CreateTranslation(Position); foreach (ModelMesh mesh in Model.Meshes) { Matrix localworld = modeltransforms[mesh.ParentBone.Index] * baseworld; foreach (ModelMeshPart meshpart in mesh.MeshParts) { BasicEffect effect = (BasicEffect)meshpart.Effect; effect.World = localworld; effect.View = camera.View; effect.Projection = camera.Projection; effect.EnableDefaultLighting(); } mesh.Draw(); } } public BoundingSphere BoundingSphere { get { if (!boundingimplemented) { foreach (ModelMesh mesh in Model.Meshes) { BoundingSphere transformed = mesh.BoundingSphere.Transform( modeltransforms[mesh.ParentBone.Index]); sphere = BoundingSphere.CreateMerged(sphere, transformed); } Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transforme = sphere; transforme = transforme.Transform(worldTransform); return transforme; } else { Matrix worldTransform = Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Position); BoundingSphere transformed = sphere; transformed = transformed.Transform(worldTransform); return transformed; } } } } Then I call the class from the Game1 class: ImportModel ground = new ImportModel("ground", GraphicsDevice, Content, Vector3.Zero, Vector3.Zero, new Vector3(20f)); EDIT2: This is how the scene looks from the top: i.stack.imgur.com/Hs983.jpg

    Read the article

  • UDK: Toggle Material on Brushes?

    - by user50300
    I'm attempting to change / toggle the material that's applied to a BSP brush. I've seen that it's possible on static meshes, but I can't seem to get it to work on brushes. Below is an example scenario. The ground (BSP brush) is covered in a stone material. The player hits a trigger. A new "moss" material is applied to the brush, removing the old stone material. Any assistance you could offer would be greatly appreciated.

    Read the article

  • Unreal Tournament 3 vs UDK: What Should I Choose?

    - by Matt Christian
    Many people in the mod community were very excited to see the release of the Unreal Developer Kit (UDK) a few months ago.  Along with generating excitement in a very dedicated community, it also introduced many new modders into a flourishing area of indie development.  However, since UDK is free, most beginners jump right into UDK, which is OK, though you might just benefit more from purchasing a shelf-copy of Unreal Tournament 3. UDK UDK is a free full version of UnrealEd (the editor environment used to create games like Gears of War 1/2, Bioshock 1/2, and of course Unreal Tournament 3).  The editor gives you all the features of the editor from the shelf-copy of the game plus some refinements in many of the tools.  (One of the first things you'll find about UnrealEd is that it's a collection of tools grouped into the same editor so it really isn't a single 'tool') Interestingly enough, Epic is allowing you to sell any game made in UDK with a few catches.  First off, you must purchase a license for your game (which, I THINK is approximately $99 starting).  Secondly, you must pay 25% of all profits for the first $5,000 of your game revenue to them (about $1250).  Finally, you cannot use any of the 'media' provided in UDK for your game.  UDK provides sample meshes, textures, materials, sounds, and other sample pieces of media pulled (mostly) from Unreal Tournament 3. The final point here will really determine whether you should use UDK.  There is a very small amount of media provided in UDK for someone to go in and begin creating levels without first developing their own meshes, textures, and other media.  Sure, you can slap together a few unique levels, though you will end up finding yourself restricted to the same items over and over and over.  This is absolutely how professional game development is; you are 'given' (typically licensed or built in-house) an engine/editor and you begin creating all the content for the game and placing it.  UDK is aimed toward those who really want to build their game content from scratch with a currently existing engine.  It is not suited for someone who would like to simply build levels and quick mods without learning external 3D programs and image editing software. Unreal Tournament 3 Unless you have a serious grudge against FPS's, Epic, or your computer sucks, there really is no reason not to own this game for PC.  You can pick it up on Steam or Amazon for around $20 brand new.  Not only are you provided with a full single-player and multiplayer game, but you are given the entire UnrealEd 3.0 including all of the content used to build UT3.  If you want to start building levels and mods quickly for UT3, you should absolutely pick up a shelf-copy. However, as off-the-shelf UT3 is a few years old now, the tools have not been updated for quite a while.  Compared to UDK, the menus are more difficult to navigate through and take more time getting used to.  Since UDK is updated almost every month, there are new inclusions to the editor that may not be in UT3 (including the future addition of 3D!).  I haven't worked enough with shelf UT3 to see if there are more features in UDK or if they both feature the same stuff in different forms, however you should remember that the Unreal Engine 3.0 has undergone numerous upgrades between its launch and Gears of War 2 (in fact, Epic had a conference to show off what changed just between the Gears of War games). 
Since UT3 has much more core content, someone who wants to focus on level editing or modding the core UT3 game may find their needs better suited with an off-the-shelf copy of UT3.  If that level designer has a team that is generating custom assets, they may be better off with UDK. The choice is now yours...

    Read the article

  • 3D Spherical Terrain with an 8-mesh sphere. The edges of the mesh are obviously seen and I'm not sure why

    - by Justin808
    Hi :) I'm working in Unity3D, but my issue is with 3D meshes. I'm hoping someone here can help or point me in the right direction. I have 2 versions of the code, http://www.pasteit4me.com/695002 (old) and http://www.pasteit4me.com/690003 (new). The old code makes a single-mesh sphere and creates a terrain on it. The new code makes an 8-mesh sphere and creates a terrain on it. In the new version the edges of the meshes are obviously seen, and I'm not sure why. It looks like the edges are adjusted too much, almost 2-3 times more than they should have been. GenerateB() in the old code and Generate() in the new code create the sphere. MakeTerrain() in both creates the terrain. If I don't run the MakeTerrain() function, the new sphere looks like a solid mesh. I'm not sure where to start looking in the MakeTerrain() function in the new code to solve the issue :-/ Any ideas? An image of the issue is at http://img28.imageshack.us/img28/3784/screenshot20100611at850.png.
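
    Without the linked code to hand, the usual cure for seams like this is to derive each vertex's displacement from its shared position on the unit sphere, so a border vertex gets exactly the same height no matter which of the 8 meshes computes it. A hedged Unity/C# sketch (Noise3D stands in for whatever height function MakeTerrain uses):

        // Displace along the sphere normal using position-based noise, so
        // vertices that coincide on a seam receive identical heights.
        Vector3 dir = vertices[i].normalized;             // same on both sides of a seam
        float height = baseRadius + Noise3D(dir * frequency);
        vertices[i] = dir * height;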

    Read the article

  • determining if a value is in range with the 0=360 degree problem.

    - by Raven
    Hi, I am making a piece of code for a DirectX app. Its purpose is to not show faces that are not visible. Normally this would just be done with the Z-buffer, but I'm making many moves and rotations of the mesh, so I would like to skip them and save computing power. I will describe this with a cube. You are looking from the front, so you see just one face and you don't need to rotate the 5 that are left. If one side of the cube were made of 100*100 meshes, it would be great not to have to transform 50k meshes that you really don't need. So I have stored the X,Y,Z rotation of the camera (the Z rotation I'm not using), and also the X,Y,Z rotation of the faces. In this simplified cube I would see the faces that make this statement true: cRot //camera rotation in degrees oRot //face rotation in degrees if(oRot.x > cRot.x-90 && oRot.x < cRot.x+90 && oRot.y > cRot.y-90 && oRot.y < cRot.y+90) But here comes a problem. If I rotate around, the camera can get to a value of 330, for example. In this state, I would see the front and right sides of the cube. The right side has rotation 270, so that's all right in the IF statement. The problem is with the 0 rotation of the front face, which is also 360 degrees. So my question is how to make this statement work, because when I use modulo, it fails for that right side, and this way it won't work for 0=360.
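
    The standard trick is to compare wrapped angle differences rather than raw ranges, so 0 and 360 collapse to the same heading. A hedged C# sketch (field names taken from the post):

        // Smallest signed difference between two angles, always in (-180, 180].
        static float AngleDelta(float a, float b)
        {
            float d = (a - b) % 360f;       // C#: remainder keeps the dividend's sign
            if (d > 180f) d -= 360f;
            else if (d <= -180f) d += 360f;
            return d;
        }

        // The visibility test then becomes wrap-safe: 330 vs 0 gives a delta of 30.
        bool visible = Math.Abs(AngleDelta(oRot.x, cRot.x)) < 90f
                    && Math.Abs(AngleDelta(oRot.y, cRot.y)) < 90f;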

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PrelightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load<Effect>(@"Effects\PPModel"); models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List<CModel>(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = <BasicTexture>; addressU = wrap; addressV = wrap; minfilter =
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = <LightTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea what's wrong... Googling the web, I found that this tutorial may have some bugs, but I don't know if it's the light model's fault (the sphere), or a shader, or the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Cutting objects and applying texture to cut. Unity3d/C#

    - by Timothy Williams
    Basically what I'm trying to do is figure out how to calculate realtime cutting of objects, and apply a texture to the cut. I found some good scripts, but most of them have been abandoned and aren't really fully working yet. Applying textures: http://forum.unity3d.com/threads/75949-Mesh-Real-Cutting?highlight=mesh+real+cutting Cutting: http://forum.unity3d.com/threads/78594-Object-Cutter Another (Free) Cutter (Also, I'm not entirely sure how this one will handle cutting complex meshes): http://forum.unity3d.com/threads/69992-fake-slicer?p=449114&viewfull=1#post449114 My plan as of right now is to combine links 1 & 2 or 1 & 3 programming-wise. What I'm asking for here is any advice on how to proceed (links to asset store packages, or other code that shows how to accomplish something complex like this).
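
    For the cutting half, the core of most such scripts is the same small routine: classify each triangle's vertices against the cut plane, split spanning triangles at the two edge intersections, and collect the new points into the loop that later receives the cap texture. A hedged Unity/C# sketch of that step:

        // Signed distance tells which side of the plane a vertex is on; a sign
        // change along an edge means the edge crosses the cut.
        float Side(Vector3 v, Plane plane) { return plane.GetDistanceToPoint(v); }

        // Intersection point of an edge (p, q) with the plane; dp and dq have
        // opposite signs here, so the Lerp parameter lands in (0, 1).
        Vector3 EdgeCut(Vector3 p, Vector3 q, Plane plane)
        {
            float dp = Side(p, plane), dq = Side(q, plane);
            return Vector3.Lerp(p, q, dp / (dp - dq));
        }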

    Read the article

  • Disabling depth write trashes the frame buffer on some GPUs

    - by EboMike
    I sometimes disable depth buffer writing via glDepthMask(GL_FALSE) during the alpha rendering of a frame. That works perfectly fine on some GPUs (like the Motorola Droid's PowerVR), but on the HTC EVO with the Adreno GPU for example, I end up with the frame buffer being complete garbage (I see traces of the meshes I rendered somewhere, but the entire screen is mostly trashed). If I force glDepthMask to be true the entire time, everything works fine. I need glDepthMask to be off during parts of the alpha rendering. What can cause the framebuffer to get destroyed by turning the depth writing off? I do clear the depth buffer initially, and the majority of the screen has pixels rendered with depth writing turned on first before I do additional drawing with it turned off.

    Read the article

  • Entity/Component based engine rendering separation from logic

    - by Denis Narushevich
    I noticed in Unity3D that each gameObject (entity) has its own renderer component; as far as I understand, such a component handles rendering logic. I wonder if it is common practice in entity/component based engines for a single entity to have renderer components and logic components, such as position and behavior, altogether in one box? Such an approach sounds odd to me; in my understanding the entity itself belongs to the logic part and shouldn't contain any render-specific things inside. With such an approach it is impossible to swap renderers without rewriting all those customized renderers. The way I would do it is that an entity would contain only logic-specific components, like AI, transform and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to all objects that are visible to the camera. And in order to render all that stuff I would have to pass the Camera reference to a Renderer class and render all the sprites and meshes of the visible entities. Is such an approach somehow wrong?
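
    A hedged sketch of exactly that split, with entities as pure data and rendering done by a system on the other side of the fence (Component, Entity, Camera and IRenderer are illustrative stand-ins):

        // Logic side: a plain data component naming the mesh to use.
        class MeshReference : Component { public string MeshId; }

        // Render side: walks the camera's visible set and submits draw requests,
        // so the renderer behind IRenderer can be swapped freely.
        class RenderSystem
        {
            public void Draw(Camera camera, IRenderer renderer)
            {
                foreach (Entity e in camera.VisibleEntities)   // culling done elsewhere
                {
                    MeshReference mesh = e.Get<MeshReference>();
                    Transform tf = e.Get<Transform>();
                    if (mesh != null)
                        renderer.Submit(mesh.MeshId, tf.WorldMatrix);
                }
            }
        }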

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :) a regular shader to show how it would look without tessellation and a text shader to display debug-info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug-info using the text shader. Since implementing the tessellation shader the text shader doesn't work anymore. When I activate the DebugText (rendered using the text-shader) my screen goes black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked whether my key inputs interrupt at the wrong time (during render or so..), but that seems to be ok Testing Procedure Regular Shader without text shader Add text shader to Regular Shader by key input (works now, I built the text shader back to only vertex and pixel shader) (something with the z buffer is still wrong...) Remove text shader, then change shader to Tessellation Shader by key input Then if I add the Text Shader or switch back to the Regular Shader Switching/Render Shader Here is the code snippet from the Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader": if(m_useTessellationShader) { // Render the model using the tesselation shader ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT); } else { // todo: loaded model depends on distance to camera // Render the model using the light shader. ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), worldMatrix, viewMatrix, projectionMatrix); } And here is the code snippet from the Mesh.cpp where I choose the topology according to the boolean "useTessellationShader": // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer. void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader) { unsigned int stride; unsigned int offset; // Set vertex buffer stride and offset. stride = sizeof(VertexType); offset = 0; // Set the vertex buffer to active in the input assembler so it can be rendered. deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset); // Set the index buffer to active in the input assembler so it can be rendered. 
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0); // Check which Shader is used to set the appropriate Topology // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles. if(useTessellationShader) { deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST); }else{ deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); } return; } RenderShader Could there be a problem using sometimes only vertex and pixel shaders and, after switching, using vertex, hull, domain and pixel shaders? Here is a little overview of my architecture: TextClass: uses font.vs and font.ps deviceContext->VSSetShader(m_vertexShader, NULL, 0); deviceContext->PSSetShader(m_pixelShader, NULL, 0); deviceContext->PSSetSamplers(0, 1, &m_sampleState); RegularShader: uses vertex.vs and pixel.ps deviceContext->VSSetShader(m_vertexShader, NULL, 0); deviceContext->PSSetShader(m_pixelShader, NULL, 0); deviceContext->PSSetSamplers(0, 1, &m_sampleState); TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps deviceContext->VSSetShader(m_vertexShader, NULL, 0); deviceContext->HSSetShader(m_hullShader, NULL, 0); deviceContext->DSSetShader(m_domainShader, NULL, 0); deviceContext->PSSetShader(m_pixelShader, NULL, 0); deviceContext->PSSetSamplers(0, 1, &m_sampleState); ClearState I'd like to switch between 2 shaders and it seems they have different context parameters, right? In the ClearState method it says it resets the following params to NULL: I found the following in my Direct3D class: depth-stencil state - m_deviceContext->OMSetDepthStencilState rasterizer state - m_deviceContext->RSSetState(m_rasterState); blend state - m_device->CreateBlendState viewports - m_deviceContext->RSSetViewports(1, &viewport); I found the following in every shader class: input/output resource slots - deviceContext->PSSetShaderResources shaders - deviceContext->VSSetShader to deviceContext->PSSetShader input layouts - device->CreateInputLayout sampler state - device->CreateSamplerState These two I didn't understand; where can I find them? predications - ? scissor rectangles - ? Do I need to store them all locally so I can switch between them? Because it doesn't feel right to reinitialize the Direct3D and the shaders on every switch (key input)?!
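
    A hedged first suspect, given these symptoms: the hull and domain stages stay bound after the tessellation shader has run, and the regular/text shaders never unbind them, so the pipeline keeps expecting control-point patches while being fed triangle lists. Explicitly clearing the extra stages when activating any non-tessellation shader would rule this out (reusing the calls from the overview above):

        // In the regular and text shaders' render functions:
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->HSSetShader(NULL, NULL, 0);   // unbind the hull stage
        deviceContext->DSSetShader(NULL, NULL, 0);   // unbind the domain stage
        deviceContext->PSSetShader(m_pixelShader, NULL, 0);
        deviceContext->PSSetSamplers(0, 1, &m_sampleState);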

    Read the article

  • Xna model parts are overlying others

    - by Federico Chiaravalli
    I am trying to import into XNA an .fbx model exported with Blender. Here is my drawing code: public void Draw() { Matrix[] modelTransforms = new Matrix[Model.Bones.Count]; Model.CopyAbsoluteBoneTransformsTo(modelTransforms); foreach (ModelMesh mesh in Model.Meshes) { foreach (BasicEffect be in mesh.Effects) { be.EnableDefaultLighting(); be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation; be.View = GameCamera.View; be.Projection = GameCamera.Projection; } mesh.Draw(); } } The problem is that when I start the game some model parts are overlying others instead of being behind them. I've tried to download other models from the internet, but they have the same problem.
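
    A hedged guess, assuming XNA 4: any 2D drawing (e.g. SpriteBatch) leaves depth reads and writes disabled, so meshes drawn afterwards paint over nearer ones regardless of distance. Restoring the default render states before drawing the model would rule that out:

        // At the start of the 3D pass, before calling Draw():
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;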

    Read the article

  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors which are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps... As I understand it, for normal maps the normal vectors are stored in each texel of a normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?

    Read the article

  • How to draw textures on a model

    - by marc wellman
    The following code is a complete XNA 3.1 program, almost unaltered from the code skeleton Visual Studio creates when starting a new project. The only things I have changed are: I imported a .x model to the content folder of the VS solution (the model is a simple square with a texture spanning over it - made in Google Sketchup and exported with several .x exporters); in the Load() method I am loading the .x model into the game; and the Draw() method uses a BasicEffect to render the model. Except for these three things I haven't added any code. Why does the model not show the texture? What can I do to make the texture visible? This is the texture file (a standard SketchUp texture from the palette): And this is what my program looks like - as you can see: No texture! Find below the complete source code of the program AND the complete .x file: namespace WindowsGame1 { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } Model newModel; /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here newModel = Content.Load<Model>(@"aau3d"); foreach (ModelMesh mesh in newModel.Meshes) { foreach (ModelMeshPart meshPart in mesh.MeshParts) { meshPart.Effect = new BasicEffect(this.GraphicsDevice, null); } } } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { if (newModel != null) { GraphicsDevice.Clear(Color.CornflowerBlue); Matrix[] transforms = new Matrix[newModel.Bones.Count]; newModel.CopyAbsoluteBoneTransformsTo(transforms); foreach (ModelMesh mesh in newModel.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.TextureEnabled = true; effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(0) * Matrix.CreateTranslation(new Vector3(0, 0, 0)); effect.View = Matrix.CreateLookAt(new Vector3(200, 1000, 200), Vector3.Zero, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), 0.75f, 1.0f, 10000.0f); } mesh.Draw(); } } base.Draw(gameTime); } } } This is the model I am using (.x): xof 0303txt 0032 // SketchUp 6 -> DirectX (c)2008 edecadoudal, supports: faces, normals and textures Material Default_Material{ 1.0;1.0;1.0;1.0;; 3.2; 0.000000;0.000000;0.000000;; 0.000000;0.000000;0.000000;; } Material _Groundcover_RiverRock_4inch_{ 0.568627450980392;0.494117647058824;0.427450980392157;1.0;; 3.2; 0.000000;0.000000;0.000000;; 0.000000;0.000000;0.000000;; TextureFilename { "aau3d.xGroundcover_RiverRock_4inch.jpg"; } } Mesh mesh_0{ 4; -81.6535;0.0000;74.8031;, -0.0000;0.0000;0.0000;, -81.6535;0.0000;0.0000;, -0.0000;0.0000;74.8031;; 2; 3;0,1,2, 3;1,0,3;; MeshMaterialList { 2; 2; 1, 1; { Default_Material } { _Groundcover_RiverRock_4inch_ } } MeshTextureCoords { 4; -2.1168,-3.4022; 1.0000,-0.0000; 1.0000,-3.4022; -2.1168,-0.0000;; } MeshNormals { 4; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000; 0.0000;1.0000;-0.0000;; 2; 3;0,1,2; 3;1,0,3;; } }
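
    A hedged guess at the missing texture: replacing each MeshPart's effect with a bare BasicEffect discards whatever texture the content pipeline associated with the mesh, and the new effect has no texture set even though TextureEnabled is later turned on. Loading the texture explicitly and assigning it would test this (the asset name is assumed from the .x material above):

        // XNA 3.1: give the replacement BasicEffect a texture to sample.
        Texture2D groundTex = Content.Load<Texture2D>(@"aau3d.xGroundcover_RiverRock_4inch");
        foreach (ModelMesh mesh in newModel.Meshes)
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                BasicEffect effect = new BasicEffect(this.GraphicsDevice, null);
                effect.Texture = groundTex;       // diffuse texture for this effect
                effect.TextureEnabled = true;
                meshPart.Effect = effect;
            }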

    Read the article

  • Retrieving model position after applying modeltransforms in XNA

    - by Glen Dekker
    For this method that the goingBeyond XNA tutorial provides, it would be really convenient if I could retrieve the new position of the model after I apply all the transforms to the mesh. I have edited the method a little for what I need. Does anyone know a way I can do this? public void DrawModel( Camera camera ) { Matrix scaleY = Matrix.CreateScale(new Vector3(1, 2, 1)); Matrix temp = Matrix.CreateScale(100f) * scaleY * rotationMatrix * translationMatrix * Matrix.CreateRotationY(MathHelper.Pi / 6) * translationMatrix2; Matrix[] modelTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(modelTransforms); if (camera.getDistanceFromPlayer(position+position1) > 3000) return; foreach (ModelMesh mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.World = modelTransforms[mesh.ParentBone.Index] * temp * worldMatrix; effect.View = camera.viewMatrix; effect.Projection = camera.projectionMatrix; } mesh.Draw(); } }
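
    A hedged note: the model's final world-space position is simply where the combined matrix maps the local origin, which XNA exposes as the matrix's Translation property:

        // Same combination the method passes to effect.World (minus the bone part).
        Matrix world = temp * worldMatrix;
        Vector3 worldPosition = world.Translation;   // == Vector3.Transform(Vector3.Zero, world)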

    Read the article

  • How to attach a model to another model on a specific bone?

    - by Mehdi Bugnard
    I am having difficulty attaching a model to another model at a specific bone. I searched several forums but found no result. I saw that many people have asked the same question, but with no real answers. Threads found: How to attach two XNA models together? How can I attach a model to the bone of another model? http://stackoverflow.com/questions/11391852/attach-model-xna But I think it is possible. Here is my code example attaching a "cube" to the hand of my player: private void draw_itemActionAttached(Model modelInUse) { Matrix[] Model1TransfoMatrix = new Matrix[this.player.Model.Bones.Count]; this.player.Model.CopyAbsoluteBoneTransformsTo(Model1TransfoMatrix); foreach (ModelMesh mesh in modelInUse.Meshes) { foreach (BasicEffect effect in mesh.Effects) { Matrix model2Transform = Matrix.CreateScale(1f) * Matrix.CreateFromYawPitchRoll(0, 0, 0); effect.World = model2Transform * Model1TransfoMatrix[0]; //root bone index effect.View = arcadia.camera.View; effect.Projection = arcadia.camera.Projection; } mesh.Draw(); } }
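
    A hedged sketch of the usual fix: look the bone up by name, use its absolute transform from the array already computed above, and include the player's world matrix so the item follows the player ("HandR" and playerWorldMatrix are placeholders for the real bone name and the player's world transform):

        ModelBone hand = this.player.Model.Bones["HandR"];         // bone name assumed
        Matrix handWorld = Model1TransfoMatrix[hand.Index] * playerWorldMatrix;
        effect.World = model2Transform * handWorld;                // instead of index 0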

    Read the article

  • Multiple Vertex Buffers per Mesh

    - by Daniel
    I've run into the situation where the size of my mesh, with all its vertices and indices, is larger than the (optimal) vertex buffer object upper limit (~8MB). I was wondering if I can sub-divide the mesh across multiple vertex buffers, and somehow retain the validity of the indices, i.e. a triangle with an index at the first vertex and an index at the last (i.e. in separate VBOs), all the while maintaining this within Vertex Array Objects. My thoughts are: save myself the hassle and, for meshes (messes :P) such as this, just use the necessary size (> 8MB), which is what I do at the moment. But ideally my buffer manager (WIP) at the moment uses optimal sizes; I may just have to make a special case then... Any ideas? If necessary, a simple C++ code example is appreciated. Note: I have also cross-posted this on stackoverflow, as I was not sure which it would be more suitable for (it's partly a design question).

    Read the article

  • How do GameEngines stop Pixel Seams appearing in adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise. So when some meshes are joined at their borders, the graphics card often makes mistakes and decides that some pixels at the seam represent neither object, and unwanted pixels appear. It's a natural behaviour on all graphics cards. How are such worries avoided in Pro Games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? Screencap of a fix that ended up not working:

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >