Search Results

Search found 839 results on 34 pages for 'vertex'.

Page 21/34 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • How exactly does XNA's SpriteBatch work?

    - by David Gouveia
    To be more precise, if I needed to recreate this functionality from scratch in another API (e.g. in OpenGL) what would it need to be capable of doing? I do have a general idea of some of the steps, such as how it prepares an orthographic projection matrix and creates a quad for each draw call. I'm not too familiar, however, with the batching process itself. Are all quads stored in the same vertex buffer? Does it need an index buffer? How are different textures handled? If possible I'd be grateful if you could guide me through the process from when SpriteBatch.Begin() is called until SpriteBatch.End(), at least when using the default Deferred mode.
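    For illustration, here is a minimal C++ sketch of the deferred batching idea: quads accumulate in one shared vertex/index array and are flushed in a single draw call when the texture changes or End() is reached. All names are illustrative, not XNA's actual internals.

        // Sketch of SpriteBatch-style deferred batching (illustrative, not XNA internals).
        #include <cstdint>
        #include <vector>

        struct Vertex { float x, y, u, v; uint32_t color; };

        struct SpriteBatcher {
            std::vector<Vertex>   vertices;  // 4 vertices per sprite, one shared dynamic VBO
            std::vector<uint16_t> indices;   // 6 indices per sprite (two triangles per quad)
            uint32_t currentTexture = 0;

            void draw(uint32_t texture, float x, float y, float w, float h) {
                if (texture != currentTexture) flush();  // a texture change breaks the batch
                currentTexture = texture;
                uint16_t base = (uint16_t)vertices.size();
                vertices.push_back({x,     y,     0, 0, 0xffffffffu});
                vertices.push_back({x + w, y,     1, 0, 0xffffffffu});
                vertices.push_back({x,     y + h, 0, 1, 0xffffffffu});
                vertices.push_back({x + w, y + h, 1, 1, 0xffffffffu});
                static const uint16_t quadIdx[6] = {0, 1, 2, 2, 1, 3};
                for (uint16_t i : quadIdx) indices.push_back(base + i);
            }

            void flush() {  // called on texture change and from End()
                if (vertices.empty()) return;
                // Upload vertices/indices to a dynamic VBO/IBO, bind currentTexture,
                // set the orthographic projection, then issue one indexed draw call.
                vertices.clear();
                indices.clear();
            }
        };

    In this scheme all quads do live in the same vertex buffer, an index buffer avoids duplicating the two shared corners of each quad, and texture changes are what force a flush, which is why sorting by texture (SpriteSortMode.Texture) reduces draw calls.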

    Read the article

  • How can I determine if a cube is adjacent to another cube, and optimize its buffers if so?

    - by Christian Frantz
    I'm trying to optimize the rendering of a collection of cubes (based on an answer I was given to another question I asked). I understand the logic behind occlusion culling, but I'm having trouble with the code. When I create a cube, I want to determine whether it is touching an existing cube, and if so, skip generating the redundant data in my vertex and index buffers. I'm planning on writing a method called from my cube constructor so that every time I create a cube these checks are made and no hidden face is ever generated. How would I go about this? (A sketch of one possible check follows.)
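    One common way to do this is a set of occupied grid cells consulted per face. A minimal C++ sketch, with illustrative names and an assumed coordinate range:

        // Sketch: skip faces shared with an existing neighbour.
        #include <cstdint>
        #include <unordered_set>

        // Packs grid coordinates into one key (illustrative; assumes each fits in 16 bits).
        uint64_t key(int x, int y, int z) {
            return (uint64_t(uint16_t(x)) << 32) | (uint64_t(uint16_t(y)) << 16) | uint16_t(z);
        }

        std::unordered_set<uint64_t> occupied;  // filled as cubes are created

        bool faceVisible(int x, int y, int z, int dx, int dy, int dz) {
            // A face is only worth emitting if no cube occupies the adjacent cell.
            return occupied.count(key(x + dx, y + dy, z + dz)) == 0;
        }

        // In the cube constructor: insert the cube's cell into `occupied`, then for
        // each of the 6 directions call faceVisible() and append that face's
        // 4 vertices / 6 indices only when it returns true. When a cube is added
        // next to an existing one, the neighbour's now-hidden face should also be
        // removed, which is why chunks are usually rebuilt rather than patched.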

    Read the article

  • Importing 3d model with multiple skeletons

    - by Sweta Dwivedi
    I have created an animated butterfly in 3ds Max and tried to export it in ".fbx" format to use in XNA. However, as soon as I compile, I get the following errors: Warning 1: Multiple skeletons were found in the file. The first skeleton, named "Left.Wing", has been moved to be a child of the scene root. The other, "Right.Wing", will be ignored. Fragment identifier "Right.Wing". Error 2: Vertex is bound to bone "Right.Wing", but this bone is not present in the skeleton. This is confusing, since I do have the bone Right.Wing and I use it to animate the butterfly. I have seen a few possible solutions for Blender but none for 3ds Max; it would be really helpful if someone could help me out with this.

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget, D3DFMT_A8R8G8B8) to which I copy pixels (ARGB) from a third-party function. Before the pixel copy, the bits are accessed via LockRect. The surface is then copied to the back buffer (also D3DFMT_A8R8G8B8) with StretchRect; the surface and back buffer have different dimensions, and filtering is set to D3DTEXF_NONE. Just after creating the D3D device I set the following render states: D3DRS_ALPHABLENDENABLE -> TRUE, D3DRS_BLENDOP -> D3DBLENDOP_ADD, D3DRS_SRCBLEND -> D3DBLEND_SRCALPHA, D3DRS_DESTBLEND -> D3DBLEND_INVSRCALPHA. But I see no alpha blending happening, and I've verified that alpha is present in the pixels. As a simple test I created a vertex buffer and drew a triangle (DrawPrimitive), which does display with alpha blending. In this test the surface was copied with StretchRect first and then the triangle was drawn; the surface content displays without alpha blending but the triangle displays with it. What am I missing here? Thanks
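    The usual explanation is that StretchRect is a raw copy and ignores blend render states; blending only applies to drawn primitives. A minimal D3D9 sketch of the alternative, drawing the source as a textured quad (this assumes the pixels live in a texture pTex rather than an offscreen plain surface):

        // Sketch (D3D9): blend a texture over the back buffer by drawing a quad.
        struct TLVertex { float x, y, z, rhw; float u, v; };
        const DWORD TLVERTEX_FVF = D3DFVF_XYZRHW | D3DFVF_TEX1;

        void BlitWithBlend(IDirect3DDevice9* dev, IDirect3DTexture9* pTex, float w, float h) {
            // For exact texel-to-pixel mapping, D3D9 also wants a -0.5 offset on x/y.
            TLVertex quad[4] = {
                { 0, 0, 0, 1, 0, 0 }, { w, 0, 0, 1, 1, 0 },
                { 0, h, 0, 1, 0, 1 }, { w, h, 0, 1, 1, 1 },
            };
            dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
            dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
            dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
            dev->SetTexture(0, pTex);
            dev->SetFVF(TLVERTEX_FVF);
            dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(TLVertex));
        }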

    Read the article

  • Dynamic Quad/Oct Trees

    - by KKlouzal
    I've recently discovered the power of quadtrees and octrees and their role in culling/LOD applications, but I've been pondering implementations of a dynamic quad/oct tree: one that would not require a complete rebuild when some of the underlying (vertex) data changes. Would it be possible to create such a tree? What would it look like? Could someone point me in the right direction to get started? In my scenario, this would be used for a dynamically changing spherical landscape with over 10,000,000 vertices. The benefit of quad/oct trees for culling and LOD is obvious, as is the benefit of not having to completely recompute the tree when the underlying data changes.
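    One common pattern, sketched below in C++ under the assumption that the tree's topology (which patch belongs to which node) is fixed and only heights change: keep the structure and just "refit" the bounds along the dirty path. Names are illustrative.

        // Sketch: refit (not rebuild) a quadtree when vertex heights change.
        #include <algorithm>

        struct Node {
            Node* child[4] = {nullptr, nullptr, nullptr, nullptr};
            float minY = 0, maxY = 0;  // vertical bounds used for culling
            void refit() {
                if (!child[0]) { /* leaf: recompute minY/maxY from its own vertices */ return; }
                for (Node* c : child) c->refit();
                minY = std::min({child[0]->minY, child[1]->minY, child[2]->minY, child[3]->minY});
                maxY = std::max({child[0]->maxY, child[1]->maxY, child[2]->maxY, child[3]->maxY});
            }
        };
        // In practice only the leaf containing the edited vertices and its chain of
        // ancestors need refitting, so an edit costs O(depth), not a full rebuild.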

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main()
        {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, gl_FragCoord.z is written by default to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass; there is no other work to do in the fragment shader than writing this value. Is this correct?
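    For reference, a minimal sketch of such a depth-only pass: with no color attachment there is nothing for the fragment stage to output, so an empty fragment shader (or none at all) suffices and depth is still written. The texture setup is assumed to exist elsewhere.

        // Sketch: depth-only first pass for shadow mapping.
        GLuint fbo;
        GLuint depthTex;  // assumed: a GL_DEPTH_COMPONENT cube map created elsewhere
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X, depthTex, 0); // one face per pass
        glDrawBuffer(GL_NONE);  // no color buffer is written in this pass
        glReadBuffer(GL_NONE);
        // The fragment shader for the pass can simply be:
        //     #version 330
        //     void main() {}   // gl_FragCoord.z lands in the depth attachment automatically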

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL and getting some basic structures in place. My first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it, but I've read that it might be better to share VBOs across multiple objects. I've also read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters), and instead choose a fixed size for the VBO and just use a range of the buffer. I don't think changing the size of the buffer data would happen too often, but surely it would be better to allocate only the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.
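    A minimal sketch of the shared, fixed-size approach: allocate one pool up front with null data and sub-allocate ranges per object with glBufferSubData. The pool size and the bump allocator are illustrative assumptions.

        // Sketch: one large VBO shared by many objects, sub-allocated in ranges.
        const GLsizeiptr POOL_SIZE = 4 * 1024 * 1024;  // 4 MB, chosen up front
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, POOL_SIZE, nullptr, GL_DYNAMIC_DRAW);  // no resize later

        GLintptr nextFree = 0;
        GLintptr upload(const void* verts, GLsizeiptr bytes) {  // returns the object's offset
            GLintptr offset = nextFree;
            glBufferSubData(GL_ARRAY_BUFFER, offset, bytes, verts);
            nextFree += bytes;
            return offset;  // each SceneObject stores its offset + count, not its own VBO
        }

    The over-allocation is the price of never calling glBufferData again; drivers handle one stable allocation better than many reallocations, which is why the fixed-size advice exists.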

    Read the article

  • How to modify VBO data

    - by Romeo
    I am learning LWJGL so I can start working on my game, and to learn it I got the idea to implement the map builder first, so I can get comfortable with graphics programming. For the map creation tool I need to draw new elements or redraw old ones with different coordinates. Let me explain: my game will be a 2D scroller, and the map will consist of multiple rectangles (two-triangle strips). When I press the left mouse button I want to start the rectangle, and when I release it I want its bottom-right corner to stop at that position. Since I want to use VBOs, I'd like to know how to modify the data inside a VBO based on user input. Should I keep a copy of the vertex array and re-upload the whole array to the VBO on each input? How is the VBO update usually implemented?
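    A common answer is to update only the changed range with glBufferSubData rather than re-uploading everything. A C++ sketch (the 4-vertices-per-rectangle layout is an assumption matching the question's strips):

        // Sketch: update one rectangle's 4 vertices in place.
        struct Vec2 { float x, y; };

        void updateRect(GLuint vbo, int rectIndex, Vec2 topLeft, Vec2 bottomRight) {
            Vec2 corners[4] = {   // triangle-strip order: TL, TR, BL, BR
                { topLeft.x,     topLeft.y     },
                { bottomRight.x, topLeft.y     },
                { topLeft.x,     bottomRight.y },
                { bottomRight.x, bottomRight.y },
            };
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferSubData(GL_ARRAY_BUFFER,
                            rectIndex * sizeof(corners),  // byte offset of this rectangle
                            sizeof(corners), corners);
        }

    While the user is still dragging, it is also common to keep just the in-progress rectangle in a small dynamic buffer and commit it to the main VBO on mouse release.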

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 total instead of 24)? With a single buffer of 8 vertices, I don't see how I could properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy: the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version has me quite befuddled.
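    The usual answer, sketched below: each corner of the cube is shared by three faces that need different atlas UVs, so the unique unit is the (position, UV) pair, not the position, and 8 positions become 24 vertices. The helper is hypothetical, just to show one face.

        // Sketch: why 8 positions become 24 vertices with per-face atlas UVs.
        struct Vertex { float px, py, pz; float u, v; };

        // 4 vertices per face * 6 faces = 24; indices stay at 6 per face (36 total).
        // For the +Z face of a unit cube, with that face's atlas rect (u0,v0)-(u1,v1):
        Vertex frontFace(int corner, float u0, float v0, float u1, float v1) {
            const float x[4] = {0, 1, 0, 1}, y[4] = {0, 0, 1, 1};
            return { x[corner], y[corner], 1.0f,
                     x[corner] > 0.5f ? u1 : u0,
                     y[corner] > 0.5f ? v1 : v0 };
        }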

    Read the article

  • Need some help implementing VBOs with frustum culling

    - by Isracg
    I'm currently developing my first 3D game for a school project. The game world is completely inspired by Minecraft (a world made entirely of cubes). I'm seeking to improve performance by implementing vertex buffer objects, but I'm stuck. I already have frustum culling, drawing only exposed faces, and distance culling implemented, but I have the following questions. I currently have about 2^24 cubes in my world, divided into 1024 chunks of 16*16*64 cubes, and right now I'm using immediate-mode rendering, which works well with frustum culling. If I implement one VBO per chunk, do I have to update that VBO each time I move the camera (to update the frustum)? Is there a performance hit from this? Can I dynamically change the size of each VBO, or do I have to make each one the biggest possible size (the chunk completely filled with cubes)? Would I have to keep each visited chunk in memory, or could I efficiently remove its VBO and recreate it when needed?
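    For reference, a C++ sketch of the usual arrangement: chunk VBOs are static geometry, and camera movement only changes which chunks get drawn, not the buffer contents. AABB and Frustum are assumed helper types, not from the question.

        // Sketch: per-chunk static VBOs, frustum-tested per chunk at draw time.
        #include <vector>

        struct Chunk {
            GLuint  vbo = 0;         // built once from the chunk's exposed faces
            GLsizei vertexCount = 0;
            AABB    bounds;          // world-space box of the 16*16*64 block
        };

        void drawWorld(const Frustum& frustum, std::vector<Chunk>& chunks) {
            for (Chunk& c : chunks) {
                if (!frustum.intersects(c.bounds)) continue;  // culled: no VBO touched
                glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
                // ... set attribute pointers ...
                glDrawArrays(GL_TRIANGLES, 0, c.vertexCount);
            }
        }
        // Size each VBO for the faces actually exposed and rebuild only that chunk's
        // VBO when its blocks change; far-away chunks can free their VBOs and rebuild
        // from the voxel data when they come back into range.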

    Read the article

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now I use hardware instancing to store the world matrix of every bone in a vertex buffer and then send them all to the pipeline. But frustum culling, or the simulation setting some of them invisible for other reasons, means that some of those instances should not be drawn at any given moment. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? That seems like it would involve a lot of wasted bandwidth.
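    This compaction is in fact the common approach; a sketch under the assumption of an OpenGL-style dynamic buffer (the question's engine may differ), where buffer orphaning keeps the per-frame upload cheap:

        // Sketch: upload only the visible instance matrices each frame.
        #include <vector>

        struct Instance { float world[16]; };

        void uploadVisible(GLuint instanceVbo, const std::vector<Instance>& all,
                           const std::vector<bool>& visible) {
            std::vector<Instance> compact;
            compact.reserve(all.size());
            for (size_t i = 0; i < all.size(); ++i)
                if (visible[i]) compact.push_back(all[i]);  // skip culled/hidden instances

            glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
            // Orphan the old storage, then upload just the visible subset.
            glBufferData(GL_ARRAY_BUFFER, compact.size() * sizeof(Instance),
                         compact.data(), GL_STREAM_DRAW);
            // Draw with instance count = compact.size().
        }

    Since only visible instances are uploaded, the bandwidth is bounded by what is drawn anyway; with D3D the equivalent is a DISCARD-mapped dynamic buffer.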

    Read the article

  • Blender 2.64, what are the actual hot-keys for certain actions

    - by Shivan Dragon
    I know this sounds mega lame, but I've looked for the hotkeys for certain actions, first in the application's User Settings (where I didn't find them), then in the official documentation (where I did find some of them, but they're not the right ones): http://wiki.blender.org/index.php/Doc:2.4/Manual/3D_interaction/Transform_Control/Manipulators (Ctrl-Alt-S is recommended for Scale, but instead it opens the Save As... window; I think these changed in the latest versions, but the docs weren't updated). So then, what are the hotkeys for:
      - selecting the translate manipulator
      - selecting the rotate manipulator
      - selecting the scale manipulator
    And in Edit mode:
      - selecting vertices
      - selecting edges
      - selecting faces
    Thanks.

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are invoked by the GPU on each rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be as trivial as "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques for controlling FPS that I have seen so far call the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in milliseconds for when rendering started, and then rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? P.S. I'm referring to OpenGL 3.0+.
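    The short answer is that OpenGL has no frame-rate setting: shaders run once per submitted vertex/fragment, and the loop rate is whatever the CPU submission loop plus vsync allows, so capping is done on the CPU. A minimal C++ limiter sketch (names illustrative):

        // Sketch: CPU-side frame limiter; OpenGL itself exposes no FPS knob.
        #include <chrono>
        #include <thread>

        void frameLimiter(double targetFps) {
            using clock = std::chrono::steady_clock;
            static clock::time_point last = clock::now();
            const auto frameBudget = std::chrono::duration<double>(1.0 / targetFps);
            auto elapsed = clock::now() - last;
            if (elapsed < frameBudget)
                std::this_thread::sleep_for(frameBudget - elapsed);  // wait out the budget
            last = clock::now();
        }
        // Call once per loop iteration after swapping buffers. With vsync enabled the
        // buffer swap itself blocks to the monitor's refresh rate, which is where the
        // "same as your refresh rate" observation comes from.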

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on integrated Intel HD graphics it has a weird issue. I don't have a problem with the rest of my entities, as they are not scaled; the models with the problem are scaled, since my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine and GLSL as the shading language. Vertex shader:

        #version 120
        varying vec2 UV;

        void main()
        {
            UV = gl_MultiTexCoord0.xy;  // .xy swizzle: gl_MultiTexCoord0 is a vec4
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        #version 120
        varying vec2 UV;
        uniform sampler2D diffuseMap;

        void main(void)
        {
            gl_FragColor = texture2D(diffuseMap, UV);  // texture2D: texture() needs GLSL 1.30+
        }

    A screenshot shows the error on the right and left sides; the top and bottom parts are rendered perfectly well.

    Read the article

  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. As of right now it doesn't process the texture coordinates or normals, but it stores them in arrays and creates a valid mesh from the vertices and indices. Now I am trying to figure out how I can make the shader use the correct normal from the array for the current vertex, given that I can't call setnormals() on my mesh. If I were to use an index into my normal array corresponding to the index into the vertices, how would I retrieve the current index the shader is processing? By the way, I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and have added the NORMAL semantic to it, how would I list multiple semantics in that single parameter? Would I just separate them with a space? P.S. If you need to see any code, just let me know.
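    For reference, in D3D10 the semantics are separate entries in the input element array rather than one space-separated string, and the normal arrives per vertex, so the loader must interleave it. A sketch with assumed packed position/normal/texcoord offsets:

        // Sketch (D3D10): one element per semantic in the input layout.
        D3D10_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };
        // The shader then receives the normal directly per vertex:
        //     struct VS_INPUT { float3 pos : POSITION; float3 n : NORMAL; float2 uv : TEXCOORD; };
        // So instead of indexing normals separately in the shader, expand the OBJ
        // data on load into one interleaved vertex per unique (position, normal, uv)
        // triple, rebuilding the index buffer over those combined vertices.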

    Read the article

  • InputLayout handling

    - by Kikaimaru
    Where are you supposed to store the InputLayout? Suppose I have some basic structure like:

        class Mesh
        {
            List<MeshPart> MeshParts;
        }

        class MeshPart
        {
            Effect Effect;
            VertexBufferBinding VertexBuffer;
            ...
        }

    Where should I store the input layout? It's a connection between a vertex buffer and a specific pass. I can live with just one pass, but I still have different techniques, so I need at least an array with some connection to the effect techniques, though I would appreciate something less crazy than a dictionary. I could also create wrappers for Effect and EffectTechnique, but there must be some normal solution.

    Read the article

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes efficiently, I need to get rid of the numerous draw calls I have, and what has been suggested is that I create a "mesh" of my cubes. I already store them in a single vertex buffer, but the issue lies in my draw method, where I still loop through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would keep me going in circles, drawing each cube individually. The goal is a draw method that draws my chunk as a whole, not individual cubes as I've been doing.
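    The core idea, sketched here in C++ rather than the question's XNA (names illustrative): bake each cube's position into its vertices when the chunk's buffers are built, so the draw loop no longer iterates cubes at all.

        // Sketch: one draw call per chunk; per-cube positions are baked at build time.
        struct CubeChunk {
            GLuint  vbo = 0, ibo = 0;
            GLsizei indexCount = 0;

            void rebuild(/* all cubes in the chunk */) {
                // Append each cube's visible faces, with its world position already
                // added to the vertex positions, into one vertex/index array; upload once.
            }

            void draw() {
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
                glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);  // one call
            }
        };
        // In XNA terms: build one VertexBuffer/IndexBuffer for the whole chunk in
        // LoadContent and issue a single DrawIndexedPrimitives from the
        // DrawableGameComponent's Draw override.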

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked fragment and vertex shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definitions differ:

        struct Light
        {
            mediump vec3 _position;
            lowp    vec4 _ambient;
            lowp    vec4 _diffuse;
            lowp    vec4 _specular;
            bool         _isDirectional;
            mediump vec3 _attenuation; // constant, linear, and quadratic components
        };

        uniform Light u_light;

    I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet, and both shaders declare "precision mediump float;". The exact error is: "Uniform variable u_light type/precision does not match in vertex and fragment shader". Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than filing a bug report)?

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for finding the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video's author uses the vector AO (from A toward the origin) as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a vector normal to this edge, lying in the plane that the edge and AO form. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the negated vector of the newest simplex point always seems to make a reasonable direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (as yet unfinished) implementation, I wanted to ask whether this method of forming the direction vector is usable or not.
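    For comparison, a minimal C++ sketch of the standard line-case direction (the "triple product" from the video), next to the always-use-AO idea:

        // Sketch: standard GJK line-case direction vs. plain AO. Minimal vec3 math;
        // A is the newest simplex point, B the previous one.
        struct vec3 { float x, y, z; };
        vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        vec3 operator-(vec3 a)         { return {-a.x, -a.y, -a.z}; }
        vec3 cross(vec3 a, vec3 b) {
            return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
        }

        // Perpendicular to edge AB, in the plane of AB and AO, pointing at the origin:
        vec3 lineCaseDirection(vec3 a, vec3 b) {
            vec3 ab = b - a;
            vec3 ao = -a;                      // from A toward the origin
            return cross(cross(ab, ao), ab);   // degenerate (zero) if origin lies on AB
        }
        // Always searching along plain `ao` does make progress in many cases, but it
        // ignores the simplex you already have, so the support function can return a
        // point that is already in the simplex and the loop may cycle without
        // terminating; the per-feature directions are what rule that out.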

    Read the article

  • Having trouble understanding the NIF model file format

    - by NoobScratcher
    I'm attempting to develop a third-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but since I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, I'm at a loss for what to do. I'm very good at C++ but not at these super-complicated file formats; I'd much prefer .obj over NIF. The file format specification is here: http://niftools.sourceforge.net/doc/nif/index.html. If someone could explain the file format in a natural and simple way, along with the exact parsing needed to get the 3D model into the frustum and how you figured that out, I would be happy to know. I use Cygwin, Notepad++, and Windows 7 (Win32).

    Read the article

  • OpenGL ES 2.0 Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to do this in OpenGL 1.0/1.1, but I can't find one for 2.0. My approach would be to load the texture and use a matrix in the vertex shader to move through the sprite sheet, and I'm looking for the most efficient way to do it. I've read that when you do what I'm proposing you are constantly changing the VBOs, and that that is not good. Edit: I've been doing some research myself and came upon two posts, one on updating textures and one referring to PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs. But what I still don't get is whether I should create a sprite atlas/batch and do an FBO/load-texture for each frame, or whether I should load every frame into the buffer and change just the texture coordinates.
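    A common ES 2.0 pattern, sketched below, avoids both VBO updates and FBOs entirely: the quad's UVs stay fixed at 0..1 and a per-frame uniform remaps them into the atlas. The shader string and helper are illustrative assumptions.

        // Sketch: sprite-sheet animation via a UV-remapping uniform (GLES 2.0).
        const char* vsSource =
            "attribute vec2 a_pos;\n"
            "attribute vec2 a_uv;\n"
            "uniform vec4 u_frame;   // xy = offset, zw = scale of one cell\n"
            "varying vec2 v_uv;\n"
            "void main() {\n"
            "    v_uv = u_frame.xy + a_uv * u_frame.zw;\n"
            "    gl_Position = vec4(a_pos, 0.0, 1.0);\n"
            "}\n";

        // Per frame, for cell (col, row) in a sheet of cols * rows cells:
        void setFrame(GLint uFrameLoc, int col, int row, int cols, int rows) {
            float sx = 1.0f / cols, sy = 1.0f / rows;
            glUniform4f(uFrameLoc, col * sx, row * sy, sx, sy);
        }

    One glUniform4f per frame is far cheaper than any buffer or texture update, which is why the atlas-plus-uniform approach is usually preferred on ES.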

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (I promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead shader complexity, fill rate, and lighting complexity all come into play. What the content team want are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that "it depends" isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being (a) misleading or (b) wrong.

    Read the article

  • Why does this exported cube have too many vertices?

    - by Joewsh
    I'm trying to export md5mesh models. Just as a test I decided to export a simple cube (i.e. with 8 vertices). When I opened the .md5mesh file it listed the following: numverts 24, numtris 12, numweights 24. Obviously the number of triangles makes sense: 6 faces * 2 for triangulation = 12. The model only has one bone, so it even makes sense that there is one weight for each vertex. The question is, though: why is the file listing 24 vertices? Is the problem the exporter, or is this normal for md5mesh files? Is it something you have to rectify when you come to parsing the file in the engine? I don't want to parse or draw duplicated vertices without reason. I'm guessing it's something to do with shading and normals. Is it a case of listing each vertex three times, once for each facing normal?

    Read the article

  • How do I connect the seams between my terrain?

    - by gnomgrol
    I'm using C++ and D3D11, and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation down and have already split it up into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I have read a lot about LOD (level of detail) and geometry mipmapping, but I can't really implement the theory I read. A screenshot shows what it currently looks like. I could really use some help; everything is welcome. If you have some good tutorials on any of this, please share them.
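    Seams like this usually come from chunks building their border vertices from their own heightmaps so that neighbours don't agree. A C++ sketch of the common fix, under the assumption of a single global height lookup (getHeight is hypothetical):

        // Sketch: make chunk borders match by sampling one global height source.
        float getHeight(int worldX, int worldZ);  // same source for all chunks

        void buildChunk(int chunkX, int chunkZ, int size /* e.g. 64 */) {
            // Use size + 1 vertices per side so neighbouring chunks share their
            // border row/column exactly: chunk cx covers worldX = cx*size .. cx*size + size.
            for (int z = 0; z <= size; ++z)
                for (int x = 0; x <= size; ++x) {
                    float y = getHeight(chunkX * size + x, chunkZ * size + z);
                    // ... append vertex (chunkX*size + x, y, chunkZ*size + z) ...
                }
        }
        // With LOD, also stitch: a chunk bordering a coarser neighbour must skip its
        // odd border vertices (or insert degenerate triangles) so the edges line up.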

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, for example a tiger mesh. Initially it is located at position A; then if I press the left key, it moves to position B. The problem is that if I press the left key one more time, it should move from B to position C. That means the amount I want to move the mesh is based on its current position rather than the position where it was first rendered. I could do this if I had a vertex array, because I would just update the vertex buffer, but a mesh loaded from a file does not expose a vertex array, so how do I do it? Can anybody help me, please?
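    The usual answer is to never touch the vertices: keep a persistent position that each key press accumulates into, and apply it as the world transform at draw time. A sketch assuming a D3DX-style mesh (the question's API is not stated, so these names are assumptions):

        // Sketch: move the mesh with its world matrix, not its vertex buffer.
        D3DXVECTOR3 position(0.0f, 0.0f, 0.0f);  // persists across frames: A -> B -> C ...

        void onKey(int key) {
            const float step = 1.0f;              // illustrative step size
            if (key == VK_LEFT)  position.x -= step;
            if (key == VK_RIGHT) position.x += step;
        }

        void drawMesh(IDirect3DDevice9* dev, ID3DXMesh* mesh) {
            D3DXMATRIX world;
            D3DXMatrixTranslation(&world, position.x, position.y, position.z);
            dev->SetTransform(D3DTS_WORLD, &world);  // the vertices never change
            mesh->DrawSubset(0);
        }

    Because the offset is added to the stored position each press, every move is relative to the current position, which is exactly the B-to-C behaviour described.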

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >