Search Results

Search found 1273 results on 51 pages for 'vertex shader'.


  • Finding the Reachability Count for all vertices of a DAG

    - by ChrisH
    I am trying to find a fast algorithm with modest space requirements to solve the following problem. For each vertex of a DAG, find the sum of its in-degree and out-degree in the DAG's transitive closure. Given this DAG:

    I expect the following result:

        Vertex #   Reachability Count   Reachable Vertices in closure
        7          5                    (11, 8, 2, 9, 10)
        5          4                    (11, 2, 9, 10)
        3          3                    (8, 9, 10)
        11         5                    (7, 5, 2, 9, 10)
        8          3                    (7, 3, 9)
        2          3                    (7, 5, 11)
        9          5                    (7, 5, 11, 8, 3)
        10         4                    (7, 5, 11, 3)

    It seems to me that this should be possible without actually constructing the transitive closure. I haven't been able to find anything on the net that exactly describes this problem. I've got some ideas about how to do this, but I wanted to see what the SO crowd could come up with.
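
    A minimal sketch of one standard approach (an editor's addition, not from the original question): propagate descendant bitsets in reverse topological order and ancestor bitsets in topological order; the count for v is then |desc(v)| + |anc(v)|. Note this does hold the closure as n²/8 bytes of bits, so it only satisfies the "modest space" goal for moderate n:

        #include <bitset>
        #include <vector>

        const int MAXN = 1024;  // assumed upper bound on vertex count

        // adj: DAG adjacency list; topo: vertices in topological order (computed elsewhere).
        std::vector<int> reachabilityCounts(const std::vector<std::vector<int>>& adj,
                                            const std::vector<int>& topo) {
            int n = adj.size();
            std::vector<std::bitset<MAXN>> desc(n), anc(n);
            // Descendants: sweep reverse topological order so successors are finished first.
            for (int i = n - 1; i >= 0; --i) {
                int v = topo[i];
                for (int w : adj[v]) { desc[v].set(w); desc[v] |= desc[w]; }
            }
            // Ancestors: sweep topological order, pushing each vertex into its successors.
            for (int i = 0; i < n; ++i) {
                int v = topo[i];
                for (int w : adj[v]) { anc[w].set(v); anc[w] |= anc[v]; }
            }
            std::vector<int> count(n);
            for (int v = 0; v < n; ++v)
                count[v] = (int)(desc[v].count() + anc[v].count());
            return count;
        }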

    Read the article

  • OpenGL Color Interpolation across vertices

    - by gutsblow
    Right now, I have more than 25 vertices that form a model. I want to interpolate color linearly between the first and last vertex. The problem is that when I write the following code:

        glColor3f(1.0, 0.0, 0.0);
        glVertex3f(1.0, 1.0, 1.0);
        glVertex3f(0.9, 1.0, 1.0);
        ... <more vertices> ...
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f(0.0, 0.0, 0.0);

    all the vertices except the last one are red. Now I am wondering if there is a way to interpolate color across these vertices without having to manually compute the color at each vertex (the way OpenGL does it automatically within a primitive), since I will have a lot more colors at various vertices. Any help would be extremely appreciated. Thank you!
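
    Since glColor3f only sets the current color for the vertices that follow it, OpenGL interpolates within a primitive, not across a whole vertex list. A short sketch (editor's addition) of doing the blend yourself in immediate mode, assuming the vertices sit in an array verts[0..N-1] with contiguous x, y, z floats:

        // Blend from red to blue across N vertices; t runs 0..1 along the list.
        for (int i = 0; i < N; ++i) {
            float t = (float)i / (N - 1);
            glColor3f(1.0f - t, 0.0f, t);   // lerp (1,0,0) -> (0,0,1)
            glVertex3fv(&verts[i].x);       // hypothetical vertex array
        }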

    Read the article

  • Installing my sdist from PyPI puts the files in the wrong places

    - by Tartley
    Hey. My problem is that when I upload my Python package to PyPI and then install it from there using pip, my app breaks because it installs my files into completely different locations than when I simply install the exact same package from a local sdist. Installing from the local sdist puts files on my system like this:

        /Python27/
          Lib/
            site-packages/
              gloopy-0.1.alpha-py2.7.egg/  (egg and install info files)
                data/      (images and shader source)
                doc/       (html)
                examples/  (.py scripts that use the library)
                gloopy/    (source)

    This is much as I'd expect, and works fine (e.g. my source can find my data dir, because they lie next to each other, just like they do in development.) If I upload the same sdist to PyPI and then install it from there, using pip, then things look very different:

        /Python27/
          data/  (images and shader source)
          doc/   (html)
          Lib/
            site-packages/
              gloopy-0.1.alpha-py2.7.egg/  (egg and install info files)
                gloopy/  (source files)
          examples/  (.py scripts that use the library)

    This doesn't work at all - my app can't find its data files, plus obviously it's a mess, polluting the top-level /Python27 directory with all my junk. What am I doing wrong? How do I make the pip install behave like the local sdist install? Is that even what I should be trying to achieve?

    Details: I have setuptools installed, and also distribute, and I'm calling distribute_setup.use_setuptools(). WindowsXP, Python 2.7. My development directory looks like this:

        /gloopy
          /data      (image files and GLSL shader source read at runtime)
          /doc       (html files)
          /examples  (some scripts to show off the library)
          /gloopy    (the library itself)

    My MANIFEST.in mentions all the files I want to be included in the sdist, including everything in the data, examples and doc directories:

        recursive-include data *.*
        recursive-include examples *.py
        recursive-include doc/html *.html *.css *.js *.png
        include LICENSE.txt
        include TODO.txt

    My setup.py is quite verbose, but I guess the best thing is to include it here, right? It also includes duplicate references to the same data / doc / examples directories as are mentioned in the MANIFEST.in, because I understand this is required in order for these files to be copied from the sdist to the system during install.
        NAME = 'gloopy'
        VERSION = __import__(NAME).VERSION
        RELEASE = __import__(NAME).RELEASE
        SCRIPT = None
        CONSOLE = False

        def main():
            import sys
            from pprint import pprint
            from setup_utils import distribute_setup
            from setup_utils.sdist_setup import get_sdist_config
            distribute_setup.use_setuptools()
            from setuptools import setup
            description, long_description = read_description()
            config = dict(
                name=name,
                version=version,
                description=description,
                long_description=long_description,
                keywords='',
                packages=find_packages(),
                data_files=[
                    ('examples', glob('examples/*.py')),
                    ('data/shaders', glob('data/shaders/*.*')),
                    ('doc', glob('doc/html/*.*')),
                    ('doc/_images', glob('doc/html/_images/*.*')),
                    ('doc/_modules', glob('doc/html/_modules/*.*')),
                    ('doc/_modules/gloopy', glob('doc/html/_modules/gloopy/*.*')),
                    ('doc/_modules/gloopy/geom', glob('doc/html/_modules/gloopy/geom/*.*')),
                    ('doc/_modules/gloopy/move', glob('doc/html/_modules/gloopy/move/*.*')),
                    ('doc/_modules/gloopy/shapes', glob('doc/html/_modules/gloopy/shapes/*.*')),
                    ('doc/_modules/gloopy/util', glob('doc/html/_modules/gloopy/util/*.*')),
                    ('doc/_modules/gloopy/view', glob('doc/html/_modules/gloopy/view/*.*')),
                    ('doc/_static', glob('doc/html/_static/*.*')),
                    ('doc/_api', glob('doc/html/_api/*.*')),
                ],
                classifiers=[
                    'Development Status :: 1 - Planning',
                    'Intended Audience :: Developers',
                    'License :: OSI Approved :: BSD License',
                    'Operating System :: Microsoft :: Windows',
                    'Programming Language :: Python :: 2.7',
                ],
                # see classifiers http://pypi.python.org/pypi?:action=list_classifiers
            )
            config.update(dict(
                author='Jonathan Hartley',
                author_email='[email protected]',
                url='http://bitbucket.org/tartley/gloopy',
                license='New BSD',
            ))
            if '--verbose' in sys.argv:
                pprint(config)
            setup(**config)

        if __name__ == '__main__':
            main()

    Read the article

  • count of distinct acyclic paths from A[a,b] to A[c,d]?

    - by Sorush Rabiee
    I'm writing a Sokoban solver for fun and practice. It uses a simple algorithm (something like BFS with a bit of difference). Now I want to estimate its running time (big O and big omega), but I need to know how to calculate the count of acyclic paths from one vertex to another in a network. Actually, I want an expression that calculates the count of valid paths between two vertices of an m*n matrix of vertices. A valid path visits each vertex 0 or 1 times and has no circuits. For example, this is a valid path: but this is not: What is needed is a method to find the count of all acyclic paths between the two vertices a and b. Comments on solving methods and tricks are welcome.
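
    For scale: no closed-form expression is known for this - counting simple s-t paths is #P-complete in general graphs, and even on grids the counts grow exponentially. A brute-force backtracking counter (editor's sketch, not from the thread) that is exact for small m*n grids:

        #include <vector>
        #include <cstdint>

        // Count simple (acyclic) paths from cell (sr,sc) to cell (tr,tc) in an
        // m x n grid by depth-first backtracking. Exponential - small grids only.
        struct Grid {
            int m, n, tr, tc;
            std::vector<char> used;
            int64_t count = 0;
            Grid(int m_, int n_) : m(m_), n(n_), used(m_ * n_, 0) {}
            void dfs(int r, int c) {
                if (r < 0 || r >= m || c < 0 || c >= n || used[r * n + c]) return;
                if (r == tr && c == tc) { ++count; return; }  // reached the target
                used[r * n + c] = 1;                          // mark: no revisits
                dfs(r + 1, c); dfs(r - 1, c); dfs(r, c + 1); dfs(r, c - 1);
                used[r * n + c] = 0;                          // backtrack
            }
        };

        int64_t countPaths(int m, int n, int sr, int sc, int tr, int tc) {
            Grid g(m, n);
            g.tr = tr; g.tc = tc;
            g.dfs(sr, sc);
            return g.count;
        }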

    Read the article

  • HLSL: Enforce Constant Register Limit at Compile Time

    - by Andrew Russell
    In HLSL, is there any way to limit the number of constant registers that the compiler uses? Specifically, if I have something like: float4 foobar[300]; In a vs_2_0 vertex shader, the compiler will merrily generate the effect with more than 256 constant registers. But a 2.0 vertex shader is only guaranteed to have access to 256 constant registers, so when I try to use the effect, it fails in an obscure and GPU-dependent way at runtime. I would much rather have it fail at compile time. This problem is especially annoying as the compiler itself allocates constant registers behind the scenes, on top of the ones I am asking for. I have to check the assembly to see if I'm over the limit. Ideally I'd like to do this in HLSL (I'm using the XNA content pipeline), but if there's a flag that can be passed to the compiler that would also be interesting.

    Read the article

  • DirectX 9 HLSL vs. DirectX 10 HLSL: syntax the same?

    - by numerical25
    For the past month or so, I have been busting my behind trying to learn DirectX, so I've been switching back and forth between DirectX 9 and 10. One of the major changes I've seen between the two is how you process vectors in the graphics card. One of the drastic changes I noticed is how you get the GPU to recognize your structs. In DirectX 9, you define the Flexible Vertex Format. Your typical setup would be like this:

        #define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

    In DirectX 10, I believe the equivalent is the input vertex description:

        D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
            {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
        };

    I notice that DirectX 10 is more descriptive. Besides this, what are some of the drastic changes made, and is the HLSL syntax the same for both?

    Read the article

  • Create a bubble effect on a grid OpenGL-ES

    - by Charles Michael
    Hi there. I have created a grid of 40 x 40 3D vertices (small but useful). I can pick a single vertex out of that grid by simply calling a function with the position array[X][Y], and therefore its neighbors too. How can I raise the neighboring vertices' Z values so they end up looking like a bubble or sphere? My first thought was to use:

        Neighbor_vertex.Z = sin(PI/4 * 1 - (1 / distance_between_Neighbor_and_Pivot)) * desired_Max_Height

    but all I got is something like a wave, and I would like a bubble or sphere-like shape. THX dudes and dudettes
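
    For a spherical cap rather than a wave, the height should fall off following the circle equation, not a sine of a reciprocal distance. A sketch (editor's addition), where d is the neighbor's grid distance from the picked vertex and R is the bubble radius:

        #include <cmath>

        // Hemisphere profile: z = H * sqrt(1 - (d/R)^2) inside the radius, 0 outside.
        float bubbleHeight(float d, float R, float maxHeight) {
            if (d >= R) return 0.0f;          // vertices outside the bubble stay flat
            float x = d / R;                  // 0 at the pivot, 1 at the rim
            return maxHeight * std::sqrt(1.0f - x * x);
        }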

    Read the article

  • OpenGL + cgFX Alpha Blending failure

    - by dopplex
    I have a shader that needs to additively blend to its output render target. While it had been fully implemented and working, I recently refactored and have done something that is causing the alpha blending to not work anymore. I'm pretty sure that the problem is somewhere in my calls to either OpenGL or cgfx - but I'm currently at a loss for where exactly the problem is, as everything looks like it is set up properly for alpha blending to occur. No OpenGL or cg framework errors are showing up, either. For some context, what I'm doing here is taking a buffer which contains screen position and luminance values for each pixel, copying it to a PBO, and using it as the vertex buffer for drawing GL_POINTS. Everything except for the alpha blending appears to be working as expected. I've confirmed both that the input vertex buffer has the correct values, and that my vertex and fragment shaders are outputting the points to the correct locations and with the correct luminance values. The way that I've arrived at the conclusion that the Alpha blending was broken is by making my vertex shader output every point to the same screen location and then setting the pixel shader to always output a value of float4(0.5) for that pixel. Invariably, the end color (dumped afterwards) ends up being float4(0.5). The confusing part is that as far as I can tell, everything is properly set for alpha blending to occur. The cgfx pass has the two following state assignments (among others - I'll put a full listing at the end): BlendEnable = true; BlendFunc = int2(One, One); This ought to be enough, since I am calling cgSetPassState() - and indeed, when I use glGets to check the values of GL_BLEND_SRC, GL_BLEND_DEST, GL_BLEND, and GL_BLEND_EQUATION they all look appropriate (GL_ONE, GL_ONE, GL_TRUE, and GL_FUNC_ADD). This check was done immediately after the draw call. I've been looking around to see if there's anything other than blending being enabled and the blending function being correctly set that would cause alpha blending not to occur, but without any luck. I considered that I could be doing something wrong with GL, but GL is telling me that blending is enabled. I doubt it's cgFX related (as otherwise the GL state wouldn't even be thinking it was enabled) but it still fails if I explicitly use GL calls to set the blend mode and enable it. 
    Here's the trimmed down code for starting the cgfx pass and the draw call:

        CGtechnique renderTechnique = Filter->curTechnique;
        TEXUNITCHECK;
        CGpass pass = cgGetFirstPass(renderTechnique);
        TEXUNITCHECK;
        while (pass) {
            cgSetPassState(pass);
            cgUpdatePassParameters(pass);
            //drawFSPointQuadBuff((void*)PointQuad);
            drawFSPointQuadBuff((void*)LumPointBuffer);
            TEXUNITCHECK;
            cgResetPassState(pass);
            pass = cgGetNextPass(pass);
        };

    and the function with the draw call:

        void drawFSPointQuadBuff(void* args) {
            PointBuffer* pointBuffer = (PointBuffer*)args;
            FBOERRCHECK;
            glClear(GL_COLOR_BUFFER_BIT);
            GLERRCHECK;
            glPointSize(1.0);
            GLERRCHECK;
            glEnableClientState(GL_VERTEX_ARRAY);
            GLERRCHECK;
            glEnable(GL_POINT_SMOOTH);
            if (pointBuffer->BufferObject) {
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, (unsigned int)pointBuffer->BufData);
                glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, 0);
            } else {
                glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, pointBuffer->BufData);
            };
            GLERRCHECK;
            glDrawArrays(GL_POINTS, 0, pointBuffer->numElem);
            GLboolean testBool;
            glGetBooleanv(GL_BLEND, &testBool);
            int iblendColor, iblendDest, iblendEquation, iblendSrc;
            glGetIntegerv(GL_BLEND_SRC, &iblendSrc);
            glGetIntegerv(GL_BLEND_DST, &iblendDest);
            glGetIntegerv(GL_BLEND_EQUATION, &iblendEquation);
            if (iblendEquation == GL_FUNC_ADD) {
                cerr << "Correct func" << endl;
            };
            GLERRCHECK;
            if (pointBuffer->BufferObject) {
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
            }
            GLERRCHECK;
            glDisableClientState(GL_VERTEX_ARRAY);
            GLERRCHECK;
        };

    Finally, here is the full state setting of the shader:

        AlphaTestEnable = false;
        DepthTestEnable = false;
        DepthMask = false;
        ColorMask = true;
        CullFaceEnable = false;
        BlendEnable = true;
        BlendFunc = int2(One, One);
        FragmentProgram = compile glslf std_PS();
        VertexProgram = compile glslv bilatGridVS2();

    Read the article

  • Access violation reading location 0x00184000.

    - by numerical25
    Having trouble with the following line:

        HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB));

    It appears the CreateBuffer method is having trouble reading &mVB. mVB is defined in Box.h and looks like this:

        ID3D10Buffer* mVB;

    Below is the code in its entirety - these are all the files that mVB appears in.

        //Box.cpp
        #include "Box.h"
        #include "Vertex.h"
        #include <vector>

        Box::Box()
        : mNumVertices(0), mNumFaces(0), md3dDevice(0), mVB(0), mIB(0)
        {
        }

        Box::~Box()
        {
            ReleaseCOM(mVB);
            ReleaseCOM(mIB);
        }

        float Box::getHeight(float x, float z)const
        {
            return 0.3f*(z*sinf(0.1f*x) + x*cosf(0.1f*z));
        }

        void Box::init(ID3D10Device* device, float m, float n, float dx)
        {
            md3dDevice = device;
            mNumVertices = m*n;
            mNumFaces = 12;

            float halfWidth = (n-1)*dx*0.5f;
            float halfDepth = (m-1)*dx*0.5f;

            std::vector<Vertex> vertices(mNumVertices);
            for(DWORD i = 0; i < m; ++i)
            {
                float z = halfDepth - (i * dx);
                for(DWORD j = 0; j < n; ++j)
                {
                    float x = -halfWidth + (j * dx);
                    float y = getHeight(x,z);
                    vertices[i*n+j].pos = D3DXVECTOR3(x, y, z);
                    if(y < -10.0f)
                        vertices[i*n+j].color = BEACH_SAND;
                    else if(y < 5.0f)
                        vertices[i*n+j].color = LIGHT_YELLOW_GREEN;
                    else if(y < 12.0f)
                        vertices[i*n+j].color = DARK_YELLOW_GREEN;
                    else if(y < 20.0f)
                        vertices[i*n+j].color = DARKBROWN;
                    else
                        vertices[i*n+j].color = WHITE;
                }
            }

            D3D10_BUFFER_DESC vbd;
            vbd.Usage = D3D10_USAGE_IMMUTABLE;
            vbd.ByteWidth = sizeof(Vertex) * mNumVertices;
            vbd.BindFlags = D3D10_BIND_VERTEX_BUFFER;
            vbd.CPUAccessFlags = 0;
            vbd.MiscFlags = 0;
            D3D10_SUBRESOURCE_DATA vinitData;
            vinitData.pSysMem = &vertices;
            HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB));

            //create the index buffer
            std::vector<DWORD> indices(mNumFaces*3); // 3 indices per face
            int k = 0;
            for(DWORD i = 0; i < m-1; ++i)
            {
                for(DWORD j = 0; j < n-1; ++j)
                {
                    indices[k]   = i*n+j;
                    indices[k+1] = i*n+j+1;
                    indices[k+2] = (i*1)*n+j;
                    indices[k+3] = (i*1)*n+j;
                    indices[k+4] = i*n+j+1;
                    indices[k+5] = (i*1)*n+j+1;
                    k += 6;
                }
            }

            D3D10_BUFFER_DESC ibd;
            ibd.Usage = D3D10_USAGE_IMMUTABLE;
            ibd.ByteWidth = sizeof(DWORD) * mNumFaces*3;
            ibd.BindFlags = D3D10_BIND_INDEX_BUFFER;
            ibd.CPUAccessFlags = 0;
            ibd.MiscFlags = 0;
            D3D10_SUBRESOURCE_DATA iinitData;
            iinitData.pSysMem = &indices;
            HR(md3dDevice->CreateBuffer(&ibd, &iinitData, &mIB));
        }

        void Box::Draw()
        {
            UINT stride = sizeof(Vertex);
            UINT offset = 0;
            md3dDevice->IASetVertexBuffers(0, 1, &mVB, &stride, &offset);
            md3dDevice->IASetIndexBuffer(mIB, DXGI_FORMAT_R32_UINT, 0);
            md3dDevice->DrawIndexed(mNumFaces*3, 0, 0);
        }

        //Box.h
        #ifndef _BOX_H
        #define _BOX_H
        #include "d3dUtil.h"

        class Box
        {
        public:
            Box();
            ~Box();
            void init(ID3D10Device* device, float m, float n, float dx);
            void Draw();
            float getHeight(float x, float z)const;
        private:
            DWORD mNumVertices;
            DWORD mNumFaces;
            ID3D10Device* md3dDevice;
            ID3D10Buffer* mVB;
            ID3D10Buffer* mIB;
        };
        #endif

    Thanks again for the help
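
    A likely culprit (an editor's guess, not from the original thread): pSysMem is being pointed at the std::vector objects themselves rather than at their element arrays, so CreateBuffer reads past valid memory. A minimal sketch of the fix:

        // pSysMem must point at the first element, not at the vector object itself.
        vinitData.pSysMem = &vertices[0];   // or vertices.data() in C++11
        iinitData.pSysMem = &indices[0];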

    Read the article

  • How to model dependency injection in UML?

    - by hjo1620
    I have a Contract class. The contract is valid 1 Jan 2010 - 31 Dec 2010. It can be in state Active or Passive, depending on the date for which I ask the instance for its state, e.g. if I ask for 4 July 2010, it's in state Active, but if I ask for 1 Jan 2011, it's in state Passive. Instances are created using constructor dependency injection, i.e. they are either Active or Passive already when created; null is not allowed as a parameter for the internal state member. One initial/created vertex is drawn in UML. I have two arrows leading out from the initial vertex, one leading to state Active and the other to state Passive. Is this a correct representation of dependency injection in UML? This is related to http://stackoverflow.com/questions/2779922/how-model-statemachine-when-state-is-dependent-on-a-function which initiated the question of how to model DI in general in UML.

    Read the article

  • Easy framework for OpenGL Shaders in C/C++

    - by Nils
    I just wanted to try out some shaders on a flat image. Turns out that writing a C program which just takes a picture as a texture and applies, let's say, a Gaussian blur as a fragment shader on it is not that easy: you have to initialize OpenGL, which is like 100 lines of code, then understand the GL buffers, etc. Also, to communicate with the windowing system one has to use GLUT, which is another framework. Nvidia's FX Composer turns out to be nice to play with shaders, but I would still like to have a simple C or C++ program which just applies a given fragment shader to an image and displays the result. Does anybody have an example, or is there a framework for this?

    Read the article

  • OpenGL ES 2.0 Rendering with a Texture

    - by Kyle
    The iPhone SDK has an example of using ES 2.0 with a set of (vertex & fragment) GLSL shaders to render a varying colored box. Is there an example out there of how to render a simple texture using this API? I basically want to take a quad and draw a texture onto it. The old ES 1.1 APIs don't work at all anymore, so I'm needing a bit of help getting started. Most shader references talk mainly about advanced shading topics, but I'm really unsure about how to tell the shader to use the bound texture, and how to reference the UVs. Thanks!
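
    A minimal sketch of the ES 2.0 shader pair for a textured quad (an editor's addition, not from the SDK sample; the attribute and uniform names here are made up):

        // Vertex shader: pass the position through and hand the UV to the fragment stage.
        const char* vsSrc =
            "attribute vec4 a_position;                       \n"
            "attribute vec2 a_texCoord;                       \n"
            "varying vec2 v_texCoord;                         \n"
            "void main() {                                    \n"
            "    gl_Position = a_position;                    \n"
            "    v_texCoord = a_texCoord;                     \n"
            "}                                                \n";

        // Fragment shader: sample the bound texture at the interpolated UV.
        const char* fsSrc =
            "precision mediump float;                         \n"
            "varying vec2 v_texCoord;                         \n"
            "uniform sampler2D u_texture;                     \n"
            "void main() {                                    \n"
            "    gl_FragColor = texture2D(u_texture, v_texCoord);\n"
            "}                                                \n";

        // At draw time: bind the texture to unit 0 and point the sampler uniform at it.
        // glActiveTexture(GL_TEXTURE0);
        // glBindTexture(GL_TEXTURE_2D, textureId);
        // glUniform1i(glGetUniformLocation(program, "u_texture"), 0);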

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

        - Use GL face culling (the OpenGL function, not the scene logic)
        - Only send one matrix to the GPU (the projection-model-view combination), thereby decreasing the MVP calculations from once per vertex to once per model (as it should be)
        - Use interleaved vertices
        - Minimize GL calls as much as possible, batch where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross posted at Game Development.

    Read the article

  • Implementation of a distance matrix of a binary tree that is given in the adjacency-list representation

    - by Denise Giubilei
    Given this problem, I need an O(n²) implementation of this algorithm: "Let v be an arbitrary leaf in a binary tree T, and w be the only vertex connected to v. If we remove v, we are left with a smaller tree T'. The distance between v and any vertex u in T' is 1 plus the distance between w and u." This is a problem and solution from one of Manber's exercises (Exercise 5.12 from U. Manber, Algorithms: A Creative Approach, Addison-Wesley (1989)). The thing is that I can't handle the adjacency-list representation properly, so that the implementation of this algorithm comes out with a real O(n²) complexity. Any ideas of how the implementation should look? Thanks.
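
    One way to realize the O(n²) bound over an adjacency list (an editor's sketch following the quoted recurrence): root the tree anywhere, visit vertices in BFS order so each new vertex v hangs off an already-processed parent w, and fill v's row of the matrix from w's row:

        #include <vector>
        #include <queue>
        using namespace std;

        // adj: adjacency list of the tree (any tree works, binary included).
        vector<vector<int>> treeDistanceMatrix(const vector<vector<int>>& adj, int root = 0) {
            int n = adj.size();
            vector<vector<int>> dist(n, vector<int>(n, 0));
            vector<int> order, parent(n, -1);
            vector<bool> seen(n, false);
            queue<int> q;
            q.push(root);
            seen[root] = true;
            while (!q.empty()) {                 // BFS: parents appear before children
                int v = q.front(); q.pop();
                order.push_back(v);
                for (int w : adj[v])
                    if (!seen[w]) { seen[w] = true; parent[w] = v; q.push(w); }
            }
            // Each new vertex v is a leaf of the part processed so far, so its
            // distance to any earlier vertex u is 1 + dist(parent(v), u).
            for (size_t i = 1; i < order.size(); ++i) {
                int v = order[i], p = parent[v];
                for (size_t j = 0; j < i; ++j) {
                    int u = order[j];
                    dist[v][u] = dist[u][v] = dist[p][u] + 1;
                }
            }
            return dist;   // total work: O(n^2)
        }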

    Read the article

  • How to color a mesh with values at the vertices in WPF 3D?

    - by Christo
    We've got a sphere which we want to display in 3D and color given a function that depends on spherical coordinates. The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface. The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D. With the second triangulation method it isn't apparent how to generate the texture and set the texture coordinates, since the vertices aren't aligned to a grid anymore. Maybe it would be possible to create a linear texture containing a color for each vertex, but then how would that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?

    Read the article

  • Stuck on solving the Minimal Spanning Tree problem.

    - by kunjaan
    I have reduced my problem to finding the minimal spanning tree in the graph. But I want one more constraint, which is that the total degree of each vertex shouldn't exceed a certain constant factor. How do I model my problem? Is MST the wrong path? Do you know any algorithms that will help me? One more problem: my graph has duplicate edge weights, so is there a way to count the number of unique MSTs? Are there algorithms that do this? Thank you. Edit: By degree, I mean the total number of edges connecting the vertex. By duplicate edge weight I mean that two edges have the same weight.
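
    For what it's worth, the degree-constrained spanning tree problem is NP-hard in general (a degree cap of 2 already asks for a Hamiltonian path), so exact MST algorithms don't carry over directly and heuristics are the usual route. An editor's sketch of Kruskal's algorithm with a degree cap bolted on - note it can get stuck even when a valid tree exists, and the result is not guaranteed minimal:

        #include <vector>
        #include <algorithm>
        #include <numeric>
        using namespace std;

        struct Edge { int u, v; double w; };

        int findRoot(vector<int>& p, int x) { return p[x] == x ? x : p[x] = findRoot(p, p[x]); }

        // Returns the edges of a spanning tree with every vertex degree <= maxDeg,
        // or an empty vector if the heuristic gets stuck.
        vector<Edge> degreeCappedKruskal(int n, vector<Edge> edges, int maxDeg) {
            sort(edges.begin(), edges.end(),
                 [](const Edge& a, const Edge& b) { return a.w < b.w; });
            vector<int> parent(n), degree(n, 0);
            iota(parent.begin(), parent.end(), 0);
            vector<Edge> tree;
            for (const Edge& e : edges) {
                int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
                // Take the edge only if it joins two components AND respects the cap.
                if (ru != rv && degree[e.u] < maxDeg && degree[e.v] < maxDeg) {
                    parent[ru] = rv;
                    ++degree[e.u]; ++degree[e.v];
                    tree.push_back(e);
                }
            }
            return (int)tree.size() == n - 1 ? tree : vector<Edge>();
        }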

    Read the article

  • Twin edges - Half edge data structure

    - by Pradeep Kumar
    I have implemented a half-edge data structure for loading 3D objects. I find that the part that assigns twin/pair edges takes the longest computation time (especially for objects with hundreds of thousands of half edges). The reason is that I use nested loops to accomplish this. Is there a simpler and more efficient way of doing this? Below is the code I've written. HE is the half-edge data structure, hearr is a vector containing all the half edges, vert is the starting vertex and end is the ending vertex. Thanks!!

        HE *e1, *e2;
        for (size_t i = 0; i < hearr.size(); i++) {
            e1 = hearr[i];
            for (size_t j = 1; j < hearr.size(); j++) {
                e2 = hearr[j];
                if ((e1->vert == e2->end) && (e2->vert == e1->end)) {
                    e1->twin = e2;
                    e2->twin = e1;
                }
            }
        }
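
    The quadratic pass can be replaced with a single hash-map pass. An editor's sketch, assuming vert and end are (or can be mapped to) 32-bit integer vertex indices - if they are pointers, hashing the pointer pair works the same way:

        #include <unordered_map>
        #include <vector>
        #include <cstdint>

        // Pack an ordered vertex pair (from, to) into one 64-bit key.
        static uint64_t spanKey(uint32_t from, uint32_t to) {
            return (uint64_t(from) << 32) | to;
        }

        // HE as in the question: has vert, end, and twin members.
        void assignTwins(std::vector<HE*>& hearr) {
            std::unordered_map<uint64_t, HE*> bySpan;
            bySpan.reserve(hearr.size());
            for (HE* e : hearr)
                bySpan[spanKey(e->vert, e->end)] = e;      // index every half edge
            for (HE* e : hearr) {
                auto it = bySpan.find(spanKey(e->end, e->vert));  // opposite direction
                if (it != bySpan.end()) {
                    e->twin = it->second;
                    it->second->twin = e;
                }
            }
        }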

    Read the article

  • VBA/VB6 tutorial: OpenGL extensions in VBA and VB6, by Thierry Gasperment (Arkham46)

    Hello everyone! Here is an article on programming OpenGL extensions in VB6/VBA. The article describes the use of a few frequently used extensions: VBOs (vertex buffer objects), to improve performance; 3D textures, to produce continuous textures over a volume; and shaders, widely used to program graphical effects. The examples developed are fairly simple, but they open the door to many 3D possibilities in Visual Basic. You can add your comments on this article as replies to this post.

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (a local minimum) of the direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right. What I perceive to be happening is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction:

        public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir)
        {
            //holding variables
            Vector3 n = Vector3.zero;
            Vector3 swap = Vector3.zero;

            // v0 = center of Minkowski sum
            v0 = Vector3.zero;

            // Avoid case where centers overlap -- any direction is fine in this case
            //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction.

            // v1 = support in direction of origin
            n = -dir;

            //get the difference of the minkowski sum
            Vector3 v11 = GetSupport(shape1, -n);
            Vector3 v12 = GetSupport(shape2, n);
            v1 = v12 - v11;

            //if the support point is not in the direction of the origin
            if (v1.Dot(n) <= 0) {
                //Debug.Log("Could find no points this direction");
                return Vector3.zero;
            }

            // v2 - support perpendicular to v1,v0
            n = v1.Cross(v0);
            if (n == Vector3.zero) {
                //v1 and v0 are parallel, which means
                //the direction leads directly to an endpoint
                n = v1 - v0;
                //shortest distance is just n
                //Debug.Log("2 point return");
                return n;
            }

            //get the new support point
            Vector3 v21 = GetSupport(shape1, -n);
            Vector3 v22 = GetSupport(shape2, n);
            v2 = v22 - v21;
            if (v2.Dot(n) <= 0) {
                //can't reach the origin in this direction, ergo, no collision
                //Debug.Log("Could not reach edge?");
                return Vector2.zero;
            }

            // Determine whether origin is on + or - side of plane (v1,v0,v2)
            //tests linesegments v0v1 and v0v2
            n = (v1 - v0).Cross(v2 - v0);
            float dist = n.Dot(v0);

            // If the origin is on the - side of the plane, reverse the direction of the plane
            if (dist > 0) {
                //swap the winding order of v1 and v2
                swap = v1; v1 = v2; v2 = swap;
                //swap the winding order of v11 and v12
                swap = v12; v12 = v11; v11 = swap;
                //swap the winding order of v21 and v22
                swap = v22; v22 = v21; v21 = swap;
                //and swap the plane normal
                n = -n;
            }

            ///
            // Phase One: Identify a portal
            while (true) {
                // Obtain the support point in a direction perpendicular to the existing plane
                // Note: This point is guaranteed to lie off the plane
                Vector3 v31 = GetSupport(shape1, -n);
                Vector3 v32 = GetSupport(shape2, n);
                v3 = v32 - v31;
                if (v3.Dot(n) <= 0) {
                    //can't enclose the origin within our tetrahedron
                    //Debug.Log("Could not reach edge after portal?");
                    return Vector3.zero;
                }

                // If origin is outside (v1,v0,v3), then eliminate v2 and loop
                if (v1.Cross(v3).Dot(v0) < 0) {
                    //failed to enclose the origin, adjust points;
                    v2 = v3; v21 = v31; v22 = v32;
                    n = (v1 - v0).Cross(v3 - v0);
                    continue;
                }

                // If origin is outside (v3,v0,v2), then eliminate v1 and loop
                if (v3.Cross(v2).Dot(v0) < 0) {
                    //failed to enclose the origin, adjust points;
                    v1 = v3; v11 = v31; v12 = v32;
                    n = (v3 - v0).Cross(v2 - v0);
                    continue;
                }

                bool hit = false;

                ///
                // Phase Two: Refine the portal
                int phase2 = 0;

                // We are now inside of a wedge...
                while (phase2 < 20) {
                    phase2++;

                    // Compute normal of the wedge face
                    n = (v2 - v1).Cross(v3 - v1);
                    n.Normalize();

                    // Compute distance from origin to wedge face
                    float d = n.Dot(v1);

                    // If the origin is inside the wedge, we have a hit
                    if (d > 0) {
                        //Debug.Log("Do plane test here");
                        float T = n.Dot(v2) / n.Dot(dir);
                        Vector3 pointInPlane = (dir * T);
                        return pointInPlane;
                    }

                    // Find the support point in the direction of the wedge face
                    Vector3 v41 = GetSupport(shape1, -n);
                    Vector3 v42 = GetSupport(shape2, n);
                    v4 = v42 - v41;

                    float delta = (v4 - v3).Dot(n);
                    float separation = -(v4.Dot(n));
                    if (delta <= kCollideEpsilon || separation >= 0) {
                        //Debug.Log("Non-convergance detected");
                        //Debug.Log("Do plane test here");
                        return Vector3.zero;
                    }

                    // Compute the tetrahedron dividing face (v4,v0,v1)
                    float d1 = v4.Cross(v1).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v2)
                    float d2 = v4.Cross(v2).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v3)
                    float d3 = v4.Cross(v3).Dot(v0);

                    if (d1 < 0) {
                        if (d2 < 0) {
                            // Inside d1 & inside d2 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        } else {
                            // Inside d1 & outside d2 ==> eliminate v3
                            v3 = v4; v31 = v41; v32 = v42;
                        }
                    } else {
                        if (d3 < 0) {
                            // Outside d1 & inside d3 ==> eliminate v2
                            v2 = v4; v21 = v41; v22 = v42;
                        } else {
                            // Outside d1 & outside d3 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                    }
                }
                return Vector3.zero;
            }
        }

    Read the article

  • OBJ model loaded in LWJGL has a black area with no texture

    - by gambiting
    I have a problem with loading an .obj file in LWJGL and its textures. The object is a tree (it's a paid model from TurboSquid, so I can't post it here, but here's the link if you want to see what it should look like): http://www.turbosquid.com/FullPreview/Index.cfm/ID/701294 I wrote a custom OBJ loader using the LWJGL tutorial from their wiki. It looks like this:

        public class OBJLoader {
            public static Model loadModel(File f) throws FileNotFoundException, IOException {
                BufferedReader reader = new BufferedReader(new FileReader(f));
                Model m = new Model();
                String line;
                Texture currentTexture = null;
                while ((line = reader.readLine()) != null) {
                    if (line.startsWith("v ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.verticies.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vn ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.normals.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vt ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        m.texVerticies.add(new Vector2f(x, y));
                    } else if (line.startsWith("f ")) {
                        Vector3f vertexIndicies = new Vector3f(
                            Float.valueOf(line.split(" ")[1].split("/")[0]),
                            Float.valueOf(line.split(" ")[2].split("/")[0]),
                            Float.valueOf(line.split(" ")[3].split("/")[0]));
                        Vector3f textureIndicies = new Vector3f(
                            Float.valueOf(line.split(" ")[1].split("/")[1]),
                            Float.valueOf(line.split(" ")[2].split("/")[1]),
                            Float.valueOf(line.split(" ")[3].split("/")[1]));
                        Vector3f normalIndicies = new Vector3f(
                            Float.valueOf(line.split(" ")[1].split("/")[2]),
                            Float.valueOf(line.split(" ")[2].split("/")[2]),
                            Float.valueOf(line.split(" ")[3].split("/")[2]));
                        m.faces.add(new Face(vertexIndicies, textureIndicies, normalIndicies,
                                             currentTexture.getTextureID()));
                    } else if (line.startsWith("g ")) {
                        if (line.length() > 2) {
                            String name = line.split(" ")[1];
                            currentTexture = TextureLoader.getTexture("PNG",
                                ResourceLoader.getResourceAsStream("res/" + name + ".png"));
                            System.out.println(currentTexture.getTextureID());
                        }
                    }
                }
                reader.close();
                System.out.println(m.verticies.size() + " verticies");
                System.out.println(m.normals.size() + " normals");
                System.out.println(m.texVerticies.size() + " texture coordinates");
                System.out.println(m.faces.size() + " faces");
                return m;
            }
        }

    Then I create a display list for my model using this code:

        objectDisplayList = GL11.glGenLists(1);
        GL11.glNewList(objectDisplayList, GL11.GL_COMPILE);
        Model m = null;
        try {
            m = OBJLoader.loadModel(new File("res/untitled4.obj"));
        } catch (Exception e1) {
            e1.printStackTrace();
        }
        int currentTexture = 0;
        for (Face face : m.faces) {
            if (face.texture != currentTexture) {
                currentTexture = face.texture;
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, currentTexture);
            }
            GL11.glColor3f(1f, 1f, 1f);
            GL11.glBegin(GL11.GL_TRIANGLES);
            Vector3f n1 = m.normals.get((int) face.normal.x - 1);
            GL11.glNormal3f(n1.x, n1.y, n1.z);
            Vector2f t1 = m.texVerticies.get((int) face.textures.x - 1);
            GL11.glTexCoord2f(t1.x, t1.y);
            Vector3f v1 = m.verticies.get((int) face.vertex.x - 1);
            GL11.glVertex3f(v1.x, v1.y, v1.z);
            Vector3f n2 = m.normals.get((int) face.normal.y - 1);
            GL11.glNormal3f(n2.x, n2.y, n2.z);
            Vector2f t2 = m.texVerticies.get((int) face.textures.y - 1);
            GL11.glTexCoord2f(t2.x, t2.y);
            Vector3f v2 = m.verticies.get((int) face.vertex.y - 1);
            GL11.glVertex3f(v2.x, v2.y, v2.z);
            Vector3f n3 = m.normals.get((int) face.normal.z - 1);
            GL11.glNormal3f(n3.x, n3.y, n3.z);
            Vector2f t3 = m.texVerticies.get((int) face.textures.z - 1);
            GL11.glTexCoord2f(t3.x, t3.y);
            Vector3f v3 = m.verticies.get((int) face.vertex.z - 1);
            GL11.glVertex3f(v3.x, v3.y, v3.z);
            GL11.glEnd();
        }
        GL11.glEndList();

    The currentTexture is an int - it contains the ID of the currently used texture. So my model looks absolutely fine without textures (sorry I cannot post hyperlinks since I am a new user): i.imgur.com/VtoK0.png But look what happens if I enable GL_TEXTURE_2D: i.imgur.com/z8Kli.png i.imgur.com/5e9nn.png i.imgur.com/FAHM9.png As you can see, an entire side of the tree appears to be missing - and it's not transparent, since it's not the colour of the background; it's rendered black. It's not a problem with the model - if I load it using Kanji's OBJ loader it works fine (but the thing is, I need to write my own OBJ loader): i.imgur.com/YDATo.png This is my OpenGL init section:

        //init display
        try {
            Display.setDisplayMode(new DisplayMode(Support.SCREEN_WIDTH, Support.SCREEN_HEIGHT));
            Display.create();
            Display.setVSyncEnabled(true);
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }
        GL11.glLoadIdentity();
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LESS);
        GL11.glDepthMask(true);
        GL11.glEnable(GL11.GL_NORMALIZE);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GLU.gluPerspective(90.0f, 800f/600f, 1f, 500.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glEnable(GL11.GL_CULL_FACE);
        GL11.glCullFace(GL11.GL_BACK);
        //enable lighting
        GL11.glEnable(GL11.GL_LIGHTING);
        ByteBuffer temp = ByteBuffer.allocateDirect(16);
        temp.order(ByteOrder.nativeOrder());
        GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE,
            (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip());
        GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, (int) material_shinyness);
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_DIFFUSE,
            (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse2).flip()); // Setup The Diffuse Light
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_POSITION,
            (FloatBuffer) temp.asFloatBuffer().put(lightPosition2).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_AMBIENT,
            (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_SPECULAR,
            (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse2).flip());
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_CONSTANT_ATTENUATION, 0.1f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_LINEAR_ATTENUATION, 0.0f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_QUADRATIC_ATTENUATION, 0.0f);
        GL11.glEnable(GL11.GL_LIGHT2);

    Could somebody please help me?

    Read the article

  • Camera rotation - First Person Camera using GLM

    - by tempvar
    I've just switched from deprecated OpenGL functions to using shaders and the GLM math library, and I'm having a few problems setting up my camera rotations (first person camera). I'll show what I've got set up so far. I'm setting up my view matrix using the glm::lookAt function, which takes an eye position, target and up vector:

        // arbitrary pos and target values
        pos = glm::vec3(0.0f, 0.0f, 10.0f);
        target = glm::vec3(0.0f, 0.0f, 0.0f);
        up = glm::vec3(0.0f, 1.0f, 0.0f);
        m_view = glm::lookAt(pos, target, up);

    I'm using glm::perspective for my projection, and the model matrix is just identity:

        m_projection = glm::perspective(m_fov, m_aspectRatio, m_near, m_far);
        model = glm::mat4(1.0);

    I send the MVP matrix to my shader to multiply the vertex position:

        glm::mat4 MVP = camera->getProjection() * camera->getView() * model;
        // in shader
        gl_Position = MVP * vec4(vertexPos, 1.0);

    My camera class has standard rotate and translate functions which call glm::rotate and glm::translate respectively:

        void camera::rotate(float amount, glm::vec3 axis) {
            m_view = glm::rotate(m_view, amount, axis);
        }

        void camera::translate(glm::vec3 dir) {
            m_view = glm::translate(m_view, dir);
        }

    and I usually just use the mouse delta position as the amount for rotation. Now, normally in my previous OpenGL applications I'd just set up the yaw and pitch angles and use sin and cos to change the direction vector (with gluLookAt), but I'd like to be able to do this using GLM and matrices. So at the moment I have my camera set 10 units away from the origin, facing it. I can see my geometry fine; it renders perfectly. When I use my rotation function...

        camera->rotate(mouseDeltaX, glm::vec3(0, 1, 0));

    ...what I want is to look to the right and left (like I would by manipulating the lookAt vector with gluLookAt), but what actually happens is that it just rotates the model I'm looking at around the origin, like I'm doing a full circle around it. Because I've translated my view matrix, shouldn't I need to translate it to the centre, do the rotation, then translate back away for it to be rotating around the origin? Also, I've tried using the rotate function around the x axis to get pitch working, but as soon as I rotate the model about 90 degrees, it starts to roll instead of pitch (gimbal lock?). Thanks for your help, guys - if I've not explained it well: basically I'm trying to get a first person camera working with matrix multiplication, and rotating my view matrix just rotates the model around the origin.
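
    One common fix (an editor's sketch, not from the thread): keep the camera state as a position plus yaw/pitch angles, and rebuild the view matrix from scratch every frame instead of accumulating rotations into m_view. This rotates the camera about its own axes and avoids the roll you're seeing. The member names m_pitch, m_yaw, m_pos are assumed; angles are in whatever units your GLM build expects (degrees in older versions, radians in newer ones):

        // Accumulate mouse deltas into m_yaw/m_pitch elsewhere (clamp pitch to +/-89 deg),
        // then derive the view matrix fresh each frame.
        glm::mat4 camera::getView() const {
            glm::mat4 view(1.0f);
            view = glm::rotate(view, m_pitch, glm::vec3(1, 0, 0)); // pitch about camera x
            view = glm::rotate(view, m_yaw,   glm::vec3(0, 1, 0)); // then yaw about world y
            view = glm::translate(view, -m_pos);                   // inverse of camera transform
            return view;
        }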

    Read the article

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?

    "Frequent" binding example:

        for each object
            for each material in the object
                render the part of the object that uses that material

    "Low count" binding example:

        for each material
            for each object using that material
                render the part of the object that uses that material

    I'm planning to use an octree later, and with the "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
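
    Redundant binds do cost real time, which is why renderers usually sort draw calls by state instead of committing to either loop order. A tiny sketch (editor's addition) of the usual middle ground: gather visible parts each frame (e.g. out of the octree), then sort the list by shader and texture before submitting, binding only on change:

        #include <vector>
        #include <algorithm>
        #include <tuple>

        struct DrawItem {
            unsigned shader;   // program id
            unsigned texture;  // texture id
            unsigned mesh;     // whatever identifies the geometry
        };

        void submit(std::vector<DrawItem>& items) {
            // Sort so items with equal state are adjacent.
            std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
                return std::tie(a.shader, a.texture) < std::tie(b.shader, b.texture);
            });
            unsigned curShader = 0, curTexture = 0;
            for (const DrawItem& it : items) {
                if (it.shader != curShader)   { /* glUseProgram(it.shader); */ curShader = it.shader; }
                if (it.texture != curTexture) { /* glBindTexture(...);      */ curTexture = it.texture; }
                // issue the draw call for it.mesh here
            }
        }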

    Read the article

  • Surface of Revolution with 3D surface

    - by user5584
    I have to use this function to get a surface of revolution (homework):

        newVertex = (oldVertex.y,
                     someFunc1(oldVertex.x, oldVertex.y),
                     someFunc2(oldVertex.x, oldVertex.y));

    As far as I know (FIXME), a surface of revolution means rotating a (2D) curve around an axis in 3D. But this vertex computation gives a 3D plane (FIXME again :D), so the rotation isn't obvious. Am I misunderstanding something?
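
    For reference, the textbook construction (an editor's sketch, not the homework's formula): take a profile curve (x, f(x)) and sweep the radius f(x) around the x axis, so every output vertex comes from a (slice, angle) pair:

        #include <vector>
        #include <cmath>

        struct V3 { float x, y, z; };

        // Revolve the 2D curve (x, f(x)) around the x axis.
        std::vector<V3> revolve(float (*f)(float), float x0, float x1, int nx, int nTheta) {
            std::vector<V3> verts;
            for (int i = 0; i <= nx; ++i) {
                float x = x0 + (x1 - x0) * i / nx;
                float r = f(x);                       // radius of this slice
                for (int j = 0; j < nTheta; ++j) {
                    float t = 2.0f * 3.14159265f * j / nTheta;
                    verts.push_back({x, r * std::cos(t), r * std::sin(t)});
                }
            }
            return verts;
        }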

    Read the article

  • How can I compile SM 3.0 effects in D3D11 in slimdx?

    - by jacker
        var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0",
            ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None, null, null, out str);
        var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode);

    This works fine, but if I try to use another shader model, like 4.0 or 3.0, it throws an error on the effect creation:

        E_FAIL: An undetermined error occurred (-2147467259)

    How do I compile older shaders? I've read about device contexts, but I can't find any information on how to use them to maintain DX9 compatibility.

    Read the article
