Search Results

Search found 812 results on 33 pages for 'computational geometry'.


  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines, in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need geometry functions to detect intersections and containment, for the robot's sensors and so that the robot doesn't go inside stuff.

    One idea is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)).

    Then there is the data itself: should the data to draw a shape and the data to do geometric operations on that shape be encapsulated within the same object, or be separate structures (where one would contain the other, or both would be contained inside another structure)?

    How do existing applications that do graphics/geometry work deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using JavaScript instead of a more classical language change how I approach the design?
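
    For contrast, here is a minimal sketch of the two API shapes the question describes, written in C++ for concreteness even though the question targets JavaScript; every name in it (Context, Circle, drawCircle, CircleShape) is illustrative, not from any library:

        struct Context { /* stand-in for a canvas/render target */ };
        struct Circle { double x, y, r; };

        // Style A: two separate libraries; geometry never sees the renderer.
        bool circlesIntersect(const Circle& a, const Circle& b) {
            double dx = a.x - b.x, dy = a.y - b.y, rs = a.r + b.r;
            return dx * dx + dy * dy <= rs * rs;        // squared-distance test
        }
        void drawCircle(Context&, const Circle&) { /* canvas calls go here */ }

        // Style B: the shape object owns both concerns behind one interface.
        class CircleShape {
        public:
            explicit CircleShape(Circle g) : geom(g) {}
            bool intersects(const CircleShape& o) const {
                return circlesIntersect(geom, o.geom);  // can delegate to style A
            }
            void draw(Context& ctx) const { drawCircle(ctx, geom); }
        private:
            Circle geom;    // geometric data kept separate from appearance
        };

    One common compromise is exactly this layering: free geometry functions underneath, thin draw-capable wrappers on top.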

    Read the article

  • Geometry shader for multiple primitives

    - by Byte56
    How can I create a geometry shader that can handle multiple primitives? For example, when creating a geometry shader for triangles, I define a layout like so:

        layout(triangles) in;
        layout(triangle_strip, max_vertices=3) out;

    But if I use this shader then lines or points won't show up. So I tried adding:

        layout(triangles) in;
        layout(triangle_strip, max_vertices=3) out;
        layout(lines) in;
        layout(line_strip, max_vertices=2) out;

    The shader will compile and run, but will only render lines (or whatever the last primitive defined is). So how do I define a single geometry shader that will handle multiple types of primitives? Or is that not possible, and do I need to create multiple shader programs and switch between them before drawing each type?
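
    A sketch of the multiple-programs route the question ends on: a geometry shader's layout(...) in fixes one input primitive type at link time, so one linked program per primitive class is selected before each draw. The program IDs here are hypothetical placeholders, assumed to be compiled and linked elsewhere with the matching layouts:

        #include <GL/glew.h>

        GLuint triProgram, lineProgram, pointProgram;

        void drawBatch(GLenum mode, GLint first, GLsizei count) {
            switch (mode) {                      // pick the matching GS program
                case GL_TRIANGLES: glUseProgram(triProgram);   break;
                case GL_LINES:     glUseProgram(lineProgram);  break;
                case GL_POINTS:    glUseProgram(pointProgram); break;
            }
            glDrawArrays(mode, first, count);
        }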

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:

    Scene 1:
    - Example object: a dirt road (nothing else)
    - Geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh

    Scene 2:
    - Example object: a dirt road (nothing else)
    - Geometry: a simple mesh in the form of a road, with maps and textures simulating the cracks, bumps, etc.

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this? Go heavy on the textures? Or have a blend of both?

    Read the article

  • Generating geometry when using VBO

    - by onedayitwillmake
    Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest ones once enough exist, and then calling glDrawArrays. I am interested in switching to a shader-based model; in the examples I usually see, the VBO is generated at startup and that's basically it. What is the best route for creating geometry in real time using the shader/VBO approach?
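
    One common way to stream geometry that changes every frame is to keep a single VBO created with GL_DYNAMIC_DRAW and re-upload into it, "orphaning" the old storage first. A minimal sketch, assuming a plain position-only vertex layout (the Vertex type and function name are illustrative):

        #include <GL/glew.h>
        #include <vector>

        struct Vertex { float x, y, z; };

        // Re-upload the whole trail each frame. glBufferData with a null
        // pointer "orphans" the previous storage so the driver need not
        // stall on draws still using it; glBufferSubData then fills it.
        void uploadTrail(GLuint vbo, const std::vector<Vertex>& verts) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                         nullptr, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            verts.size() * sizeof(Vertex), verts.data());
        }

    An alternative is a fixed-capacity ring buffer where only the newly appended quad is written with glBufferSubData each frame.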

    Read the article

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-question from Stack Overflow; I thought it would be more appropriate here. There is a lot of code I could be posting. To avoid overloading the page with code, I will post any part of it on request. I am working from the ParticleGS DirectX 10 sample to build a geometry-shader-based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening: http://youtu.be/6NY_hxjMfwY I used to have this issue when using several effects together, when I realised that I needed to explicitly set the geometry shader to null for the other effects. I solved that problem, as you can see in the video, since the rest of the scene draws properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too; the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece of code you request.

    Read the article

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like:

        GLfloat vertices[] = {
             0.5, 0.25, 1.0,
             0.5, 0.75, 1.0,
            -0.5, 0.75, 1.0,
            -0.5, 0.25, 1.0,
             0.6, 0.35, 1.0,
             0.6, 0.85, 1.0,
            -0.6, 0.85, 1.0,
            -0.6, 0.35, 1.0
        };

        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);
        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    My geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main()
        {
            int i;
            vec4 vertex;
            gl_Position = pos[0];
            EmitVertex();
            gl_Position = pos[1];
            EmitVertex();
            gl_Position = pos[2];
            EmitVertex();
            gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices are passed on to the geometry shader, which uses them to construct a primitive of 4 vertices (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please shed some light on this?
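
    One detail worth checking (a guess at the failure, not a confirmed diagnosis): glBindAttribLocation only takes effect when the program is linked, so calling it after glLinkProgram changes nothing until a relink; and the geometry shader declares in vec4 pos[3], which the vertex shader never writes, so reading the built-in gl_PositionIn[] instead would match what the vertex shader actually outputs. The reordered setup would look like:

        glBindAttribLocation(psId, 0, "Position");  // must come before the link
        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);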

    Read the article

  • Coordinate geometry operations in images/discrete space

    - by avd
    I have images which contain line segments, rays, etc. I am representing these line segments using the Bresenham algorithm (that is, by whatever coordinates the algorithm produces between two endpoints). Now I want to do operations such as finding the intersection point of two line segments, finding the projection of one vector onto another, and so on. The problem is that I am not working in continuous space; the line segments are approximated by the Bresenham algorithm. So I want suggestions on the best and most efficient ways to do this. A link to a C++ library or implementation would also be good. Please also suggest some books which deal with such problems.
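
    One well-worn alternative to intersecting the rasterized pixel sets is to keep the original integer endpoints alongside the Bresenham rasterization and run exact predicates on those, rasterizing only for display; this is the foundation exact-arithmetic libraries such as CGAL build on. A minimal sketch:

        #include <cstdint>

        struct Pt { int64_t x, y; };

        // Twice the signed area of triangle abc:
        // > 0 left turn, < 0 right turn, == 0 collinear. Exact in integers.
        int64_t orient(Pt a, Pt b, Pt c) {
            return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        }

        // Proper crossing test on the exact endpoints; touching endpoints
        // and collinear overlap need extra cases, omitted in this sketch.
        bool segmentsCross(Pt a, Pt b, Pt c, Pt d) {
            int64_t d1 = orient(c, d, a), d2 = orient(c, d, b);
            int64_t d3 = orient(a, b, c), d4 = orient(a, b, d);
            return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
        }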

    Read the article

  • Geometry Shader input vertices order

    - by NPS
    MSDN specifies (link) that when using the triangleadj input type in the GS, it should provide me with 6 vertices in a specific order: the 1st vertex of the triangle being processed, a vertex of an adjacent triangle, the 2nd vertex of the triangle being processed, another vertex of an adjacent triangle, and so on. So if I wanted to create a pass-through shader (i.e. output the same triangle I got as input and nothing else) I should return vertices 0, 2 and 4. Is that correct? Well, apparently it isn't, because when I did just that and ran my app, the vertices were flickering (changing positions, disappearing, showing again, or something like that). But when I instead output vertices 0, 1 and 2, the app rendered the mesh correctly. I could provide some code, but it seems like the problem is in the input vertex order, not the code itself. So in what order do input vertices arrive to the GS?

    Read the article

  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e., read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins's GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.
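
    For what it's worth, the diff-identical requirement can be met by construction if the tool never re-serializes lines it does not edit. A minimal illustrative sketch (no error handling, names made up):

        #include <array>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Echo every line verbatim; parse "v" lines on the side so the tool
        // has geometry to work with. Only lines a tool actually edits would
        // ever be re-serialized, so an untouched file diffs as identical.
        int main(int argc, char** argv) {
            if (argc < 3) return 1;
            std::ifstream in(argv[1]);
            std::ofstream out(argv[2]);
            std::vector<std::array<double, 3>> verts;
            std::string line;
            while (std::getline(in, line)) {
                std::istringstream ss(line);
                std::string tag;
                if (ss >> tag && tag == "v") {        // read-only parse
                    std::array<double, 3> v{};
                    ss >> v[0] >> v[1] >> v[2];
                    verts.push_back(v);
                }
                out << line << '\n';                  // written back untouched
            }
        }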

    Read the article

  • Geometry shader: points + triangles

    - by CmasterG
    I have different shaders and, for each shader, an instance of the ShaderClass class, which initializes the shaders, renders them, etc. I use most of the shader classes without a geometry shader, but in one shader class I also use a geometry shader. The problem is that when I render one object with the shader class that uses the geometry shader, all other objects are rendered with the same geometry that I create in the geometry shader. Can you help me? Once I use a geometry shader for one object, do I have to use one for every object? I use DirectX 11 with C++.
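
    A guess at the fix, since the GS stage is sticky pipeline state in D3D11: clear the stage right after drawing the one object that needs it, so subsequent draws run without a geometry shader. Names here (context, indexCount) stand in for the question's own variables:

        // after rendering the object that uses the geometry shader
        context->DrawIndexed(indexCount, 0, 0);
        context->GSSetShader(nullptr, nullptr, 0);  // GS stage off for everything else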

    Read the article

  • Sphere-Sphere intersection and Circle-Sphere intersection

    - by cagirici
    I have code for circle-circle intersection, but I need to expand it to 3D. How do I calculate: (1) the radius and center of the intersection circle of two spheres, and (2) the points of intersection of a sphere and a circle? Given two spheres (sc0, sr0) and (sc1, sr1), I need to calculate the circle of intersection whose center is ci and whose radius is ri. Moreover, given a sphere (sc0, sr0) and a circle (cc0, cr0), I need to calculate the two intersection points (pi0, pi1). I have checked this link and this link, but I could not understand the logic behind them or how to code them. I tried the ProGAL library for sphere-sphere-sphere intersection, but the resulting coordinates are rounded. I need precise results.
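
    For the sphere-sphere case, the standard construction: with d = |c1 - c0|, the intersection circle lies in the plane at signed distance h = (d^2 + r0^2 - r1^2) / (2d) from c0 along the center line, so ci = c0 + (h/d)(c1 - c0) and ri = sqrt(r0^2 - h^2). A sketch in C++ (the vector types are illustrative):

        #include <cmath>
        #include <optional>

        struct Vec3 { double x, y, z; };
        Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

        struct Circle3 { Vec3 center, normal; double radius; };

        // Intersection circle of spheres (c0, r0) and (c1, r1), if any.
        std::optional<Circle3> sphereSphere(Vec3 c0, double r0, Vec3 c1, double r1) {
            Vec3 n = c1 - c0;
            double d = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (d == 0.0) return std::nullopt;             // concentric spheres
            double h  = (d * d + r0 * r0 - r1 * r1) / (2.0 * d);
            double rr = r0 * r0 - h * h;                   // squared circle radius
            if (rr < 0.0) return std::nullopt;             // no intersection
            return Circle3{ c0 + (h / d) * n, (1.0 / d) * n, std::sqrt(rr) };
        }

    The circle-sphere case can then be reduced to circle-circle: cut the sphere by the circle's plane (giving a coplanar circle with center at the projection of sc0 onto the plane) and intersect the two circles in 2D.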

    Read the article

  • GLSL check if fragment is on geometry

    - by mokaschitta
    I am currently writing the positions of my geometry to the RGB channels of gl_FragColor, and I would like to write 1.0 to the alpha channel if the fragment is part of the geometry, and 0.0 if it's empty. Is there a simple way to tell if a fragment belongs to the geometry or not? Maybe through gl_FragCoord.z? Thanks.

    Read the article

  • How to fix file system's CHS geometry?

    - by eigenein
    I'm trying to check a FAT16 file system with GParted, and the check fails with the following message:

        The file system's CHS geometry is (484, 16383, 63), which is invalid. The partition
        table's CHS geometry is (31130, 255, 63). If you select Ignore, the file system's CHS
        geometry will be left unchanged. If you select Fix, the file system's geometry will be
        set to match the partition table's CHS geometry.

    The check just fails without any Ignore/Fix prompt. How do I fix this?

    Read the article

  • Multiple meshes with one geometry and different textures. Error

    - by user1821834
    I have a loop where I create multiple meshes with different geometry, because each mesh has one texture:

        ....
        var geoCube = new THREE.CubeGeometry(voxelSize, voxelSize, voxelSize);
        var geometry = new THREE.Geometry();
        for (var i = 0; i < voxels.length; i++) {
            var voxel = voxels[i];
            var object;
            color = voxel.color;
            // Returns the texture with a color and a text for each face of the geometry
            texture = almacen.textPlaneTexture(voxel.texto, color, voxelSize);
            material = new THREE.MeshBasicMaterial({ map: texture });
            object = new THREE.Mesh(geoCube, material);
            THREE.GeometryUtils.merge(geometry, object);
        }
        // Add the merged geometry to the scene
        mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial());
        mesh.geometry.computeFaceNormals();
        mesh.geometry.computeVertexNormals();
        mesh.geometry.computeTangents();
        scene.add(mesh);
        ....

    But now I get this error in the Three.js code:

        Uncaught TypeError: Cannot read property 'map' of undefined

    in the function:

        function bufferGuessUVType ( material ) { .... }

    Update: I have finally removed the merge solution and now use a single geometry for all the voxels, although I think the app would perform better if I merged the meshes...

    Read the article

  • Computational geometry: find where the triangle is after rotation, translation or reflection on a mirror

    - by newba
    I have a small contest problem in which I'm given a set of points, in 2D, that form a triangle. This triangle may be subject to an arbitrary rotation, an arbitrary translation (both in the 2D plane), and a reflection on a mirror, but its dimensions are kept unchanged. Then they give me a set of points in the plane, and I have to find 3 points that form my triangle after one or more of those geometric operations. Example:

        5 15 8 5 20 10 6 5 17 5 20 20 5 10 5 15 20 15 10

    I bet I have to apply some known algorithm, but I don't know which. The most common are convex hull, sweep line, triangulation, etc. Can someone give a tip? I don't need the code, only a push, please!
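
    A push in one possible direction (not necessarily the intended solution): rotations, translations, and reflections all preserve distances, so a triangle is determined up to those motions by its multiset of squared side lengths, which stays exact in integer arithmetic. An O(n^3) scan over candidate triples, fine for small contest inputs, sketched below:

        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <vector>

        struct P { int64_t x, y; };

        int64_t d2(P a, P b) {                        // squared distance: exact
            return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
        }

        std::array<int64_t, 3> sides(P a, P b, P c) { // sorted squared sides
            std::array<int64_t, 3> s{ d2(a, b), d2(b, c), d2(c, a) };
            std::sort(s.begin(), s.end());
            return s;
        }

        // Indices i<j<k of a triangle congruent to (t0,t1,t2), or {-1,-1,-1}.
        std::array<int, 3> findCongruent(P t0, P t1, P t2,
                                         const std::vector<P>& pts) {
            auto target = sides(t0, t1, t2);
            int n = (int)pts.size();
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    for (int k = j + 1; k < n; k++)
                        if (sides(pts[i], pts[j], pts[k]) == target)
                            return {i, j, k};
            return {-1, -1, -1};
        }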

    Read the article

  • Nautilus window initial geometry with Gnome 3

    - by elomage
    I would like to open one or more Nautilus windows, from the command line or a script, at certain positions on my screen/desktop while in GNOME 3. I could do this in Ubuntu 11.10 by specifying the geometry. For example, to open a window at the bottom-right corner from the command line I could use:

        nautilus --geometry 600x475-0-0 ~/mystuff

    But under GNOME 3 the geometry option is ignored, or overridden. Is there a way to make this work?

    Read the article

  • Dependency property does not work within a geometry in a controltemplate

    - by Erik Bongers
    I have a DependencyProperty (a boolean) that works fine on an Ellipse, but not on an ArcSegment. Am I doing something that is not possible? Here's part of the XAML. Both of the TemplateBindings, Origin and LargeArc, fail inside the geometry. But the LargeArc DependencyProperty does work on the Ellipse, so my DependencyProperty seems to be set up correctly.

        <ControlTemplate TargetType="{x:Type nodes:TestCircle}">
            <Canvas Background="AliceBlue">
                <Ellipse Height="10" Width="10" Fill="Yellow"
                         Visibility="{TemplateBinding LargeArc, Converter={StaticResource BoolToVisConverter}}"/>
                <Path Canvas.Left="0" Canvas.Top="0" Stroke="Black" StrokeThickness="3">
                    <Path.Data>
                        <GeometryGroup>
                            <PathGeometry>
                                <PathFigure IsClosed="True" StartPoint="{TemplateBinding Origin}">
                                    <LineSegment Point="150,100" />
                                    <ArcSegment Point="140,150" IsLargeArc="{TemplateBinding LargeArc}"
                                                Size="50,50" SweepDirection="Clockwise"/>
                                </PathFigure>
                            </PathGeometry>
                        </GeometryGroup>
                    </Path.Data>
                </Path>
            </Canvas>
        </ControlTemplate>

    What I'm trying to build is a (sort of) pie-shaped UserControl where the shape of the pie is defined by DependencyProperties and the actual graphics used are in a template, so they can be replaced or customized. In other words: I would like the code-behind to be visual-free (which, I assume, is good separation).

    SOLUTION (I'm not allowed to answer my own questions yet):

    I found the answer myself, and it may be useful for others encountering the same issue. This is why the TemplateBinding on the geometry failed: a TemplateBinding will only work when binding a DependencyProperty to another DependencyProperty. The following article set me on the right track: http://blogs.msdn.com/b/liviuc/archive/2009/12/14/wpf-templatebinding-vs-relativesource-templatedparent.aspx The ArcSegment properties are not DependencyProperties. Thus, the solution to the above problem is to replace

        <ArcSegment Point="140,150" IsLargeArc="{TemplateBinding LargeArc}"

    with

        <ArcSegment Point="140,150" IsLargeArc="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=LargeArc}"

    Colin, your working example where an 'ordinary' binding was used in the geometry set me on the right track. BTW, love the infographics and the construction of your UserControl in your blog post. And, hey, that quick tip on code snippets, and especially on that DP attribute and the separation of those DPs into a partial class file, is pure gold!

    Read the article

  • .NET Geometry Library

    - by dewald
    Does anyone know of a good (efficient, nice API, etc.) open-source geometry library for .NET? Some of the operations needed:

    Data structures:
    - Vectors (2D and 3D, with floats and doubles)
    - Lines (2D and 3D)
    - Rectangles / squares / cubes / boxes
    - Spheres / circles
    - N-sided polygons
    - Matrices (floats and doubles)

    Algorithms:
    - Intersection calculations
    - Area / volume calculations

    Read the article

  • Google maps Geometry Controls from GMaps Utility Library

    - by TiagoMartins
    Hi everybody, I'm working on Google Maps, specifically on geometry controls. The point is: in this example, when I click on a line or polygon, an info window shows up, but the language is English (by default, I think). Can I change the language? In the tooltips I can replace the text, but in this particular case I have no place to replace it, which leaves me thinking the "language" is automatic. Am I wrong? Best regards.

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling; I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems like a precision error. I don't know what specifically causes the precision to fail. It doesn't seem to be just the size of the geometry, because simply scaling the vertex positions passed to the vertex shader does not solve the issue.

    Here are some examples of the texture distortion:
    - Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
    - What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png

    Observations and attempted fixes:
    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X/Y/Z location are in the range of -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping or using GL_NEAREST/GL_LINEAR does nothing, and enabling/disabling anisotropic filtering does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either).

    I'm thinking this problem has to have been solved at some point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
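
    For what it's worth, the usual remedy for distance-dependent artifacts like this is not global scaling (dividing by 1024 shrinks the values and the gap between representable floats by the same factor, so nothing is gained) but rebasing: split the world into chunks, store each chunk's vertices relative to a local origin, and fold the large offset into the per-chunk model-view translation computed in double precision on the CPU. A hedged sketch of the rebasing step (types and names are illustrative, not from any engine):

        #include <vector>

        struct V3 { float x, y, z; };

        // Rebase one chunk's positions onto a local origin so the floats the
        // GPU interpolates stay small; the returned origin is added back via
        // that chunk's model-view translation, computed on the CPU in double
        // precision.
        V3 rebaseChunk(std::vector<V3>& verts) {
            V3 o = verts.empty() ? V3{0, 0, 0} : verts[0];  // any nearby point
            for (V3& v : verts) { v.x -= o.x; v.y -= o.y; v.z -= o.z; }
            return o;
        }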

    Read the article

  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then, when I get back to the main menu, all sprites and text disappear. These sprites don't disappear when I use only pixel and vertex shaders. The sprites are drawn through D3D11, of course, with specified view and projection matrices, as well as an input layout, vertex shader, and pixel shader. I've tried DeviceContext->ClearState(), but it does not help. Any ideas?

        void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
        {
            unsigned int offset = 0;
            auto context = mp_D3D->mp_Context;

            // set topology
            context->IASetPrimitiveTopology(m_Topology);
            // set input layout
            context->IASetInputLayout(mp_inputLayout);
            // set vertex and index buffers
            context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
            context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);

            // send constant buffers to shaders
            context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
            context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());
            if (gs != nullptr)
            {
                context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
                context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); // after this call all sprites disappear
            }

            // set shaders
            context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
            context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);

            // draw
            context->DrawIndexed(m_indexCount, 0, 0);
        }

        // sprites
        void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source,
                                 const Matrix& spriteMatrix, const float& rotation,
                                 Vector2d& position, const Vector2d& origin, const Color& color)
        {
            VertexPositionColorTexture* verticesPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
            unsigned int offset = 0;

            float halfWidth  = (float)dest.right / 2.0f;
            float halfHeight = (float)dest.bottom / 2.0f;
            float z = 0.1f;

            int w = texture->Width();
            int h = texture->Height();
            float tu = (float)source.right / (w);
            float tv = (float)source.bottom / (h);
            float hu = (float)source.left / (w);
            float hv = (float)source.top / (h);

            Vector2d t0 = Vector2d(hu + tu, hv);
            Vector2d t1 = Vector2d(hu + tu, hv + tv);
            Vector2d t2 = Vector2d(hu, hv + tv);
            Vector2d t3 = Vector2d(hu, hv + tv);
            Vector2d t4 = Vector2d(hu, hv);
            Vector2d t5 = Vector2d(hu + tu, hv);

            float ex = (dest.right / 2) + (origin.x);
            float ey = (dest.bottom / 2) + (origin.y);

            Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
            VertexPositionColorTexture vertices[] =
            {
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t0 },
                { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t2 },
                { Vector3d(-ex,             dest.bottom - ey, z), v4Color, t3 },
                { Vector3d(-ex,             -ey,              z), v4Color, t4 },
                { Vector3d(dest.right - ex, -ey,              z), v4Color, t5 },
            };

            auto mp_context = mp_D3D->mp_Context;

            // Lock the vertex buffer so it can be written to.
            mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            // Get a pointer to the data in the vertex buffer.
            verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
            // Copy the data into the vertex buffer.
            memcpy(verticesPtr, (void*)vertices, (sizeof(VertexPositionColorTexture) * 6));
            // Unlock the vertex buffer.
            mp_context->Unmap(mp_vertexBuffer, 0);

            // set vertex buffer
            mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
            // set texture
            mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
            // set matrix to shader
            mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
            mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
            // draw sprite
            mp_context->Draw(6, 0);
        }

    Read the article

  • Do GLSL geometry shaders work on the GMA X3100 under OSX

    - by GameFreak
    I am trying to use a trivial geometry shader, but when run in Shader Builder on a laptop with a GMA X3100, it falls back and uses the software renderer. According to this document, the GMA X3100 does support EXT_geometry_shader4. The input is POINTS and the output is LINE_STRIP. What would be required to get it to run on the GPU (if possible)?

        uniform vec2 offset;
        void main()
        {
            gl_Position = gl_PositionIn[0];
            EmitVertex();
            gl_Position = gl_PositionIn[0] + vec4(offset.x, offset.y, 0, 0);
            EmitVertex();
            EndPrimitive();
        }

    Read the article

  • A problem with connected points and determining geometry figures based on points' location analysis

    - by StolePopov
    In school we have a really hard problem, and none of the students has solved it yet. Take a look at the picture below:

        http://d.imagehost.org/0422/mreza.gif

    That's a kind of network of connected points, which doesn't end, and each point has its own number representing it. Let's say the numbers are laid out like this: 1 - 2 3 - 4 5 6 - 7 8 9 10 - etc. (You can't see the numbers 5 or 8, 9... in the picture, but they are there and their positions are obvious; the point in the middle of 4 and 6 is 5, and so on.) 1 is connected to 2 and 3; 2 is connected to 1, 3, 5 and 4; etc. The numbers 1-2-3 represent a triangle in the picture, but 1-4-6 do not, because 4 is not directly connected with 6. Let's look at 2-3-4-5: that's a parallelogram (you know why), but 4-6-7-9 is NOT a parallelogram, because in this problem there's a rule which says all sides must be equal for all the figures: triangles and parallelograms. There are also hexagons; for example, 4-5-7-9-13-12 is a hexagon (all sides must be equal here too). The sequence 1 2 3 4 5 doesn't represent anything, so we ignore it. I think I explained the problem well. The actual problem is, given an input of numbers like the above, to determine whether it represents a triangle/parallelogram/hexagon (according to the described rules). For example:

        1 2 3 - triangle
        11 13 24 26 - parallelogram
        1 2 3 4 5 - nothing
        11 23 13 25 - nothing
        3 2 5 - triangle

    I was reading computational geometry in order to solve this, but I gave up quickly; nothing there seems to help. A friend told me about this site, so I decided to give it a try. If you have any ideas about how to solve this, please reply; you can use pseudocode, C++, whatever. Thank you very much.
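
    One way to make the figure tests concrete (a sketch of the coordinate step only; the triangle/parallelogram/hexagon checks would sit on top of it): convert each point number to its (row, position) in the triangular grid, then use scaled integer coordinates in which four times the squared Euclidean distance is exact. With X = 2*position - row and Y = row, 4*dist^2 = dX^2 + 3*dY^2, so a triangle is three points whose pairwise values all equal 4 (the unit edge); as a check, points 1, 2, 3 give pairwise values 4, 4, 4.

        #include <cmath>
        #include <cstdint>

        struct XY { int64_t X, Y; };

        // Point n (1-based; row r ends at r*(r+1)/2) -> scaled coordinates.
        XY coords(int64_t n) {
            int64_t r = (int64_t)std::ceil((std::sqrt(8.0 * n + 1.0) - 1.0) / 2.0);
            while (r * (r + 1) / 2 < n) ++r;     // guard against float rounding
            while (r * (r - 1) / 2 >= n) --r;
            int64_t p = n - r * (r - 1) / 2;     // position within row r
            return { 2 * p - r, r };
        }

        // Exactly 4 * squared Euclidean distance between two lattice points.
        int64_t dist4(XY a, XY b) {
            int64_t dx = a.X - b.X, dy = a.Y - b.Y;
            return dx * dx + 3 * dy * dy;
        }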

    Read the article

  • Trade offs of linking versus skinning geometry

    - by Jeff
    What are the trade-offs inherent in linking geometry to a node versus using skinned geometry? Specifically:

    - What capabilities do you gain or lose with each method?
    - What are the performance impacts of doing one over the other?
    - In what specific situations would you want to do one over the other?

    In addition, do the answers to these questions tend to be engine-specific? If so, how much?

    Read the article
