Search Results

Search found 812 results on 33 pages for 'computational geometry'.

Page 20/33

  • Creating models in 3ds max and exporting as .x for XNA

    - by Sweta Dwivedi
    I have created a few models in 3ds Max which contain textures, geometry and animations. However, .fbx doesn't really support textures, so I'm planning to use the .x format. I have seen a few converters from Pandasoft, but once I unzip the file and place the .dle file in the plugins folder of 3ds Max, it gives an error saying it failed to initialize. Is there any way to convert my .max models into .x format? I don't know Blender, so that isn't an option. I'm currently using 3ds Max 2013. After adding the .3DS object content importer, I get the following error:

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on hardware instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 draw calls for the cube I get the error "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863", while everything still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now, I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces that cover multiple cubes where the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple?
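    Not from the post: the HiDef limit applies per vertex buffer, so one hedged workaround is to split the per-instance data across several smaller buffers (or refill one DynamicVertexBuffer per batch) and issue one DrawInstancedPrimitives call per batch. A minimal XNA 4 sketch, where instanceData, instanceBuffer, the cube geometry buffers and the cube counts are assumed names:

        // Sketch only: render many cubes in batches instead of one gigantic instance buffer.
        const int InstancesPerBatch = 250000; // stays well under the HiDef vertex buffer limit

        for (int start = 0; start < instanceData.Length; start += InstancesPerBatch)
        {
            int count = Math.Min(InstancesPerBatch, instanceData.Length - start);

            // Refill the reusable dynamic instance buffer with this batch's transforms.
            instanceBuffer.SetData(instanceData, start, count, SetDataOptions.Discard);

            graphicsDevice.SetVertexBuffers(
                new VertexBufferBinding(cubeVertexBuffer, 0, 0),
                new VertexBufferBinding(instanceBuffer, 0, 1));
            graphicsDevice.Indices = cubeIndexBuffer;

            // One draw call per batch.
            graphicsDevice.DrawInstancedPrimitives(
                PrimitiveType.TriangleList, 0, 0,
                cubeVertexCount, 0, cubePrimitiveCount, count);
        }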

    Read the article

  • Is there any advantage in using DX10/11 for a 2D game?

    - by David Gouveia
    I'm not entirely familiar with the feature set introduced by DX10/11-class hardware. I'm vaguely familiar with the new stages added to the programmable graphics pipeline, such as the geometry shader, the compute shader, and the new tessellation stages. I don't see how any of these make much of a difference for a 2D game, though. Is there any compelling reason to make the switch to DX10/11 (or the OpenGL equivalents) for a 2D game, or would it be wiser to stick with DX9, considering that a significant share of the market still runs on older technologies (e.g. the February 2012 Steam survey lists around 17% of users as still using Windows XP)?

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere at a high order of tessellation during a visibility pass onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (the render target), so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified and, as a result, the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges were visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. I fixed the problem by sampling the item buffer with 8 points:

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring, lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Windows XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader) the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea of what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 X2 (single card, dual GPU) from Sapphire, dual-booting Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

        string vertexShaderSource = @"
        #version 330
        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;       //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);
            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);
            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));
            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;
            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;
            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable
        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";

    Note that in this code the color data is accepted but not actually used. The geometry comes out the same (wrong) whether or not the fragment shader uses varyingColor. Only if I comment out the line varyingColor = in_color; does the geometry come out correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.
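    Not from the post: with this symptom, one hedged thing to check is that the shaders never pin their attribute locations; the driver is free to assign in_position and in_color to different slots, and if the OpenTK side assumes in_position is attribute 0, an AMD driver may disagree as soon as a second attribute appears. A minimal sketch, assuming a shaderProgram handle, of fixing the locations before linking:

        // Sketch only: shaderProgram is an assumed handle; call before GL.LinkProgram.
        GL.BindAttribLocation(shaderProgram, 0, "in_position"); // pin the slots the client code expects
        GL.BindAttribLocation(shaderProgram, 1, "in_color");
        GL.LinkProgram(shaderProgram);                          // bindings take effect at link time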

    Read the article

  • How to make chromium-browser start on a VNC display?

    - by Oleksandr Dudchenko
    I have started tightvncserver on Lubuntu 12.04 via the command

        $ tightvncserver -geometry 800x600 -depth 16 :2

    The VNC server started successfully and I got a message like the following:

        New 'X' desktop is gateway:2
        Starting applications specified in /home/dolv/.vnc/xstartup
        Log file is /home/dolv/.vnc/gateway:2.log

    Then I successfully logged in from a remote PC using the RealVNC client. Trying to start chromium-browser from the menu... no luck. There was one more attempt: I opened LXTerminal from the menu and tried to start it from the terminal with the command /usr/bin/chromium-browser & which returned a message like the following:

        Xlib: extension "RANDR" missing on display :2

    I have also discovered that after my two attempts chromium-browser created 2 new windows on the host, in the session running on display :0. The question: how do I make the browser start on the display from which it was called (in my case, the VNC session's display)?

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut in the process of optimizing the 3D geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, and those are also cut. Does that mean IBOs must not be used with complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?

    Read the article

  • Is the copy/paste approach professionally viable when working with the Google Maps API?

    - by Ian Campbell
    I find that I understand many of the JavaScript concepts used in the Google Maps API code, but then again there is quite a bit whose syntax is way over my head. For example, the geocoder syntax seems to be Ajax-like, though I don't understand what is happening under the hood (especially with lines like results[0].geometry.location). I am able to modify the body of if (status == google.maps.GeocoderStatus.OK) for different purposes, though. So, given that I am able to take various code from the Developer's Guide and rework it to an extent for my own purposes, all the while not fully understanding what Google Maps is actually doing, does this make me a copy-paste programmer? Is this a bad practice, or is it professionally viable? I am, of course, interested in learning as much as I can, but what if time constraints outweigh the learning process?

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen, however I've run into a problem. Like I said, I'm trying to instance the quad, so I'm trying to use the same geometry several times in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do so when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm starting to assume that it's simply not possible. Is there any way I could achieve this?

    Read the article

  • I am new to game development, what do I need to know? [closed]

    - by farmdve
    I am unsure if this question is a duplicate; I hope it isn't. Are there any resources on the terminology used in game development? Because even if you tell me to learn some graphics API, how would I understand the things it does if I am not well into the terminology (voxel, mesh, polygon, shading)? What about the math that is involved in a game (geometry), or concepts like gravity and collision detection and their respective maths? I am very bad at math, never was good, because I have ADHD, but I won't give up just yet. I look at a game and I see "textures", but how am I walking on them, how do they take substance so I don't fall off of them? And depth? This is what I need information about, not just a link to a library like SDL (which I have compiled under MinGW and MinGW-W64) with advice to learn it and the cliché answer "start simple/small". I hope the question(s) are not too vague.

    Read the article

  • System hangs at glReadPixels call with GL_TEXTURE_2D_ARRAY for texturing

    - by Roshan
    I am calling glReadPixels after a glDrawArrays call. I am rendering geometry with a 3D texture on it using the target GL_TEXTURE_2D_ARRAY. My system hangs at the glReadPixels call. When I use the target GL_TEXTURE_3D the issue does not occur and it correctly reads the framebuffer contents. glReadPixels(0, 0, GetViewportWidth(), GetViewportHeight(), GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)rendered_pixels); I am using SNORM textures with GL_BYTE data in the glTexImage3D call and I am not calling glPixelStorei. Is it because of this? What should the parameters for the pixel-store call be?

    Read the article

  • XNA: draw a sprite in 3D, is that possible?

    - by Heisenbug
    Until now I have always used sprites to draw in 2D: spriteBatch.Draw(myTexture, rectangle, color); (I suppose the texture is bound internally to 2 triangles and then scaled.) Now I'm porting my game to 3D and I have to draw several planes (walls, floor, roof, ...). Do I need to manually bind a texture to geometry (for example using VertexPositionColorTexture with a VertexBuffer and an IndexBuffer), or is there any simpler way to do that? I'm looking for something like spriteBatch.Draw with the rectangle clip specified in 3D space: spriteBatch.Draw(myTexture, rectangleIn3D, color);
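    Not from the post: there is no built-in overload that takes a 3D rectangle, but a hedged sketch of the usual minimal alternative is a textured quad drawn with BasicEffect; the names graphicsDevice, camera and wallPosition below are assumptions:

        // Four corners of a unit quad plus UVs; BasicEffect handles the texturing.
        var verts = new VertexPositionTexture[]
        {
            new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)), // top-left
            new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)), // top-right
            new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)), // bottom-left
            new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)), // bottom-right
        };
        short[] indices = { 0, 1, 2, 2, 1, 3 };

        var effect = new BasicEffect(graphicsDevice)
        {
            TextureEnabled = true,
            Texture = myTexture,                             // the same Texture2D used with SpriteBatch
            World = Matrix.CreateTranslation(wallPosition),  // places/orients the quad in the scene
            View = camera.View,
            Projection = camera.Projection,
        };

        foreach (var pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            graphicsDevice.DrawUserIndexedPrimitives(
                PrimitiveType.TriangleList, verts, 0, verts.Length, indices, 0, 2);
        }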

    Read the article

  • My gnome-terminal keeps opening new windows

    - by evan
    I actually want to change the default window position of gnome-terminal on my Ubuntu 12.04 system. After some searching, I found that others use the command gnome-terminal --geometry=120x80+50+50 to set the default position. I didn't know where to put the command, so I pasted it into the 'custom command' field of the terminal's profile. Now when I open one terminal, it just keeps opening new ones and I have no way to stop it other than Ctrl+C. I even removed the .gconf/gnome-terminal/ folder and it didn't work. Can someone help me?

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to a point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried to add a normal map to the equation but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question would be: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches, the plane is one single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse at your leisure if you like. Also, if this question is more suited to somewhere else, i.e. Stack Overflow or the Programming stack, please let me know.

    Read the article

  • Animate sprite/texture position with VBO

    - by Dono
    I'm currently working on a renderer for my projects and I want to animate a sprite on screen. I've got a spritesheet, but I don't know what the best way is to update the texture coordinates for each vertex: update the vertices and then update the vertex buffer (heavy?); send the texture coordinates to the shader (is that possible?); or don't use a VBO at all? By the way, I've got this structure: an Object class with Geometry (faces + vertices + buffer) and Material (shader + other stuff) properties. Is that a good structure? Thanks!
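    Not from the post: a hedged sketch of the second option, which is usually the cheapest; the quad's VBO keeps static texture coordinates in [0,1] and the shader offsets/scales them with a per-frame parameter. The effect variable, the parameter name uvOffsetScale, the columns/rows layout and framesPerSecond are all assumptions:

        // Pick the current frame from a spritesheet laid out in a columns x rows grid.
        int frame = (int)(gameTime.TotalGameTime.TotalSeconds * framesPerSecond) % (columns * rows);
        float u = (frame % columns) / (float)columns;
        float v = (frame / columns) / (float)rows;

        // The shader is assumed to compute: texCoord * uvOffsetScale.zw + uvOffsetScale.xy
        effect.Parameters["uvOffsetScale"].SetValue(new Vector4(u, v, 1f / columns, 1f / rows));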

    Read the article

  • What is the minimum set of shaders I need to run basic calculations on the GPU?

    - by Jinxi
    I read that the hull shader, domain shader, geometry shader and pixel shader can be used optionally. So, is the vertex shader optional too? If not: what does a basic vertex shader look like? Just a simple pass-through? Is the vertex shader necessary to tell what kind of data structure (Van Stripes or Meshes) is used? What can I do with just the vertex shader? Do the fixed functions work without any help from a programmable stage?

    Read the article

  • Is an extra collision-mesh for level-data worth the hassle?

    - by Serthy
    What is the optimal approach for collision detection with the environment in a 3D engine (with triangle-mesh-based geometry, no BSP)?
    A) Use the render mesh
    [+] no need for additional work for artists to fiddle with collision detection
    [-] high detail is harder for physics calculations
    [+/-] maybe use collidable flags for materials
    [+/-] compute the collision mesh from the render mesh
    B) Use an additional collision mesh
    [+] faster/more optimal collision detection
    [-] additional work (either for the artist, or for the programmer who has to develop an algorithm to compute it from the render mesh)
    [-] more memory usage
    How do AAA titles handle this? And what are the indie devs' approaches?

    Read the article

  • Multiple ( V- / I- ) Buffers, is it sane?

    - by Techie
    Currently I am developing an RTS game using XNA (/ ANX.Framework). There is one thing that bothers me: I am not sure how to organise my buffers. Should I use a new VertexBuffer for every object (e.g. a character, a table, a model), or is it better to use ONE huge buffer to store all the geometry in? I am still new to 3D programming, though I have finished a couple of games using DirectX 9. Well, I hope this question isn't a duplicate, and I appreciate any answer leading me in the right direction.
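    Not from the post: for static geometry, a hedged middle ground is to pack several meshes into one shared VertexBuffer/IndexBuffer and remember each mesh's offsets, so the buffers are bound once per frame while each object is still drawn (and culled) individually. A small XNA 4 sketch, with MeshRange, meshOffsets and the shared buffers as assumed names:

        // Each entry records where one mesh lives inside the shared buffers.
        struct MeshRange { public int BaseVertex, VertexCount, StartIndex, PrimitiveCount; }

        graphicsDevice.SetVertexBuffer(sharedVertexBuffer);
        graphicsDevice.Indices = sharedIndexBuffer;

        foreach (MeshRange m in meshOffsets)   // one small draw call per visible object
        {
            graphicsDevice.DrawIndexedPrimitives(
                PrimitiveType.TriangleList,
                m.BaseVertex, 0, m.VertexCount,
                m.StartIndex, m.PrimitiveCount);
        }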

    Read the article

  • How do I connect the seams between my terrain?

    - by gnomgrol
    I'm using C++ and D3D11 and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation down and have already split it up into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I have read a lot about LOD (level of detail) and GMM (geometry mipmaps), but I can't really implement the theory I've read. At the moment, it looks like this: I could really use some help; everything is welcome. If you have some good tutorials on any of this, please share them.
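    Not from the post: one hedged thing to check for gaps like this is whether adjacent chunks actually share their border vertices; if each chunk samples its own heightmap, the last row/column of one chunk and the first of the next must come from the same global samples. A small sketch (in C# like the other sketches on this page; the idea carries over to C++/D3D11), with cx, cz, chunkSize and heightmap as assumed names:

        // Chunk (cx, cz) covers chunkSize quads, i.e. chunkSize + 1 vertices per side,
        // so its last column equals the first column of chunk (cx + 1, cz).
        var verts = new Vector3[(chunkSize + 1) * (chunkSize + 1)];
        for (int z = 0; z <= chunkSize; z++)
        for (int x = 0; x <= chunkSize; x++)
        {
            int gx = cx * chunkSize + x;   // global heightmap coordinates, shared at chunk edges
            int gz = cz * chunkSize + z;
            verts[z * (chunkSize + 1) + x] = new Vector3(gx, heightmap[gx, gz], gz);
        }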

    Read the article

  • Is the "impossible object" possible in computer graphics?

    - by CPP_Person
    This may be a silly question, but I want to know the answer to it. I saw this thing called the "impossible object"; while there are many different images of it online, it is supposed to be impossible geometry. Here is an example: Now, as far as logic goes, I know you don't have to obey it in games, such as with a flying cow or an impossible object. So that's out of the way, but what stands in my way is whether or not there is a way to draw this in a 3D scene. Is there a way to represent it as a 3D object? Thanks!

    Read the article

  • How can I derive force vectors from velocity vectors?

    - by PixelRouter
    I'm making a 2D shooter a la Geometry Wars. I've got my own simple physics at work driving the background grid and all my entities. To move anything in the world I apply a Vector2d force to it. The 'engine' calculates the resulting acceleration and therefore the velocity. I am trying to port some code I found which implements the classic 'Boids' flocking algorithm, but the code I have works by calculating the boids' velocities directly, so if I use it as is, it bypasses my physics engine. How can I translate the velocity vectors into force vectors that I can apply to the boids and which will result in the proper velocities via my physics engine?
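    Not from the post: a hedged sketch of the usual conversion, using F = m * dv / dt so that applying the force for one physics step reproduces the velocity the boids code asked for. The boid members, desiredVelocity and gameTime are assumed names:

        // desiredVelocity is whatever the flocking code computed for this boid this frame.
        Vector2 deltaV = desiredVelocity - boid.Velocity;         // change the physics step must produce
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

        Vector2 force = boid.Mass * deltaV / dt;                  // F = m * a, with a = deltaV / dt
        boid.ApplyForce(force);                                   // hand it to the existing physics engine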

    Read the article

  • Do I need a Point and a Vector object? Or just using a Vector object to represent a Point is ok?

    - by JCM
    While structuring the components of an engine that I am developing along with a friend (for learning purposes), I ran into this doubt. Initially we had a Point constructor, like the following: var Point = function( x, y ) { this.x = x; this.y = y; }; But then we started to add some vector math to it, and then decided to rename it to Vector2d. Now, though, some methods are a bit confusing (at least in my opinion), such as the following, which is used to make a line: //before the renaming of Point to Vector2, the parameters were startingPoint and endingPoint Geometry.Line = function( startingVector, endingVector ) { //... }; Should I make a specific constructor for the Point object, or are there no problems in defining a point as a vector? I know a vector has magnitude and direction, but I see so many people using a vector just to represent the position of an object.

    Read the article

  • cloud/grid computing

    - by tom smith
    Hi guys. I'm apologizing in advance to the guys who will tell me this isn't a tech/server/IT issue! But I've been beating my head against this for a couple of days now. I'm trying to figure out who to talk to, or which company I can approach, to see if there are grid/cloud computing companies who have programs set up to deal with colleges. I'm dealing with a compsci course, and we're looking at a few projects that would require a great deal of computing/computational resources. But in calling different companies (HP/Rackspace/etc.) I'm either not getting through to the right departments, or to the right people, or the companies just aren't set up for this. There are plenty of companies who have discounts for desktop software/hardware, but who in the business deals with discounts/offerings for cloud/grid computing solutions? Any thoughts/pointers would be greatly appreciated. Thanks -tom

    Read the article

  • Kill program after it outputs a given line, from a shell script

    - by Paul
    Background: I am writing a test script for a piece of computational biology software. The software I am testing can take days or even weeks to run, so it has recovery functionality built in for the case of system crashes or power failures. I am trying to figure out how to test the recovery system. Specifically, I can't figure out a way to "crash" the program in a controlled manner. I was thinking of somehow timing a SIGKILL instruction to run after some amount of time. This is probably not ideal, as the test case isn't guaranteed to run at the same speed every time (it runs in a shared environment), so comparing the logs to the desired output would be difficult. This software DOES print a line for each section of analysis it completes. Question: is there a good/elegant way (in a shell script) to capture output from a program and then kill the program when a given line (or number of lines) is output by the program?

    Read the article

  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
    We have a number of filesystems for our computational cluster, with a lot of users that store a lot of really large files. We'd like to monitor the filesystems and help optimize their usage, as well as plan for expansion. In order to do this, we need some way to monitor how these filesystems are used. Essentially I'd like to know all sorts of statistics about the files: age, frequency of access, last-accessed times, types, sizes. Ideally this information would be available in aggregate form for any directory, so that we could monitor it based on project or user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?

    Read the article
