Search Results

Search found 16473 results on 659 pages for 'game logic'.


  • 3DS Max 2012 OBJ file import missing polygons

    - by Vit
    I started learning OpenGL. I got to a point where I want to import some "real" objects. After Googling, I decided I will go with the OBJ file format for a start, since it is simple to understand and there are plenty of tutorials on how to read it properly. I have access to 3DS Max 2012 from university. So I tried to create a very simple model (just a deformed cube) and export it to an OBJ file with just vertices and triangles for the moment, without textures, so I could examine its structure by myself. But when I imported it right back into 3DS Max from the OBJ file, it rendered somewhat strangely, as if smoothed, and with a light source, even though I have none in the scene. The geometry, its wireframe, is intact, though. So I thought maybe the problem was exporting only vertices and triangles, so I downloaded an Enterprise-D model from the internet, exported it with everything on (normals, textures, everything), and imported it again. Now some polygons are missing. So I want to ask: am I doing something terribly wrong, or is there some incompatibility between the .max and .obj formats, even for a simple textured model without any light sources, animation, etc.? Thanks. Edit: I tried the objects in MeshLab; the first one, the deformed cube, was absolutely OK. But it still bothers me that 3DS Max doesn't render it properly. In the Enterprise-D model, polygons are missing even in MeshLab. I uploaded a rar archive with the .max model of the Enterprise, the same .obj model exported from 3DS Max, and the .obj model of the deformed cube. Download here (2.5 MB, filesonic).

    Read the article

  • Writing to XML issue Unity3D C#

    - by N0xus
    I'm trying to create a tool using Unity to generate an XML file for use in another project. Now, please, before someone suggests I do it in something else: the reason I am using Unity is that it allows me to easily port this to an iPad or other device with next to no extra development. So please don't suggest using something else. At the moment, I am using the following code to write to my XML file:

        public void WriteXMLFile()
        {
            string _filePath = Application.dataPath + @"/Data/HV_DarkRideSettings.xml";
            XmlDocument _xmlDoc = new XmlDocument();

            if (File.Exists(_filePath))
            {
                _xmlDoc.Load(_filePath);

                XmlNode rootNode = _xmlDoc.CreateElement("Settings");
                // _xmlDoc.AppendChild(rootNode);
                rootNode.RemoveAll();

                XmlElement _cornerNode = _xmlDoc.CreateElement("Screen_Corners");
                _xmlDoc.DocumentElement.PrependChild(_cornerNode);

                #region Top Left Corners XYZ Values
                // indent top left corner value to screen corners
                XmlElement _topLeftNode = _xmlDoc.CreateElement("Top_Left");
                _cornerNode.AppendChild(_topLeftNode);

                // set the XYZ of the top left values
                XmlElement _topLeftXNode = _xmlDoc.CreateElement("TopLeftX");
                XmlElement _topLeftYNode = _xmlDoc.CreateElement("TopLeftY");
                XmlElement _topLeftZNode = _xmlDoc.CreateElement("TopLeftZ");

                // indent these values to the top_left value in XML
                _topLeftNode.AppendChild(_topLeftXNode);
                _topLeftNode.AppendChild(_topLeftYNode);
                _topLeftNode.AppendChild(_topLeftZNode);
                #endregion

                #region Bottom Left Corners XYZ Values
                // indent bottom left corner value to screen corners
                XmlElement _bottomLeftNode = _xmlDoc.CreateElement("Bottom_Left");
                _cornerNode.AppendChild(_bottomLeftNode);

                // set the XYZ of the bottom left values
                XmlElement _bottomLeftXNode = _xmlDoc.CreateElement("BottomLeftX");
                XmlElement _bottomLeftYNode = _xmlDoc.CreateElement("BottomLeftY");
                XmlElement _bottomLeftZNode = _xmlDoc.CreateElement("BottomLeftZ");

                // indent these values to the bottom_left value in XML
                _bottomLeftNode.AppendChild(_bottomLeftXNode);
                _bottomLeftNode.AppendChild(_bottomLeftYNode);
                _bottomLeftNode.AppendChild(_bottomLeftZNode);
                #endregion

                #region Bottom Right Corners XYZ Values
                // indent bottom right corner value to screen corners
                XmlElement _bottomRightNode = _xmlDoc.CreateElement("Bottom_Right");
                _cornerNode.AppendChild(_bottomRightNode);

                // set the XYZ of the bottom right values
                XmlElement _bottomRightXNode = _xmlDoc.CreateElement("BottomRightX");
                XmlElement _bottomRightYNode = _xmlDoc.CreateElement("BottomRightY");
                XmlElement _bottomRightZNode = _xmlDoc.CreateElement("BottomRightZ");

                // indent these values to the bottom_right value in XML
                _bottomRightNode.AppendChild(_bottomRightXNode);
                _bottomRightNode.AppendChild(_bottomRightYNode);
                _bottomRightNode.AppendChild(_bottomRightZNode);
                #endregion

                _xmlDoc.Save(_filePath);
            }
        }

    This generates the following XML file:

        <Settings>
          <Screen_Corners>
            <Top_Left>
              <TopLeftX />
              <TopLeftY />
              <TopLeftZ />
            </Top_Left>
            <Bottom_Left>
              <BottomLeftX />
              <BottomLeftY />
              <BottomLeftZ />
            </Bottom_Left>
            <Bottom_Right>
              <BottomRightX />
              <BottomRightY />
              <BottomRightZ />
            </Bottom_Right>
          </Screen_Corners>
        </Settings>

    Which is exactly what I want. However, each time I press the button that has the WriteXMLFile() method attached to it, it writes the entire lot again, like so:

        <Settings>
          <Screen_Corners>
            ... (same contents as above) ...
          </Screen_Corners>
          <Screen_Corners>
            ... (same contents as above) ...
          </Screen_Corners>
          <Screen_Corners>
            ... (same contents as above) ...
          </Screen_Corners>
        </Settings>

    Now, I've written XML files before using WinForms, and the WriteXMLFile function was exactly the same. However, in my WinForm, no matter how many times I press the WriteXMLFile button, it doesn't write the whole lot again. Is there something I'm doing wrong, or should change, to stop this from happening?
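    For context, a likely cause worth noting: CreateElement only creates a node, so the rootNode above is never attached (its AppendChild is commented out) and rootNode.RemoveAll() clears a detached node, while a fresh <Screen_Corners> element is prepended to the document on every call. Below is a minimal sketch of one possible fix, removing the previously written element before rebuilding it; WriteXMLFileOnce and its body are an illustration, not the asker's code:

        // Sketch: clear any previously written <Screen_Corners> element before
        // rebuilding it, so repeated button presses overwrite rather than append.
        // Assumes the same _filePath and System.Xml / System.IO usings as above.
        public void WriteXMLFileOnce()
        {
            string _filePath = Application.dataPath + @"/Data/HV_DarkRideSettings.xml";
            if (!File.Exists(_filePath)) return;

            XmlDocument _xmlDoc = new XmlDocument();
            _xmlDoc.Load(_filePath);
            XmlElement root = _xmlDoc.DocumentElement; // the existing <Settings> element

            // Remove the <Screen_Corners> node written by a previous press, if any.
            XmlNode old = root.SelectSingleNode("Screen_Corners");
            if (old != null)
                root.RemoveChild(old);

            // Rebuild the element exactly as before.
            XmlElement corners = _xmlDoc.CreateElement("Screen_Corners");
            root.PrependChild(corners);
            // ... append the Top_Left / Bottom_Left / Bottom_Right children here ...

            _xmlDoc.Save(_filePath);
        }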

    Read the article

  • Pygame Import Error, Python 3.2

    - by Treb Nicholas
    I'm having an issue with the Pygame module. I run Python 3.2 and installed the corresponding Pygame build, but now when I try to import it in IDLE, it gives me this error:

        >>> import pygame
        Traceback (most recent call last):
          File "<pyshell#0>", line 1, in <module>
            import pygame
          File "C:\Python32\lib\site-packages\pygame\__init__.py", line 95, in <module>
            from pygame.base import *
        ImportError: DLL load failed: %1 is not a valid Win32 application.

    Any help will be appreciated.

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. MyNes plays from a ROM, not NSF, so I have to rip out the APU and get it to play NSF files. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

    1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
    2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
    3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
    4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

    As an aside, I don't really understand the loading / playing sections of the spec I linked at all. They seem to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
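    On question 4: the user-created-sound route would look roughly like the sketch below in the FMOD Ex .NET wrapper. Treat it as an assumption-laden outline, not a working player: it assumes the wrapper mirrors the C API, and RunApuAndFillBuffer is a hypothetical stand-in for stepping the emulated APU until it has produced datalen bytes of PCM.

        // Sketch only (FMOD Ex .NET wrapper assumed): a user-created looping
        // stream whose samples are pulled from the emulated APU on demand.
        void StartNsfStream(FMOD.System system)
        {
            FMOD.CREATESOUNDEXINFO exinfo = new FMOD.CREATESOUNDEXINFO();
            exinfo.cbsize = System.Runtime.InteropServices.Marshal.SizeOf(exinfo);
            exinfo.numchannels = 1;                 // NES APU mixes to mono
            exinfo.defaultfrequency = 44100;
            exinfo.format = FMOD.SOUND_FORMAT.PCM8; // matches MyNes' PCM8 output
            exinfo.length = 44100;                  // nominal 1-second ring buffer
            exinfo.decodebuffersize = 4410;         // samples requested per callback
            exinfo.pcmreadcallback = PcmRead;       // FMOD pulls audio from here

            FMOD.Sound sound = null;
            FMOD.Channel channel = null;
            system.createSound((string)null,
                               FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL,
                               ref exinfo, ref sound);
            system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
        }

        // The callback just forwards the request to the emulated APU.
        static FMOD.RESULT PcmRead(IntPtr soundraw, IntPtr data, uint datalen)
        {
            RunApuAndFillBuffer(data, datalen); // hypothetical: step the APU, write PCM
            return FMOD.RESULT.OK;
        }

    This sidesteps question 2 in the usual way: instead of precomputing all the music data, the emulated 6502/APU is run just far enough, inside each callback, to generate the next buffer of samples.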

    Read the article

  • What can I use as a 3D tile map editor?

    - by alfa64
    I need to make grid-based levels with 3D models for a dungeon crawler (a recent example is Legend of Grimrock), but I need to have several layers and to place entities with properties such as position, angle, etc. I was considering Tiled, using layers as heights for each level, but it's very hard to work with and visualize. What can I use for this purpose? The output format needs to be JSON, XML, or something I can use in my engine. Ideally I'd want something like Tiled with a 3D visualization/edit mode and support for loading models, or at least some visual representation of them.

    Read the article

  • How to draw image in memory manually in pyglet?

    - by Mossen
    In pyglet, I want to create an image buffer in memory, then set the bytes manually, then draw it. I tried making a 3x3 red square like this in my draw() function:

        imageData = pyglet.image.ImageData(3, 3, 'RGB', [
            1, 0, 0,  1, 0, 0,  1, 0, 0,
            1, 0, 0,  1, 0, 0,  1, 0, 0,
            1, 0, 0,  1, 0, 0,  1, 0, 0,
        ])
        imageData.blit(10, 10)

    ...but at runtime, Python complains:

        ctypes.ArgumentError: argument 9: <type 'exceptions.TypeError'>: wrong type

    Is this the right approach? Am I missing a step? How can I fix this?

    Read the article

  • How to rotate a set of points on z = 0 plane in 3-D, preserving pairwise distances?

    - by cagirici
    I have a set of points double[] n on the plane z = 0, and another set of points double[] m on the plane ax + by + cz + d = 0. The length of n is equal to the length of m. Also, the Euclidean distance between n[i] and n[j] is equal to the Euclidean distance between m[i] and m[j]. I want to rotate n[] in 3-D such that n[i] = m[i] holds for all i. In other words, I want to map one plane onto another, preserving the pairwise distances. Here's my code in Java, but it does not help much:

        double[] rotate(double[] point, double[] currentEquation, double[] targetEquation) {
            double[] currentNormal = new double[]{currentEquation[0], currentEquation[1], currentEquation[2]};
            double[] targetNormal = new double[]{targetEquation[0], targetEquation[1], targetEquation[2]};
            targetNormal = normalize(targetNormal);

            double angle = angleBetween(currentNormal, targetNormal);
            double[] axis = cross(targetNormal, currentNormal);
            double[][] R = getRotationMatrix(axis, angle);
            return rotated;
        }

        double[][] getRotationMatrix(double[] axis, double angle) {
            axis = normalize(axis);
            double cA = (float)Math.cos(angle);
            double sA = (float)Math.sin(angle);
            Matrix I = Matrix.identity(3, 3);
            Matrix a = new Matrix(axis, 3);
            Matrix aT = a.transpose();
            Matrix a2 = a.times(aT);
            double[][] B = {
                {0, axis[2], -1*axis[1]},
                {-1*axis[2], 0, axis[0]},
                {axis[1], -1*axis[0], 0}
            };
            Matrix A = new Matrix(B);
            Matrix R = I.minus(a2);
            R = R.times(cA);
            R = R.plus(a2);
            R = R.plus(A.times(sA));
            return R.getArray();
        }

    This is what I get: the point set on the right side is actually part of the point set on the left side, but they end up on another plane. Here's a 2-D representation of what I am trying to do: there are two lines; the line on the bottom is the line I have, and the line on the top is the target line. The distances are preserved (a, b and c). Edit: I have tried both methods written in the answers, and they both fail (I guess).

    Method of Martijn Courteaux:

        public static double[][] getRotationMatrix(double[] v0, double[] v1, double[] v2,
                                                   double[] u0, double[] u1, double[] u2) {
            RealMatrix M1 = new Array2DRowRealMatrix(new double[][]{
                {1, 0, 0, -1*v0[0]},
                {0, 1, 0, -1*v0[1]},
                {0, 0, 1, 0},
                {0, 0, 0, 1}
            });
            RealMatrix M2 = new Array2DRowRealMatrix(new double[][]{
                {1, 0, 0, -1*u0[0]},
                {0, 1, 0, -1*u0[1]},
                {0, 0, 1, -1*u0[2]},
                {0, 0, 0, 1}
            });
            Vector3D imX = new Vector3D(
                (v0[1] - v1[1])*(u2[0] - u0[0]) - (v0[1] - v2[1])*(u1[0] - u0[0]),
                (v0[1] - v1[1])*(u2[1] - u0[1]) - (v0[1] - v2[1])*(u1[1] - u0[1]),
                (v0[1] - v1[1])*(u2[2] - u0[2]) - (v0[1] - v2[1])*(u1[2] - u0[2])
            ).scalarMultiply(1/((v0[0]*v1[1])-(v0[0]*v2[1])-(v1[0]*v0[1])+(v1[0]*v2[1])+(v2[0]*v0[1])-(v2[0]*v1[1])));
            Vector3D imZ = new Vector3D(findEquation(u0, u1, u2));
            Vector3D imY = Vector3D.crossProduct(imZ, imX);
            double[] imXn = imX.normalize().toArray();
            double[] imYn = imY.normalize().toArray();
            double[] imZn = imZ.normalize().toArray();
            RealMatrix M = new Array2DRowRealMatrix(new double[][]{
                {imXn[0], imXn[1], imXn[2], 0},
                {imYn[0], imYn[1], imYn[2], 0},
                {imZn[0], imZn[1], imZn[2], 0},
                {0, 0, 0, 1}
            });
            RealMatrix rotationMatrix = MatrixUtils.inverse(M2).multiply(M).multiply(M1);
            return rotationMatrix.getData();
        }

    Method of Sam Hocevar:

        static double[][] makeMatrix(double[] p1, double[] p2, double[] p3) {
            double[] v1 = normalize(difference(p2, p1));
            double[] v2 = normalize(cross(difference(p3, p1), difference(p2, p1)));
            double[] v3 = cross(v1, v2);
            double[][] M = {
                { v1[0], v2[0], v3[0], p1[0] },
                { v1[1], v2[1], v3[1], p1[1] },
                { v1[2], v2[2], v3[2], p1[2] },
                { 0.0,   0.0,   0.0,   1.0  }
            };
            return M;
        }

        static double[][] createTransform(double[] A, double[] B, double[] C,
                                          double[] P, double[] Q, double[] R) {
            RealMatrix c = new Array2DRowRealMatrix(makeMatrix(A, B, C));
            RealMatrix t = new Array2DRowRealMatrix(makeMatrix(P, Q, R));
            return MatrixUtils.inverse(c).multiply(t).getData();
        }

    The blue points are the calculated points. The black lines indicate the offset from the real position.
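    For the record, both answers are instances of the same construction, which can be stated compactly as a sketch of the math (independent of either answer's exact sign and axis conventions): build an affine frame from three non-collinear corresponding points on each plane,

        \[
        M(p_1,p_2,p_3)=\begin{pmatrix} v_1 & v_2 & v_3 & p_1 \\ 0 & 0 & 0 & 1 \end{pmatrix},
        \qquad
        v_1=\frac{p_2-p_1}{\lVert p_2-p_1\rVert},\quad
        v_2=\frac{(p_3-p_1)\times(p_2-p_1)}{\lVert (p_3-p_1)\times(p_2-p_1)\rVert},\quad
        v_3=v_1\times v_2,
        \]

    and chain the two changes of basis:

        \[
        T = M(m_1,m_2,m_3)\,M(n_1,n_2,n_3)^{-1},\qquad T\,n_i = m_i\ \text{for } i=1,2,3.
        \]

    Because each M has an orthonormal 3x3 block plus a translation, both factors are rigid motions, so T preserves pairwise distances; and since the point sets have matching pairwise distances, mapping three non-collinear points correctly forces T n[i] = m[i] for every i (up to the mirror ambiguity, which flips with the orientation of the frame).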

    Read the article

  • Beat detection, weird detection

    - by Quincy
    I made this sound analyzer class to detect beats in songs: (put it on pastebin because of its size: pastebin.com/8PdgZPP3) but for some reason it's only detecting beats from 637 sec to around 641 sec, and I have no idea why. I know the beats are being inserted from multiple bands, since I am finding duplicates, and it seems as if it's assigning a beat to each instant-energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf So why won't the beats register properly?

    Read the article

  • View matrix in OpenGL

    - by user5584
    Hi! Sorry for my clumsy question, but I don't know where I am going wrong in creating a view matrix. I have the following code:

        createMatrix(vec4f( xAxis.x,   xAxis.y,   xAxis.z,   dot(xAxis,eye)),
                     vec4f( yAxis.x_,  yAxis.y_,  yAxis.z_,  dot(yAxis,eye)),
                     vec4f(-zAxis.x_, -zAxis.y_, -zAxis.z_, -dot(zAxis,eye)),
                     vec4f(0, 0, 0, 1)); // column1, column2,...

    I have tried to transpose it, but with no success. I have also tried to use gluLookAt(...), with success. On the reference page I checked the remarks about the matrix to be created, and it seems the same as mine. Where am I going wrong?
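    For comparison, and under the assumption that zAxis is the normalized forward (view) direction and eye is the camera position, the matrix gluLookAt effectively builds is, written row-wise:

        \[
        V=\begin{pmatrix}
         x_x & x_y & x_z & -\,x\cdot eye\\
         y_x & y_y & y_z & -\,y\cdot eye\\
        -z_x & -z_y & -z_z & \;\;z\cdot eye\\
         0 & 0 & 0 & 1
        \end{pmatrix}
        \]

    Note two differences from the code above: the first two dot products carry a minus sign (the view matrix translates the world by -eye before rotating), and these entries belong in the fourth column of the row-major matrix, so passing them as the fourth components of column vectors additionally requires a transpose.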

    Read the article

  • Per Pixel Collision Detection

    - by CJ Cohorst
    Just a quick question. I have this collision detection code:

        public bool PerPixelCollision(Player player, Game1 dog)
        {
            Matrix atob = player.Transform * Matrix.Invert(dog.Transform);
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, atob);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, atob);
            Vector2 iBPos = Vector2.Transform(Vector2.Zero, atob);

            for (int deltax = 0; deltax < player.playerTexture.Width; deltax++)
            {
                Vector2 bpos = iBPos;
                for (int deltay = 0; deltay < player.playerTexture.Height; deltay++)
                {
                    int bx = (int)bpos.X;
                    int by = (int)bpos.Y;
                    if (bx >= 0 && bx < dog.dogTexture.Width && by >= 0 && by < dog.dogTexture.Height)
                    {
                        if (player.TextureData[deltax + deltay * player.playerTexture.Width].A > 150
                            && dog.TextureData[bx + by * dog.Texture.Width].A > 150)
                        {
                            return true;
                        }
                    }
                    bpos += stepY;
                }
                iBPos += stepX;
            }
            return false;
        }

    What I want to know is where to put the code where something happens. For example, I want to put in player.playerPosition.X -= 200 just as a test, but I don't know where to put it. I tried putting it under the return true and above it; under it, the compiler said the code was unreachable, and above it nothing happened. I also tried putting it by bpos += stepY;, but that didn't work either. Where do I put the code? Any help is appreciated. Thanks in advance!
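    For what it's worth, a common pattern is to keep the detection method pure (it only answers yes/no) and put the response at the call site, once per frame. A minimal sketch, assuming a standard XNA Game.Update and the field names used in the question:

        // Sketch: react to the collision in Update, not inside the pixel loop.
        // The method already returns early on the first overlapping pixel;
        // code after `return true` is unreachable, and code inside the loop
        // would run per-pixel rather than once per frame.
        protected override void Update(GameTime gameTime)
        {
            if (PerPixelCollision(player, dog))
            {
                // Collision response goes here, e.g. the test push-back:
                player.playerPosition.X -= 200;
            }
            base.Update(gameTime);
        }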

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at a different location. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix; there's only one color). Whenever I try to render something, I only end up with slivers of color! I've only done 3D rendering in OpenGL before, so I'm having issues with 2D stuff. Here's my basic rendering loop, simplified a bit (I'm using the matrix manipulation methods provided by android.opengl.Matrix, and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()):

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);
        program.setUniformMatrix4f("Projection", projection);

    At this point, I render the quads (this is repeated for each quad):

        Matrix.setIdentityM(modelview, 0);
        Matrix.translateM(modelview, 0, quadX, quadY, 0);
        program.setUniformMatrix4f("ModelView", modelview);
        quad.render(); // calls glDrawArrays

    and all I see is a sliver of the color each quad is! I'm at my wits' end here; I've tried everything I can think of, and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads, but they would get totally skewed whenever they got moved, which makes sense if I correctly understand the way frustum works (it's more for 3D rendering, anyway). If it makes any difference, I defined my viewport with GLES20.glViewport(0, 0, windowWidth, windowHeight); where windowWidth and windowHeight are the same values that are passed to orthoM. It might be worth noting that the android.opengl.Matrix methods take an offset as the second parameter so that multiple matrices can be shoved into one array; that's what the first 0 is for. For reference, here's my vertex shader code:

        uniform mat4 ModelView;
        uniform mat4 Projection;
        attribute vec4 vPosition;

        void main() {
            mat4 mvp = Projection * ModelView;
            gl_Position = vPosition * mvp;
        }

    I tried swapping Projection * ModelView with ModelView * Projection, but now I just get some really funky looking shapes... EDIT: Okay, I finally figured it out! (Note: since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer to the question.) I changed

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);

    to

        float ratio = windowWidth / windowHeight;
        Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1);

    I then had to scale my projection matrix to make it a lot smaller with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);. I then added an offset to the modelview translations to simulate a camera, so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);). Thanks for the help, all!

    Read the article

  • Texture mapping on gluDisk

    - by Marnix
    I'm trying to map a brick texture onto the edge of a fountain, and I'm using gluDisk for that. How can I generate the right texture coordinates for the disk? My code looks like this, and I have only found a mode that generates the texture relative to the camera (sphere mapping). I want the brick texture to run along the rim of the fountain, but gluDisk does a linear mapping. How do I get a circular mapping?

        void Fountain::Draw()
        {
            glPushMatrix(); // push 1
            this->ApplyWorldMatrixGL();
            glEnable(GL_TEXTURE_2D); // enable texturing

            glPushMatrix(); // push 2
            glRotatef(90, -1, 0, 0); // rotate 90 for the quadric
            // also drawing more here...

            // stone texture
            glBindTexture(GL_TEXTURE_2D, texIDs[0]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

            glPushMatrix(); // push 3
            glTranslatef(0, 0, height);

            // spherical texture generation
            // this piece of code doesn't work as I intended
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
            glEnable(GL_TEXTURE_GEN_S);
            glEnable(GL_TEXTURE_GEN_T);

            GLUquadric *tub = gluNewQuadric();
            gluQuadricTexture(tub, GL_TRUE);
            gluDisk(tub, radius, outerR, nrVertices, nrVertices);
            gluDeleteQuadric(tub);

            glDisable(GL_TEXTURE_GEN_S);
            glDisable(GL_TEXTURE_GEN_T);
            glPopMatrix(); // pop 3

            // more drawing here...
            glPopMatrix(); // pop 2

            // more drawing here...
            glPopMatrix(); // pop 1
        }

    To refine my question a bit: imagine an image of what it looks like by default (left) versus what I want (right). The texture should fit in the border of the disk, repeated many times around the rim. If this is possible with the texture matrix, then that's fine with me as well.

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a Gaussian blur fragment shader. In it, I thought it would be concise to include two subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one Gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time:

        #version 420

        subroutine vec2 sample_coord_type(int i);
        subroutine uniform sample_coord_type sample_coord;

        in vec2 texcoord;
        out vec3 color;

        uniform sampler2D tex;
        uniform int texture_size;

        const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}});
        const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}});

        subroutine(sample_coord_type) vec2 vertical_coord(int i) {
            return vec2(0.0, offsets[i] / texture_size);
        }

        subroutine(sample_coord_type) vec2 horizontal_coord(int i) {
            //return vec2(offsets[i] / texture_size, 0.0);
            return vec2(0.0, 0.0); // just for testing if this subroutine gets used
        }

        void main(void) {
            color = vec3(0.0);
            for (int i = 0; i < {{NUM_SAMPLES}}; i++) {
                color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i];
                color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i];
            }
        }

    Here is my code for selecting the subroutine:

        blur_program->start();
        blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER);
        blur_program->set_int("texture_size", width);
        blur_program->set_texture("tex", *deferred_output);
        blur_program->draw(); // draws a quad for the fragment shader to run on

    and:

        void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) {
            GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
            GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str());
            glUniformSubroutinesuiv(target, 1, &routine_index);

            // debugging
            int num_subs;
            glGetActiveSubroutineUniformiv(id, target, uniform_index,
                GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs);
            std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n";
        }

    I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because the vertical_coord subroutine is the first listed in the shader code... but no matter, maybe the compiler is switching things around). However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more, whichever subroutine comes first in the shader is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader always blurs vertically. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are two subroutines compatible with my sample_coord subroutine uniform. Just to reiterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?

    Read the article

  • Anti-aliasing works for debug runtime but not retail runtime

    - by DeadMG
    I'm experimenting with setting various graphical settings in my Direct3D9 application, and I'm currently facing a curious problem with anti-aliasing. When running under the debug runtime, AA works as expected, and I don't have any errors or warnings. But when running under the retail runtime, the image isn't anti-aliased at all. I don't get any errors, the device creates and executes just fine. As I honestly have little idea where the problem is, I will simply give a relatively high-level overview of the architecture involved, rather than specific problematic code. Simply put, I render my 3D content to a texture, which I then render to the back buffer. Any suggestions as to where to look?

    Read the article

  • OpenGL - Cascaded shadow mapping - Texture lookup

    - by Silverlan
    I'm trying to implement cascaded shadow mapping in my engine, but I'm somewhat stuck at the last step. For testing purposes I've made sure all cascades encompass my entire scene. The result is currently this: the different intensity of the cascades is not on purpose; it's actually the problem. This is how I do the texture lookup for the shadow maps inside the fragment shader:

        layout(std140) uniform CSM {
            vec4 csmFard;    // far distances for each cascade
            mat4 csmVP[4];   // view-projection matrix
            int numCascades; // number of cascades to use; in this example it's 4
        };

        uniform sampler2DArrayShadow csmTextureArray; // the 4 shadow maps
        in vec4 csmPos[4]; // vertex position in shadow MVP space

        float GetShadowCoefficient()
        {
            int index = numCascades - 1;
            vec4 shadowCoord;
            for(int i = 0; i < numCascades; i++) {
                if(gl_FragCoord.z < csmFard[i]) {
                    shadowCoord = csmPos[i];
                    index = i;
                    break;
                }
            }
            shadowCoord.w = shadowCoord.z;
            shadowCoord.z = float(index);
            shadowCoord.x = shadowCoord.x *0.5f +0.5f;
            shadowCoord.y = shadowCoord.y *0.5f +0.5f;
            return shadow2DArray(csmTextureArray, shadowCoord).x;
        }

    I then use the return value and simply multiply it with the diffuse color. That explains the different intensity of the cascades, since I'm grabbing the depth value directly from the texture. I've tried to do a depth comparison instead, but with limited success:

        [...] // same code as above
        shadowCoord.w = shadowCoord.z;
        shadowCoord.z = float(index);
        shadowCoord.x = shadowCoord.x *0.5f +0.5f;
        shadowCoord.y = shadowCoord.y *0.5f +0.5f;
        float z = shadow2DArray(csmTextureArray, shadowCoord).x;
        if(z < shadowCoord.w)
            return 0.25f;
        return 1.f;

    While this does give me the same shadow value everywhere, it only works for the first cascade; all others are blank. (I colored the cascades because otherwise the transitions wouldn't be visible in this case.) What am I missing here?

    Read the article

  • Camera rotation flicker in OpenGL ES 2.0

    - by seahorse
    I implemented an orbit camera in my own OpenGL ES 2.0 application. I was getting an extensive amount of flicker while rotating the camera using the mouse. When I added the line eglSwapInterval( ..., 0.1); after eglSwapBuffers(), the flicker immediately stopped. I am not able to understand why eglSwapInterval solves the flicker problem. (The FPS of my app prior to eglSwapInterval was around 700 FPS.) (The flicker is NOT due to z-fighting, because I have set the near and far clip planes to 100 and 500.)

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders:

    - a tessellation shader, to use the tessellation feature of DirectX11 :)
    - a regular shader, to show how it would look without tessellation
    - and a text shader, to display debug info such as FPS, model count, etc.

    All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and the regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug info using the text shader. Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the debug text (rendered using the text shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered. This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader, I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx

    P.S. I already checked whether my key inputs interrupt at the wrong time (during render or so), but that seems to be OK.

    Testing procedure:

    - Regular shader without text shader
    - Add text shader to regular shader by key input (works now; I built the text shader back down to only a vertex and pixel shader) (something with the z-buffer is still wrong...)
    - Remove text shader, then change to the tessellation shader by key input
    - Then, if I add the text shader or switch back to the regular shader, the error occurs

    Switching/render shader. Here is the code snippet from Renderer.cpp where I choose the shader according to the boolean m_useTessellationShader:

        if(m_useTessellationShader)
        {
            // Render the model using the tessellation shader
            ecResult = m_ShaderManager->renderTessellationShader(
                m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(),
                worldMatrix, viewMatrix, projectionMatrix,
                textures, texturecount,
                m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
                (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
        }
        else
        {
            // todo: loaded model depends on distance to camera
            // Render the model using the light shader.
            ecResult = m_ShaderManager->renderShader(
                m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level,
                textures, texturecount,
                m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
                worldMatrix, viewMatrix, projectionMatrix);
        }

    And here is the code snippet from Mesh.cpp where I choose the topology according to the boolean useTessellationShader:

        // RenderBuffers is called from the Render function. The purpose of this function is to set
        // the vertex buffer and index buffer as active on the input assembler in the GPU. Once the
        // GPU has an active vertex buffer it can then use the shader to render that buffer.
        void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
        {
            unsigned int stride;
            unsigned int offset;

            // Set vertex buffer stride and offset.
            stride = sizeof(VertexType);
            offset = 0;

            // Set the vertex buffer to active in the input assembler so it can be rendered.
            deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);

            // Set the index buffer to active in the input assembler so it can be rendered.
            deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);

            // Check which shader is used to set the appropriate topology.
            // Set the type of primitive that should be rendered from this vertex buffer.
            if(useTessellationShader)
            {
                deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
            }
            else
            {
                deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            }

            return;
        }

    RenderShader. Could there be a problem with sometimes using only the vertex and pixel shader and, after switching, using the vertex, hull, domain and pixel shader? Here is a little overview of my architecture:

        TextClass: uses font.vs and font.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

        RegularShader: uses vertex.vs and pixel.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

        TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->HSSetShader(m_hullShader, NULL, 0);
            deviceContext->DSSetShader(m_domainShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

    ClearState. I'd like to switch between two shaders, and it seems they have different context parameters, right? The ClearState method says it resets the following parameters to NULL. I found the following in my Direct3D class:

    - depth-stencil state: m_deviceContext->OMSetDepthStencilState
    - rasterizer state: m_deviceContext->RSSetState(m_rasterState);
    - blend state: m_device->CreateBlendState
    - viewports: m_deviceContext->RSSetViewports(1, &viewport);

    I found the following in every shader class:

    - input/output resource slots: deviceContext->PSSetShaderResources
    - shaders: deviceContext->VSSetShader through deviceContext->PSSetShader
    - input layouts: device->CreateInputLayout
    - sampler state: device->CreateSamplerState

    These two I didn't understand; where can I find them?

    - predications: ?
    - scissor rectangles: ?

    Do I need to store them all locally so I can switch between them? It doesn't feel right to reinitialize Direct3D and the shaders on every switch (key input)!

    Read the article

  • Why do my LWJGL fonts have dots and lines around them?

    - by Jordan
    When we render fonts there are weird dots and lines around the text. I have no idea why this would happen. Here is an image of what it looks like: [image]. Our font class looks like this:

        package me.NJ.ComputerTycoon.Font;

        import me.NJ.ComputerTycoon.BaseObjects.UDim2;
        import org.lwjgl.opengl.Display;
        import org.newdawn.slick.Color;
        import org.newdawn.slick.TrueTypeFont;

        public class Font {

            public TrueTypeFont font;
            private int fontSize = 18;
            private String fontName = "Calibri";
            private int fontStyle = java.awt.Font.BOLD;

            public Font(String fontName, int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
                //font.
            }

            public Font(int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font(int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font() {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public void drawString(int x, int y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(int x, int y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(float x, float y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(float x, float y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(UDim2 udim, String s) {
                this.font.drawString(
                    (Display.getWidth() * udim.getX().getScale()) + udim.getX().getOffset(),
                    (Display.getHeight() * udim.getY().getScale()) + udim.getY().getOffset(),
                    s);
            }

            public String getFontName() {
                return this.fontName;
            }

            public int getFontSize() {
                return this.fontSize;
            }

            public TrueTypeFont getFont() {
                return this.font;
            }
        }

    What could be causing this?

    Read the article

  • Which K-factor should be used when players have different K-factor values?

    - by DmitryN
    I am implementing a ranking system based on the Elo rating and cannot figure out one point about the K-factor. If two players with different skills and, therefore, different K-factors are playing, which K-factor should be used when changing their ratings? For example, player A has a rating of 2500 and K-factor 16 (win probability ~75%), while player B has a rating of 2300 and K-factor 24 (win probability ~25%). If player A wins, do I need to use 16 as the K-factor for both players, or 16 for player A and 24 for player B?
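    For reference, standard Elo practice (FIDE and USCF both work this way) updates each player's rating with that player's own K-factor; K scales only that player's update, not the expected score. A worked sketch with the question's numbers, assuming player A wins (so the actual scores are S_A = 1 and S_B = 0):

        \[
        E_A=\frac{1}{1+10^{(R_B-R_A)/400}}=\frac{1}{1+10^{(2300-2500)/400}}\approx 0.76,
        \qquad E_B=1-E_A\approx 0.24
        \]
        \[
        R'_A = R_A + K_A\,(S_A - E_A) = 2500 + 16\,(1-0.76) \approx 2503.8
        \]
        \[
        R'_B = R_B + K_B\,(S_B - E_B) = 2300 + 24\,(0-0.24) \approx 2294.2
        \]

    Note the exchange is no longer zero-sum when the K-factors differ (A gains about 3.8 points while B loses about 5.8); that asymmetry is expected and is the point of per-player K-factors.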

    Read the article

  • Importing an object from Blender into a scene, rotation on X axis?

    - by Arne
    This is my situation: I save the scene with Blender, without exporting or any processing steps. Blender has x right, y up, -z into the scene for the view coordinates (OpenGL). I have x right, y up, -z into the scene for the view coordinates (OpenGL). Blender has the x/y plane and z up as world coordinates. I have the x/y plane and z up as world coordinates. I load the mesh with Assimp directly from the .blend file with absolutely no post-processing. The object is rotated by about π/2 on the x-axis. Why?

    Read the article

  • Generating a normal map from an image with a given albedo map

    - by snape
    I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve this, and apparently CrazyBump is mentioned a lot. I tried it, but it didn't produce the desired effects. Also, I am looking for a method that draws inspiration from an existing research paper, not some closed-source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects because of bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focused my attention on generating an albedo map (the above problem would become more solvable if an albedo map were obtained). Fortunately, I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks!

    Read the article

  • iPhone image asset recommended resolution/dpi/format

    - by Matthew
    I'm learning iPhone development, and a friend will be doing the graphics/animation. I'll be using cocos2d most likely (if that matters). My friend wants to get started on the graphics, and I don't know what image resolution, dpi, or formats are recommended. This probably depends on whether something is a background vs. a small character. Also, I know I read something about using @2x in image file names to support high-res iPhone screens. Does cocos2d prefer a different way? Or is this not something to worry about at this point? What should I know before they start working on the graphics?

    Read the article

  • Rotating a model around its own Y-axis in XNA

    - by ChocoMan
    I'm having trouble with my model rotating around its own Y-axis. The model is a person. When I test my world, the model is loaded at a position of (0, 0, 0). When I rotate my model from there, the model rotates like normal. The problem comes AFTER I move the model to a new position. If I move the model forward, left, etc., and then try to rotate it on its own Y-axis, the model will rotate, but still around the original position, in a circular manner (think of yourself swinging around on a rope, but always facing outward from the center). Does anyone know how to keep the center point of rotation updated?
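    A likely culprit, sketched below: if the world matrix is composed as translation times rotation, the rotation is applied about the world origin after the move, which produces exactly the orbiting described. In XNA's row-vector convention, multiplying the rotation first rotates the model about its own origin and then moves it. A minimal sketch; modelRotationY, modelPosition, view and projection are assumed names, not the asker's code:

        // Sketch: rotate first (about the model's own origin), then translate.
        // In XNA, Matrix multiplication reads left-to-right in transform order,
        // so rotation * translation spins the model in place and then moves it.
        Matrix world = Matrix.CreateRotationY(modelRotationY)
                     * Matrix.CreateTranslation(modelPosition);

        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }

    Writing the translation on the left instead (CreateTranslation * CreateRotationY) reproduces the rope-swing behavior, since every vertex is moved away from the origin before the rotation is applied.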

    Read the article

  • Get Unity to read in an object's name without the need to hard-code

    - by N0xus
    I'm trying to get away from having to hard-code the names of objects I want my code to use. For example, I'm used to doing it this way:

        TextAsset test = new TextAsset();
        test = (TextAsset)Resources.Load("test.txt", typeof(TextAsset));

    What I want to know is: is there a way so that when I drag my test.txt file onto my object in Unity, my code automatically gets the name of that object? I want to do this so that once I write the code, I don't need to go back in and change it should I wish to re-use it.
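    One common Unity idiom for this is to skip Resources.Load entirely: expose a serialized field, drag the asset onto the component in the Inspector, and read the asset's own name at runtime. A minimal sketch; DataLoader is a hypothetical component, not the asker's code:

        using UnityEngine;

        // Sketch: drag any TextAsset onto the `data` slot in the Inspector.
        // The script then reads both the contents and the asset's own name
        // without hard-coding either.
        public class DataLoader : MonoBehaviour
        {
            public TextAsset data; // assigned by drag-and-drop in the editor

            void Start()
            {
                Debug.Log("Loaded asset: " + data.name); // e.g. "test"
                Debug.Log(data.text);                    // the file contents
            }
        }

    The same pattern works for any asset type (GameObject, Texture2D, AudioClip, ...), which is why it tends to replace name-based loading for editor-assembled scenes.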

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing ray-casting in a 3D texture until I hit a correct value. The ray-casting is done in a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices with the modelview matrix to get the correct position.

    Vertex shader:

        world_coordinate_ = gl_Vertex;

    Fragment shader:

        vec3 direction = (world_coordinate_.xyz - cameraPosition_);
        direction = normalize(direction);
        for (float k = 0.0; k < steps; k += 1.0) {
            ....
            pos += direction*delta_step;
            float thisLum = texture3D(texture3_, pos).r;
            if(thisLum > surface_)
                ...
        }

    Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is currently written to the depth buffer is the cube coordinate, but I want the value of pos in the 3D texture to be written. So let's say the cube is placed 10 away from the origin in -z, and its size is 10*10*10. My solution that does not work correctly is this:

        pos *= 10;
        pos.z += 10;
        pos.z *= -1;
        vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
        float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
        gl_FragDepth = depth;

    Read the article
