Search Results

Search found 19182 results on 768 pages for 'game engine'.


  • OpenGL ES rotate texture

    - by 0xSina
    I just got started with OpenGL ES... I have a fragment shader:

      const char * sFragment = _STRINGIFY(
          varying highp vec2 coordinate;
          precision mediump float;
          uniform vec4 maskC;
          uniform float threshold;
          uniform sampler2D videoframe;
          uniform sampler2D videosprite;
          uniform vec4 mask;
          uniform vec4 maskB;
          uniform int recording;
          vec3 normalize(vec3 color, float meanr) {
              return color*vec3(0.75 + meanr, 1., 1. - meanr);
          }
          void main() {
              float d;
              float dB;
              float dC;
              float meanr;
              float meanrB;
              float meanrC;
              float minD;
              vec4 pixelColor;
              vec4 spriteColor;
              pixelColor = texture2D(videoframe, coordinate);
              spriteColor = texture2D(videosprite, coordinate);
              meanr = (pixelColor.r + mask.r)/8.;
              meanrB = (pixelColor.r + maskB.r)/8.;
              meanrC = (pixelColor.r + maskC.r)/8.;
              d = distance(normalize(pixelColor.rgb, meanr), normalize(mask.rgb, meanr));
              dB = distance(normalize(pixelColor.rgb, meanrB), normalize(maskB.rgb, meanrB));
              dC = distance(normalize(pixelColor.rgb, meanrC), normalize(maskC.rgb, meanrC));
              minD = min(d, dB);
              minD = min(minD, dC);
              gl_FragColor = spriteColor;
              if (minD > threshold) {
                  gl_FragColor = pixelColor;
              }
          }

    Now, depending on whether recording is 0 or 1, I want to rotate the videosprite sampler 180 degrees (reflection in the x-axis, flip vertically). How can I do that? I found the function glRotatef(), but how do I specify that I want to rotate videosprite and not videoframe? Thanks
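
    A note on the likely direction of a fix (my sketch, not part of the original post): glRotatef() only affects the fixed-function matrix stack, so it cannot rotate one sampler independently of another. The usual approach is to flip the lookup coordinate for videosprite inside the fragment shader, gated on the recording uniform. Assuming the shader source stays in a C string as above, and that the texture coordinates run from 0 to 1, the spriteColor lookup could become something like:

      // Hypothetical replacement for the spriteColor lookup inside sFragment:
      // flip videosprite vertically when recording == 1, leave videoframe alone.
      const char * sSpriteLookup = _STRINGIFY(
          vec2 spriteCoord = coordinate;
          if (recording == 1) {
              spriteCoord.y = 1.0 - spriteCoord.y;
          }
          spriteColor = texture2D(videosprite, spriteCoord);
      );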

    Read the article

  • OpenGL Learning Material (that's up to date)

    - by Sauron
    So I'm sure there are topics on this, but a lot of them list older material. And the last book: http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&qid=1346116133&sr=8-1&keywords=opengl really, really disappointed me. I do not want to use someone else's library to learn this stuff; that bothers me a great deal. So I was hoping there was a newer book that goes into detail and doesn't use some sort of library "hiding" everything from you. Or should I just look at older material? If so, anything that's not "too" out of date. Terrain tutorials are a plus (that's kind of my "goal"). Thanks

    Read the article

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having trouble setting up its values from JS. The structure is as follows:

      struct Light {
          vec4 position;
          vec4 ambient;
          vec4 diffuse;
          vec4 specular;
          vec3 spotDirection;
          float spotCutOff;
          float constantAttenuation;
          float linearAttenuation;
          float quadraticAttenuation;
          float spotExponent;
          float spotLightCosCutOff;
      };
      uniform Light lights[numLights];

    After testing LOTS of things I made it work, but I'm not happy with the code I wrote:

      program.uniform.lights = [];
      program.uniform.lights.push({
          position: "", diffuse: "", specular: "", ambient: "",
          spotDirection: "", spotCutOff: "", constantAttenuation: "",
          linearAttenuation: "", quadraticAttenuation: "",
          spotExponent: "", spotLightCosCutOff: ""
      });
      program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position");
      program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse");
      program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular");
      program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient");
      ... and so on

    I'm sorry for making you look at this code, I know it's horrible but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?

    Read the article

  • OUYA and Unity set up problems

    - by Atkobeau
    I'm having trouble with the Unity / OUYA plugin. I'm using Unity 4 with the latest update on a Windows 7 machine. When I open the starter kit and try to compile the plugin I get the following error:

      Picked up _JAVA_OPTIONS: -Xmx512M

    And if I try to Build and Run I get this error:

      Error building Player: ArgumentException: Illegal characters in path.

    I'm stumped; I've gone through lots of forum posts here and on Stack Overflow and I can't seem to resolve it. My environment variables look like this:

      PATH - C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\tools; C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\platform-tools\
      JAVA_HOME - C:\Program Files (x86)\Java\jdk1.6.0_45\

    Everything in the OUYA Panel is white. Any ideas?

    Read the article

  • Trying to figure out SDL pixel manipulation?

    - by NoobScratcher
    Hello, I've found code that plots a pixel on an SDL screen surface:

      void putpixels(int x, int y, int color) {
          unsigned int *ptr = (unsigned int*)Screen->pixels;
          int lineoffset = y * (Screen->pitch / 4);
          ptr[lineoffset + x] = color;
      }

    But I have no idea what it's actually doing. Here are my thoughts: you make an unsigned int pointer to hold the pixels, then you make another integer to hold the line offset, which is y multiplied by the pitch divided by 4... Why am I dividing by 4, what is the pitch, and why do I multiply by it? Why must I add the line offset to the x value and assign the color to that element? I'm so confused. I found this function here - http://sol.gfxile.net/gp/ch02.html
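
    For what it's worth, here is the same routine with comments spelling out each step - a sketch that assumes Screen is a 32-bit SDL_Surface, which is exactly why the pitch gets divided by 4:

      // Assumes a 32-bit surface: every pixel is 4 bytes, read/written as one unsigned int.
      void putpixel(SDL_Surface *Screen, int x, int y, unsigned int color) {
          // pixels is the raw framebuffer memory, viewed as an array of 32-bit values
          unsigned int *ptr = (unsigned int *)Screen->pixels;
          // pitch is the length of one row in BYTES (it may include padding),
          // so dividing by 4 turns it into the row length in 32-bit PIXELS
          int pixelsPerRow = Screen->pitch / 4;
          // skip y whole rows, step x pixels into that row, and store the color there
          ptr[y * pixelsPerRow + x] = color;
      }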

    Read the article

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content on the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular; this means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine this simple ray-marching pseudocode:

      t = 0.f;
      while (t < maxDist) {
          p = rayStart + rayDir * t;
          d = DistanceFunc(p);
          t += d;
          if (d < epsilon) {
              ... emit p
              return;
          }
      }

    How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?
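
    One detail worth keeping in mind (my summary, not from the post): GPUs run fragments in SIMD groups (warps/wavefronts), so a group pays for its slowest pixel; a variable-length loop is cheap when nearby pixels exit after a similar number of steps and expensive when neighbours diverge wildly. A common formulation is a fixed-count loop with an early break, which shader compilers handle well; a C-style sketch (Vec3 and DistanceFunc stand in for the question's pseudocode, and the same structure ports directly to GLSL/HLSL):

      // Marches at most maxIterations steps. Pixels that converge early just break;
      // the SIMD group moves on once every pixel in it has finished.
      float march(Vec3 rayStart, Vec3 rayDir, float maxDist, float epsilon, int maxIterations) {
          float t = 0.0f;
          for (int i = 0; i < maxIterations; ++i) {
              Vec3 p = rayStart + rayDir * t;
              float d = DistanceFunc(p);          // distance to the nearest surface
              if (d < epsilon || t > maxDist)
                  break;                          // hit, or marched past the far limit
              t += d;
          }
          return t;                               // caller decides hit/miss from t
      }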

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behavior, but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) than of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time but both need to pass it (according to the optimal route found before), and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations for me on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!

    Read the article

  • GLSL billboard move center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take in a quad and rotate it so it always points toward the screen. I am using this vertex shader:

      void main() {
          vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)) +
                        (MV * vec4(vertexPosition.x * 1.0 * bufferMatrix[0][0],
                                   vertexPosition.y * 1.0 * bufferMatrix[1][1],
                                   vertexPosition.z * 1.0 * bufferMatrix[2][2],
                                   0.0));
          UV = UVOffset + vertexUV * UVScale;
          gl_Position = tmpPos;
      }

    BufferMatrix is the model matrix; it is an attribute to support instanced drawing. The problem is best explained through pictures: this is the start position of the camera, and this is the position looking in from 45 degrees to the right. Obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center; how would I do this? What I have been trying so far is:

      mat4 translation = mat4(1.0);
      translation = glm::translate(translation, vec3(pos)*1.f * 2.f);
      translation = glm::scale(translation, vec3(scale, 1.f));
      translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f));

    where translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix..? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets it up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know if the approach with offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.

    Read the article

  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I get working is Zoom. (Cube with x=3f, y=3f, z=1f in center.) And this is the constructor for my IsometricCamera (inherits from ICamera, with methods for Rotation, Movement and Zoom, and properties for the World/View/Projection matrices):

      public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
      {
          matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
          rotation = Vector3.Zero;
          matrix_view = Matrix.CreateScale(zoom) *
                        Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                        Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                        Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                        Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
      }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny; nothing else changes. What is wrong, and how should I create my View matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just adjust the value and then call an update to recreate the View matrix according to the code in the constructor. Currently, in my editor I use MiddleButton + Mouse Movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.

    Read the article

  • How to get warnings when compiling fx files

    - by jdv-Jan de Vaan
    When I compile DirectX shaders (.fx files), I don't see any compiler warnings unless there is also an error in the effect. This happens both when using the offline FXC compiler and when calling SlimDX's CompileEffect (which is what we normally do). I could force warnings to be treated as errors (/WX), but if you enable that, you just get an error that compilation failed, without the warning that caused the problem. So how can I output warnings for shaders that compile properly?
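
    One avenue worth trying (a sketch, not something from the question): the native D3D compiler hands back warning text in its message blob even when compilation succeeds, so if you can call it directly you can print that blob unconditionally. This assumes the d3dcompiler API and an fx_5_0 profile; adjust the profile to whatever your effects actually target:

      #include <d3dcompiler.h>
      #include <cstdio>
      #pragma comment(lib, "d3dcompiler.lib")

      // Compiles an effect from source and always dumps the message blob,
      // which carries warnings as well as errors.
      bool CompileFxWithWarnings(const char* source, size_t length, const char* name)
      {
          ID3DBlob* code = nullptr;
          ID3DBlob* messages = nullptr;
          HRESULT hr = D3DCompile(source, length, name, nullptr, nullptr,
                                  nullptr, "fx_5_0", 0, 0, &code, &messages);
          if (messages) {
              std::printf("%s\n", static_cast<const char*>(messages->GetBufferPointer()));
              messages->Release();
          }
          if (code) code->Release();
          return SUCCEEDED(hr);
      }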

    Read the article

  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this asked, but the constant updating of the OpenGL library throws them all away, and in a month or two the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (DevCpp; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's way out of date! (Although I did pull together a basic window to support an OpenGL canvas.) Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? What I'd like to know is whether or not the SuperBible 5th edition is still up to date. The suggestion to use freeglut that I found said the latest version was 2.6.0, but now it's 2.8.0! Is the OpenGL SuperBible still a good, and fairly up-to-date, place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the DevCpp include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions that I didn't think to ask about since I'm only just beginning? @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of directly accessing OpenGL. I just think that, for a beginner, it would be easier for me to program and get good results, while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors since GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.
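
    On the freeglut part: dropping the headers into the DevCpp include directory generally works as long as the matching library and DLL are also in place, and very little code is needed to confirm the setup. A minimal sketch (my example, assuming freeglut and its import library are on the include/lib paths):

      #include <GL/freeglut.h>

      static void display(void) {
          glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
          glutSwapBuffers();
      }

      int main(int argc, char** argv) {
          glutInit(&argc, argv);
          glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
          glutInitWindowSize(800, 600);
          glutCreateWindow("freeglut smoke test");
          glutDisplayFunc(display);
          glutMainLoop();   // runs until the window is closed
          return 0;
      }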

    Read the article

  • Comparing angles and working out the difference

    - by Thomas O
    I want to compare angles and get an idea of the distance between them. For this application I'm working in degrees, but it would also work for radians and grads. The problem with angles is that they depend on modular arithmetic, i.e. 0-360 degrees. Say one angle is at 15 degrees and one is at 45. The difference is 30 degrees, and the 45-degree angle is greater than the 15-degree one. But this breaks down when you have, say, 345 degrees and 30 degrees. Although they compare properly, the difference between them is 315 degrees instead of the correct 45 degrees. How can I solve this? I could write algorithmic code:

      if (angle1 > angle2)
          delta_theta = 360 - angle2 - angle1;
      else
          delta_theta = angle2 - angle1;

    But I'd prefer a solution that avoids compares/branches and relies entirely on arithmetic.
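
    A branch-free way to do this (my sketch, not from the question): shift the raw difference into a single 360-degree period with modular arithmetic, then re-center it around zero. In C++, for angles given in degrees:

      #include <cmath>

      // Smallest signed difference a - b, wrapped into [-180, 180).
      // For inputs in [0, 360) the +540 keeps the fmod argument positive;
      // take std::fabs of the result if you only want the distance.
      double angleDelta(double a, double b) {
          return std::fmod(a - b + 540.0, 360.0) - 180.0;
      }

      // angleDelta(45.0, 15.0)  ==  30.0
      // angleDelta(345.0, 30.0) == -45.0   (i.e. 45 degrees apart)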

    Read the article

  • How to label a cuboid using OpenGL?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with a different name on each side. How is this possible using OpenGL on Android? Please help me out.

      public class MyGLRenderer implements Renderer {
          Context context;
          Cuboid rect;
          private float mCubeRotation;
          // private static float angleCube = 0;     // Rotational angle in degree for cube (NEW)
          // private static float speedCube = -1.5f; // Rotational speed for cube (NEW)

          public MyGLRenderer(Context context) {
              rect = new Cuboid();
              this.context = context;
          }

          public void onDrawFrame(GL10 gl) {
              // TODO Auto-generated method stub
              gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
              gl.glLoadIdentity();                   // Reset the model-view matrix
              gl.glTranslatef(0.2f, 0.0f, -8.0f);    // Translate right and into the screen
              gl.glScalef(0.8f, 0.8f, 0.8f);         // Scale down (NEW)
              gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
              // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW)
              rect.draw(gl);
              mCubeRotation -= 0.15f;
              // angleCube += speedCube;
          }

          public void onSurfaceChanged(GL10 gl, int width, int height) {
              // TODO Auto-generated method stub
              if (height == 0) height = 1;           // To prevent divide by zero
              float aspect = (float)width / height;
              // Set the viewport (display area) to cover the entire window
              gl.glViewport(0, 0, width, height);
              // Setup perspective projection, with aspect ratio matches viewport
              gl.glMatrixMode(GL10.GL_PROJECTION);   // Select projection matrix
              gl.glLoadIdentity();                   // Reset projection matrix
              // Use perspective projection
              GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f);
              gl.glMatrixMode(GL10.GL_MODELVIEW);    // Select model-view matrix
              gl.glLoadIdentity();                   // Reset
          }

          public void onSurfaceCreated(GL10 gl, EGLConfig config) {
              // TODO Auto-generated method stub
              gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black
              gl.glClearDepthf(1.0f);                  // Set depth's clear-value to farthest
              gl.glEnable(GL10.GL_DEPTH_TEST);         // Enables depth-buffer for hidden surface removal
              gl.glDepthFunc(GL10.GL_LEQUAL);          // The type of depth testing to do
              gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view
              gl.glShadeModel(GL10.GL_SMOOTH);         // Enable smooth shading of color
              gl.glDisable(GL10.GL_DITHER);            // Disable dithering for better performance
          }
      }

      public class Cuboid {
          private FloatBuffer mVertexBuffer;
          private FloatBuffer mColorBuffer;
          private ByteBuffer mIndexBuffer;

          private float vertices[] = { // width, height, depth
              -2.5f, -1.0f, -1.0f,
               1.0f, -1.0f, -1.0f,
               1.0f,  1.0f, -1.0f,
              -2.5f,  1.0f, -1.0f,
              -2.5f, -1.0f,  1.0f,
               1.0f, -1.0f,  1.0f,
               1.0f,  1.0f,  1.0f,
              -2.5f,  1.0f,  1.0f
          };

          private float colors[] = { // R,G,B,A COLOR
              0.0f, 1.0f, 0.0f, 1.0f,
              0.0f, 1.0f, 0.0f, 1.0f,
              1.0f, 0.5f, 0.0f, 1.0f,
              1.0f, 0.5f, 0.0f, 1.0f,
              1.0f, 0.0f, 0.0f, 1.0f,
              1.0f, 0.0f, 0.0f, 1.0f,
              0.0f, 0.0f, 1.0f, 1.0f,
              1.0f, 0.0f, 1.0f, 1.0f
          };

          private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES
              0, 4, 5,   0, 5, 1,
              1, 5, 6,   1, 6, 2,
              2, 6, 7,   2, 7, 3,
              3, 7, 4,   3, 4, 0,
              4, 7, 6,   4, 6, 5,
              3, 0, 1,   3, 1, 2
          };

          public Cuboid() {
              ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
              byteBuf.order(ByteOrder.nativeOrder());
              mVertexBuffer = byteBuf.asFloatBuffer();
              mVertexBuffer.put(vertices);
              mVertexBuffer.position(0);

              byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
              byteBuf.order(ByteOrder.nativeOrder());
              mColorBuffer = byteBuf.asFloatBuffer();
              mColorBuffer.put(colors);
              mColorBuffer.position(0);

              mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
              mIndexBuffer.put(indices);
              mIndexBuffer.position(0);
          }

          public void draw(GL10 gl) {
              gl.glFrontFace(GL10.GL_CW);
              gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
              gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
              gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
              gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
              gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
              gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
              gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
          }
      }

      public class Draw3drect extends Activity {
          private GLSurfaceView glView;   // Use GLSurfaceView

          // Call back when the activity is started, to initialize the view
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              glView = new GLSurfaceView(this);            // Allocate a GLSurfaceView
              glView.setRenderer(new MyGLRenderer(this));  // Use a custom renderer
              this.setContentView(glView);                 // This activity sets to GLSurfaceView
          }

          // Call back when the activity is going into the background
          @Override
          protected void onPause() {
              super.onPause();
              glView.onPause();
          }

          // Call back after onPause()
          @Override
          protected void onResume() {
              super.onResume();
              glView.onResume();
          }
      }

    Read the article

  • Logic that can traverse all possible layouts without checking every combination of identical pieces?

    - by George Bailey
    Suppose we have a grid of arbitrary size which is filled by blocks of various widths and heights. There are many 2x2 blocks (meaning they take a total of 4 cells in the grid) and many 3x3 blocks, as well as some 5x4, 4x5, 2x3, etc. I was hoping I could set up a program that would look at all possible layouts, rank them, and find the best one. Simply put, it would look at all possible positions of these blocks and see which setup has the best rank. (The rank is based on how many of these can be connected by a roadway system of 1x1 road blocks, and how many squares are left empty after this is done - the goal is to fit as many blocks as possible with the fewest roads.) My question is: how should I traverse all the possibilities? I could take all the blocks and try them one at a time, but since all 2x2 blocks are equal, and there are a couple dozen of them, there is no point in trying every combination there, as in the following:

      AA BBB
      AA BBB
      CCBBB
      CCEEE
      DD EEE
      DD EEE

    is exactly the same as

      CC EEE
      CC EEE
      AAEEE
      AABBB
      DD BBB
      DD BBB

    You notice that there are 2 3x3 blocks and 3 2x2 blocks in my two examples. Based on the model I have now, the computer would try both of these combinations, as well as many others. The problem is that it is going to try every single possible variation of my couple dozen 2x2 blocks, and that is sorely inefficient. Is there a reasonable way to take out this duplicated work, somehow getting the program to treat all 2x2 blocks as equal/identical, instead of requiring rearranging/swapping of these identical blocks? Can this be done?
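
    One standard trick for exactly this (my sketch, not from the question): stop giving identical blocks identities. Keep only a remaining count per block size and always fill the first empty cell in scan order; then two layouts that differ only by swapping identical pieces are generated once, not once per permutation. A rough, self-contained C++ outline (the grid encoding, the 'road' marker, and rankLayout are assumptions standing in for your own ranking):

      #include <string>
      #include <vector>

      struct PieceType { int w, h, count; };   // e.g. {2, 2, 24}, {3, 3, 5}, ...

      static bool fits(const std::vector<std::string>& g, int r, int c, int w, int h) {
          if (r + h > (int)g.size() || c + w > (int)g[0].size()) return false;
          for (int y = r; y < r + h; ++y)
              for (int x = c; x < c + w; ++x)
                  if (g[y][x] != '.') return false;
          return true;
      }

      static void fill(std::vector<std::string>& g, int r, int c, int w, int h, char ch) {
          for (int y = r; y < r + h; ++y)
              for (int x = c; x < c + w; ++x)
                  g[y][x] = ch;
      }

      // Always works on the first empty ('.') cell in row-major order; identical
      // pieces are just a count, so swapping two 2x2 blocks never creates a "new"
      // layout to explore.  rankLayout is the caller's scoring function.
      static void search(std::vector<std::string>& g, std::vector<PieceType>& types,
                         void (*rankLayout)(const std::vector<std::string>&)) {
          int rows = (int)g.size(), cols = (int)g[0].size(), r = -1, c = -1;
          for (int y = 0; y < rows && r < 0; ++y)
              for (int x = 0; x < cols; ++x)
                  if (g[y][x] == '.') { r = y; c = x; break; }
          if (r < 0) { rankLayout(g); return; }            // grid full: score this layout

          for (auto& t : types) {                          // try each block type at (r, c)
              if (t.count == 0 || !fits(g, r, c, t.w, t.h)) continue;
              --t.count; fill(g, r, c, t.w, t.h, 'B');
              search(g, types, rankLayout);
              ++t.count; fill(g, r, c, t.w, t.h, '.');
          }
          g[r][c] = 'r';                                   // or leave the cell to the road network
          search(g, types, rankLayout);
          g[r][c] = '.';
      }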

    Read the article

  • How can I generate a texture that looks like left-over tea leaves?

    - by Jedidja
    We are working on a project for iPhone and Windows Phone 7 where we'd like to be able to generate tea leaves at the bottom of a cup. It doesn't have to look photo-realistic, and actually cartoon-y is ok. What sort of techniques should we research to accomplish this? Are there any libraries (preferably in C, but we can translate) that would be helpful? Here are some samples pulled from a Google Image search

    Read the article

  • Kinect Hand tracking in xna

    - by N0xus
    I'm trying to create an application using the Kinect to simulate the following project: Kinect Hand Tracking I want my project to have similar usability with the Kinect tracking hand and finger positions for use in a menu system, or to navigate another system. What I would like to know is; is it possible for the exact same to be accomplished in XNA using Kinect? I know that it can be done in Winform / C#, but I know XNA / C# a lot better and would (ideally) prefer to use that.

    Read the article

  • Explicit resource loading in Ogre (Mogre)

    - by sebf
    I am just starting to learn Mogre, and what I would like to do is to be able to load resources 'explicitly' (i.e. I just provide an absolute path instead of using a resource group tied to a directory). This is very different from manually loading resources, which I believe has a very specific meaning in Ogre: building up the object using Ogre's methods. I want to use Ogre's resource management system/resource loading code, but have finer control over which files are loaded and what groups they are in. I remember reading how to do this but cannot find the page again; I think it's possible to do something like:

      1. Declare a resource group
      2. Declare the resource(s) (this is when the actual resource file name is provided)
      3. Initialise the resource group to actually load the resource(s)

    Is this the correct procedure? If so, is there any example code showing how to do this?
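
    That sequence matches how Ogre's ResourceGroupManager is normally driven. A hedged C++ sketch using the native Ogre names (the Mogre wrappers mirror them closely; the group name, path and file here are made up for illustration):

      #include <OgreResourceGroupManager.h>

      void loadSingleMeshExplicitly()
      {
          Ogre::ResourceGroupManager& rgm = Ogre::ResourceGroupManager::getSingleton();

          // 1. Declare a resource group of our own.
          rgm.createResourceGroup("ExplicitGroup");

          // 2. Point the group at the directory holding the file, then declare the
          //    single resource we want (the name is relative to that location).
          rgm.addResourceLocation("/absolute/path/to/models", "FileSystem", "ExplicitGroup");
          rgm.declareResource("ninja.mesh", "Mesh", "ExplicitGroup");

          // 3. Initialise the group (parses the declared resources) and then load it.
          rgm.initialiseResourceGroup("ExplicitGroup");
          rgm.loadResourceGroup("ExplicitGroup");
      }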

    Read the article

  • Fast Java2D translucency

    - by mdriesen
    I'm trying to draw a bunch of translucent circles on a Swing JComponent. This isn't exactly fast, and I was wondering if there is a way to speed it up. My custom JComponent has the following paintComponent method:

      public void paintComponent(Graphics g) {
          Rectangle view = g.getClipBounds();
          VolatileImage image = createVolatileImage(view.width, view.height);
          Graphics2D buffer = image.createGraphics();
          // translate to camera location
          buffer.translate(-cx, -cy);
          // renderables contains all currently visible objects
          for (Renderable r : renderables) {
              r.paint(buffer);
          }
          g.drawImage(image.getSnapshot(), view.x, view.y, this);
      }

    The paint method of my circles is as follows:

      public void paint(Graphics2D graphics) {
          graphics.setPaint(paint);
          graphics.fillOval(x, y, radius, radius);
      }

    The paint is just an RGBA color with a < 255: Color(int r, int g, int b, int a). It works fast enough for opaque objects, but is there a simple way to speed this up for translucent ones?

    Read the article

  • Error loading PCX image in FreeImage library

    - by khanhhh89
    I'm using FreeImage in C++ for loading a texture from a PCX image. My FreeImage code is as follows:

      FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
      //pointer to the image data
      BYTE* bits(0);
      fif = FreeImage_GetFileType(m_fileName.c_str(), 0);
      if (FreeImage_FIFSupportsReading(fif))
          dib = FreeImage_Load(fif, m_fileName.c_str());
      //retrieve the image data
      bits = FreeImage_GetBits(dib);
      //get the image width and height
      width = FreeImage_GetWidth(dib);
      height = FreeImage_GetHeight(dib);

    My problem is that the width and height variables are both 512, while the bits array is an empty string, which makes the following OpenGL call produce a corrupt texture:

      glTexImage2D(m_textureTarget, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, bits);

    While debugging, I noticed that the "fif" variable (which contains the format of the image) is JPEG, while the image is actually PCX. I wonder whether FreeImage is recognizing the wrong format (JPEG instead of PCX), so that the bits array comes back empty. I hope someone can explain this problem. Thanks so much.
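
    A few checks worth adding (my sketch, assuming the stock FreeImage API; none of this comes from the question): fall back to filename-based detection when the signature probe misfires, verify the load actually produced a bitmap, and convert to a known 32-bit layout so the stride and the glTexImage2D format arguments line up:

      FREE_IMAGE_FORMAT fif = FreeImage_GetFileType(m_fileName.c_str(), 0);
      if (fif == FIF_UNKNOWN)                                   // signature probe failed or misfired
          fif = FreeImage_GetFIFFromFilename(m_fileName.c_str());

      FIBITMAP* dib = 0;
      if (fif != FIF_UNKNOWN && FreeImage_FIFSupportsReading(fif))
          dib = FreeImage_Load(fif, m_fileName.c_str());
      if (!dib) {
          // report the failure instead of handing empty data to OpenGL
      }

      FIBITMAP* dib32 = FreeImage_ConvertTo32Bits(dib);         // predictable 32-bit layout
      FreeImage_Unload(dib);

      BYTE* bits = FreeImage_GetBits(dib32);
      unsigned width  = FreeImage_GetWidth(dib32);
      unsigned height = FreeImage_GetHeight(dib32);

      // GL_BGRA matches FreeImage's in-memory channel order on little-endian systems.
      glTexImage2D(m_textureTarget, 0, GL_RGBA, width, height, 0,
                   GL_BGRA, GL_UNSIGNED_BYTE, bits);
      FreeImage_Unload(dib32);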

    Read the article

  • Workaround the flip queue (AKA pre-rendered frames) in OpenGL?

    - by user41500
    It appears that some drivers implement a "flip queue" such that, even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). It is only after this queue is filled that buffer swaps will block to synchronize with vblank. This behavior is detrimental to my application. It creates latency. Does anyone know of a way to disable it or a workaround for dealing with it? The OpenGL Wiki on Swap Interval suggests a call to glFinish after the swap but I've had no such luck with that trick.
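
    One workaround that sometimes helps (a sketch, assuming a GL 3.2 / ARB_sync capable context; it is not from the question): put a fence right after the swap and wait on it before building the next frame, so the CPU can never queue more than one frame ahead of the display regardless of how deep the driver's flip queue is:

      // platformSwapBuffers() is a hypothetical wrapper around SwapBuffers /
      // glXSwapBuffers / eglSwapBuffers - whatever the windowing layer uses.
      void swapWithLatencyCap()
      {
          platformSwapBuffers();

          // Fence after the swap, then block until the GPU has actually consumed it.
          GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
          glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, (GLuint64)100000000); // 100 ms cap
          glDeleteSync(fence);
      }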

    Read the article

  • Circle to Circle collision, checking each circle against all others

    - by user14861
    I'm currently coding a little circle-to-circle collision demo, but I've gotten a little stuck. I think I currently have the code to detect a collision, but I'm not sure how to loop through my list of circles and check them against one another. Collision check code:

      public static Vector2 GetIntersectionDepth(Circle a, Circle b)
      {
          float xValue = a.Center.X - b.Center.X;
          float yValue = a.Center.Y - b.Center.Y;
          Vector2 depth = Vector2.Zero;
          float distance = Vector2.Distance(a.Center, b.Center);
          if (a.Radius + b.Radius > distance && distance > 0f)
          {
              float result = (a.Radius + b.Radius) - distance;
              depth.X = (xValue / distance) * result;
              depth.Y = (yValue / distance) * result;
          }
          return depth;
      }

    Loop-through code:

      Vector2 depth = Vector2.Zero;
      for (int i = 0; i < bounds.Count; i++)
          for (int j = i + 1; j < bounds.Count; j++)
          {
              depth = CircleToCircleIntersection.GetIntersectionDepth(bounds[i], bounds[j]);
          }

    Clearly I'm a complete newbie; I'm wondering if anyone can give any suggestions or point out my errors. Thanks in advance. :)

    Read the article

  • Is it normal to see these Xcode prompts/errors when you deploy to the iOS Simulator from Unity?

    - by Greg
    Just trying out the iOS build process... Is it normal to see:

    Q1 - "upgrade to latest project format - project currently in Xcode 3.1 format, this will upgrade to 3.2" - just click OK and let Xcode do its stuff?

    Q2 - same as Q1, but this time for the message "Remove obsolete build settings - will remove the build setting PREBINDING".

    Q3 - also, when deploying to "Latest iOS Simulator" you get the Simulator target produced, but also a non-simulator target which has lots of errors. So I assume you just ignore this target and not use it in Xcode, correct? (i.e. just use the simulator target that is produced)

    Q4 - I get a lot of warnings after the simulator target is built? The program works OK, however...

    (Screenshots were attached for Q1 and Q2, for Q4, for the settings used in Unity, and for the errors seen in Xcode.)

    Read the article

  • Is it safe to use StringToHash() in Unity?

    - by Sebastian Krysmanski
    I'm currently browsing through the Unity tutorials and saw that they recommend using Animator.StringToHash("some string") to create unique ids for animation properties (see here). Since I'm a programmer, to me the word "hash" doesn't represent something unique. As the Java documentation for hashCode() states: "It is not required that if two objects are unequal [...], then calling the hashCode method on each of the two objects must produce distinct integer results." So, according to this (and my definition of "hash"), two strings may have the same hash value. (You can also argue that there is an infinite number of possible strings but only 2^32 possible int values.) So, is there a possibility that StringToHash() will give me an id that actually belongs to another property (than the one I requested the hash for)?
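
    For a rough sense of scale (my own back-of-the-envelope arithmetic, not something from the post or the Unity documentation): if StringToHash behaves like a uniform 32-bit hash, the birthday bound puts the chance of any collision among k distinct parameter names at roughly k(k-1)/2 divided by 2^32:

      k = 100 names     ->   4,950 / 2^32   ~ 1.2e-6   (about 1 in 870,000)
      k = 1,000 names   -> 499,500 / 2^32   ~ 1.2e-4   (about 1 in 8,600)
      k = 10,000 names  ->  ~5.0e7 / 2^32   ~ 1.2e-2   (about 1 in 86)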

    Read the article

  • Why is glVertexAttribDivisor crashing?

    - by 2am
    I am trying to render some trees with instancing. This is rather weird: before going to sleep last night I checked the code and it was running; when I got up this morning it crashes as soon as I call glVertexAttribDivisor, and I haven't changed any code since yesterday. Here is how I am sending data to the GPU for instancing:

      glGenBuffers(1, &iVBO);
      glBindBuffer(GL_ARRAY_BUFFER, iVBO);
      glBufferData(GL_ARRAY_BUFFER, (ml_instance->i_positions.size()*sizeof(glm::vec4)), NULL, GL_STATIC_DRAW);
      glBufferSubData(GL_ARRAY_BUFFER, 0, (ml_instance->i_positions.size()*sizeof(glm::vec4)), &ml_instance->i_positions[0]);

    And then in the vertex specification:

      glBindBuffer(GL_ARRAY_BUFFER, iVBO);
      glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
      glEnableVertexAttribArray(i_positions);
      glVertexAttribDivisor(i_positions, 1);   // **THIS IS WHERE THE PROGRAM CRASHES**
      glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0, TREES_INSTANCE_COUNT);

    I have checked ml_instance->i_positions; it has all the data that needs to be rendered. I have checked the value of i_positions against the vertex shader; it is the same as what I defined there. I am a little out of ideas here; everything looks pretty much fine. What am I missing?
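
    A hard crash (rather than a GL error) on that exact call usually points at the function pointer itself, not the arguments. A couple of hedged checks worth dropping in (this assumes a GLEW-style loader where GL entry points are really function pointers, and reuses the names from the question; shaderProgram is a stand-in for your program handle):

      // 1. Is the entry point loaded at all?  The GL 3.3 / ARB_instanced_arrays
      //    pointer can be null if the loader ran before the context was current,
      //    or against a context that lacks the feature.
      if (glVertexAttribDivisor == nullptr) {
          // re-initialise the loader after making the context current
          // (with GLEW: glewExperimental = GL_TRUE; glewInit();),
          // or fall back to glVertexAttribDivisorARB if only the extension exists
      }

      // 2. Is the attribute index valid?  -1 means "not an active attribute",
      //    which usually means a misspelled name or an attribute optimised away.
      GLint loc = glGetAttribLocation(shaderProgram, "i_positions");
      if (loc < 0) {
          // fix the name, or make sure the shader actually uses the attribute
      }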

    Read the article

  • Why can't I create Direct3D objects?

    - by quakkels
    I've been programming professionally for years using languages like VBScript, JavaScript, and C#. As a hobby, I'm getting into some C/C++ and games programming with DirectX. I am running into an issue where I cannot create Direct3D objects. I am using Visual C++ 2010 Express. After I installed VC++ 2010 Express I then installed the June 2010 release of DirectX. I am trying to include DirectX via #pragma statements. This is the code I have so far in my winmain.cpp source file:

      #include <Windows.h>
      #include <d3d11.h>
      #include <time.h>
      #include <iostream>
      using namespace std;

      #pragma comment(lib, "d3d11.lib")
      #pragma comment(lib, "d3dx11.lib")

      // program settings
      const string AppTitle = "Direct3D in a Window";
      const int ScreenWidth = 1024;
      const int ScreenHeight = 768;

      // direct3d objects
      LPDIRECT3D11 d3d = NULL; // this line is showing an error

    The type LPDIRECT3D11 is showing an error: Error: Identifier "LPDIRECT3D11" is undefined. Am I missing something here to get VC++ 2010 Express to recognize and load the DirectX libs? Thanks for any help.
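
    One thing worth checking that isn't a setup problem at all (my note, not from the question): LPDIRECT3D9 exists in Direct3D 9, but Direct3D 11 has no LPDIRECT3D11 type - you go straight to D3D11CreateDevice and work with ID3D11Device / ID3D11DeviceContext pointers. A minimal sketch of what the creation usually looks like instead:

      #include <d3d11.h>
      #pragma comment(lib, "d3d11.lib")

      bool InitD3D11(ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
      {
          D3D_FEATURE_LEVEL featureLevel;
          HRESULT hr = D3D11CreateDevice(
              NULL,                        // default adapter
              D3D_DRIVER_TYPE_HARDWARE,    // hardware driver
              NULL, 0,                     // no software rasterizer, no flags
              NULL, 0,                     // let the runtime pick a feature level
              D3D11_SDK_VERSION,
              outDevice, &featureLevel, outContext);
          return SUCCEEDED(hr);
      }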

    Read the article
