Search Results

Search found 19338 results on 774 pages for 'game loop'.

  • What's wrong with this OpenGL model picking code?

    - by openglNewbie
    I am making a simple model viewer using OpenGL. When I try to pick an object, OpenGL returns nothing, or an object that is somewhere else entirely. This is my code:

        GLuint buff[1024] = {0};
        GLint hits, view[4];
        glSelectBuffer(1024, buff);
        glGetIntegerv(GL_VIEWPORT, view);
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        gluPickMatrix(x, y, 1.0, 1.0, view);
        gluPerspective(45, (float)view[2]/(float)view[4], 1.0, 1500.0);
        glMatrixMode(GL_MODELVIEW);
        glRenderMode(GL_SELECT);
        glLoadIdentity();
        // I make the same transformations for normal render
        glTranslatef(0, 0, -zoom);
        glMultMatrixf(transform.M);
        glInitNames();
        glPushName(-1);
        for (int j = 0; j < allNodes.size(); j++) {
            glLoadName(allNodes.at(j)->id);
            allNodes.at(j)->Draw(textures);
        }
        glPopName();
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        hits = glRenderMode(GL_RENDER);
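
    One thing stands out on review: glGetIntegerv(GL_VIEWPORT, view) fills view with {x, y, width, height}, so view[4] reads past the end of the four-element array; the height is view[3]. A minimal correction of the projection line, assuming the rest of the setup is intended as written:

        // Aspect ratio should be width / height, i.e. view[2] / view[3];
        // view[4] is out of bounds for the 4-element viewport array.
        gluPerspective(45, (float)view[2] / (float)view[3], 1.0, 1500.0);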

  • Continuous Integration, what are the strategies to manage binary content?

    - by sebas
    Currently we are testing various configurations combining feature branching and CI with feature toggling. I can see there are several viable options out there for the code, but I also know that CI relies entirely on being able to merge the code. So I wonder: how do you manage CI with binary data, like art assets? I can also see another problem: all the code can be tested before committing, and I can even validate the data before committing, but how can I test the art? Should I use a different methodology for art content?

  • Rotate a vector relative to itself

    - by Paul Manta
    I have a plane defined by transform.forward and transform.right, with 0 degrees corresponding to the forward vector and positive 90 degrees to the right vector. How can I create a third vector rotated within this plane? A rotation of 0 degrees would mean the vector is identical to transform.forward; a rotation of 30 degrees would mean it forms a 30-degree angle with the forward vector. In other words, I want to rotate the forward vector relative to itself, in the plane it defines with the right vector.
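
    Since forward and right are perpendicular unit vectors spanning the plane, the rotated vector is simply a cosine/sine combination of the two. A minimal Unity-style C# sketch (assuming both vectors are normalized and orthogonal):

        // 0 degrees gives transform.forward; 90 degrees gives transform.right.
        float rad = degrees * Mathf.Deg2Rad;
        Vector3 rotated = Mathf.Cos(rad) * transform.forward
                        + Mathf.Sin(rad) * transform.right;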

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk briefly about it... How can software track a player's position so accurately, on such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.

  • Jumping over non-stationary objects without problems ... 2-D platformer ... how could this be solved? [on hold]

    - by help bonafide pigeons
    You know this problem... take Super Mario Bros. for example. When Mario/Luigi/etc. comes close to a pipe, an invisible boundary must stop his forward movement. However, when you jump you move in both x and y at the same time, and when you near the pipe in mid-air while falling (gravity in the program "pulling" the sprite back down), you don't want the player to get stuck somewhere between falling and moving forward. That problem is solved, but how about this one: the player controlling the ball object tries to jump and move rightwards over a non-stationary block that moves up and down. How could we compare the top and bottom x/y extents of the two objects to determine the safest outcome for the ball: falling back down, catching the ledge, being pushed down under the block, and so on?
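
    One widely used way to decide "top, side, or underneath" each frame is an axis-of-least-penetration test on the two bounding boxes, run against the platform's current position so its motion is accounted for automatically. A sketch with axis-aligned boxes (all names illustrative):

        #include <cmath>

        struct AABB { float x, y, w, h; };   // top-left corner + size

        // Push 'ball' out of 'block' along the axis of least penetration.
        // Returns false if the boxes do not overlap this frame.
        bool resolve(AABB& ball, const AABB& block) {
            float dx = (ball.x + ball.w * 0.5f) - (block.x + block.w * 0.5f);
            float dy = (ball.y + ball.h * 0.5f) - (block.y + block.h * 0.5f);
            float overlapX = (ball.w + block.w) * 0.5f - std::fabs(dx);
            float overlapY = (ball.h + block.h) * 0.5f - std::fabs(dy);
            if (overlapX <= 0.0f || overlapY <= 0.0f) return false;
            if (overlapX < overlapY)
                ball.x += (dx > 0.0f ? overlapX : -overlapX);  // pushed off the side
            else
                ball.y += (dy > 0.0f ? overlapY : -overlapY);  // lands on top or is pushed under
            return true;
        }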

  • 3D JS map rendering

    - by gotha
    In the past I've done a 2D tile map using HTML, CSS and JavaScript. Now I have the task of creating a 3D version using the same technologies - think of it as a space map where all planets have x/y/z positions. Currently, I have no idea how to approach this. Is there an existing library or something I can modify to do the job? If not, what method of rendering the map should I use? It needs to be as browser-independent as possible, so I can't use WebGL, Flash or canvas. I'm considering plain JS & HTML or SVG (using Raphael for compatibility).
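
    Without WebGL or canvas, the usual trick is to do the perspective projection yourself and position absolutely-placed DOM elements (or SVG nodes) at the projected coordinates. A sketch of the core math, assuming a camera looking down +z (names are illustrative):

        // Project a 3D point to 2D screen space with a simple pinhole camera.
        function project(p, cam, focalLength, screenW, screenH) {
            var x = p.x - cam.x, y = p.y - cam.y, z = p.z - cam.z;
            if (z <= 0) return null;         // behind the camera, skip it
            var scale = focalLength / z;     // perspective divide
            return {
                x: screenW / 2 + x * scale,
                y: screenH / 2 - y * scale,
                scale: scale                 // reuse to size the planet sprite
            };
        }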

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2, where the latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);

            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine; the camera reaches its desired direction. But the speed is not constant over the course of the movement - it slows down. How can I achieve a rotation with constant speed?
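
    The slowdown has a mechanical cause worth a hedged guess: Vector3.Cross of two unit vectors has length sin(angle between them), which shrinks toward zero as the directions align, and Matrix.CreateFromAxisAngle expects a normalized axis. Normalizing the axis, and clamping the last step so it never overshoots, should give constant angular speed - a sketch:

        _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
        if (_rotationAxis.LengthSquared() > 1e-12f)
        {
            _rotationAxis.Normalize();   // CreateFromAxisAngle assumes a unit axis

            // Advance by a fixed angular step, but never past the target.
            float remaining = (float)Math.Acos(MathHelper.Clamp(
                Vector3.Dot(Direction, _desiredDirection), -1f, 1f));
            float step = Math.Min(MathHelper.ToRadians(1f), remaining);

            _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, step);
            Direction = Vector3.TransformNormal(Direction, _rotation);
        }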

  • Java getResourceAsStream as local resource

    - by Dajgoro Labinac
    Before using LWJGL, I used the Graphics method, and there I displayed ImageIcons, with the picture file located in the resources. I used:

        ImageIcon tcard = new ImageIcon(this.getClass().getResource("RCA.png"));

    to load the image. Now, when I load textures in LWJGL, I have to use absolute paths to locate the file:

        tcard = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("C:/RCA.png"));

    I tried Googling, but I didn't find anything helpful. How can I load the image from the local resources, as in the first example?
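
    Slick's ResourceLoader already includes a classpath location, so a classpath-relative path should work in place of the absolute one - a sketch, assuming RCA.png is on the classpath (the exact prefix depends on where it sits relative to the classpath root, e.g. a package path like "com/example/RCA.png"):

        // Classpath-relative: no drive letter, so it also works from inside a JAR.
        tcard = TextureLoader.getTexture("PNG",
                ResourceLoader.getResourceAsStream("RCA.png"));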

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to constrain the player; he can place an unlimited number of (possibly overlapping) point lights in the level. The problem is that shaders containing the dynamic loops that would be necessary to calculate the lighting tend to be very slow. I had the idea that it might be possible to compile a shader n times, where n is the number of lights: if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
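
    What you describe is the standard "shader permutation" approach, and it works: give the loop a compile-time bound via a #define, compile one program per light count, and the compiler can unroll it. A sketch of generating the variants (compileShader is a hypothetical helper; GLSL requires #version on the first line, so the define is spliced in after it):

        #include <string>
        #include <vector>

        // Fragment source uses: for (int i = 0; i < NUM_LIGHTS; ++i) { ... }
        std::string withLightCount(const std::string& source, int n) {
            size_t afterVersion = source.find('\n') + 1;  // keep #version first
            return source.substr(0, afterVersion)
                 + "#define NUM_LIGHTS " + std::to_string(n) + "\n"
                 + source.substr(afterVersion);
        }

        std::vector<unsigned> buildVariants(const std::string& source, int maxLights) {
            std::vector<unsigned> programs;
            for (int n = 1; n <= maxLights; ++n)
                programs.push_back(compileShader(withLightCount(source, n)));  // hypothetical
            return programs;
        }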

  • How can I create a VBO of a type determined at runtime?

    - by lapin
    I've written a Vbo template class to work with OpenGL. I'd like to set the type from a config file at runtime. For example:

        <vbo type="bump_vt" ... />
        Vbo* pVbo = new Vbo(bump_vt, ...);

    Is there some way I can do this without a large if-else block such as:

        if (sType.compare("bump_vt") == 0)
            Vbo* pVbo = new Vbo(bump_vt, ...);
        else if ...

    I'm writing for multiple platforms in C++.
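
    A common way to avoid the if-else ladder is a registry mapping the config string to a small factory function - one table entry per type. A sketch, assuming the vertex type is a template parameter and a non-template VboBase interface exists for the map to hold (both are assumptions, since the real constructor arguments are elided here):

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        using Factory = std::function<std::unique_ptr<VboBase>()>;

        static const std::map<std::string, Factory> kVboFactories = {
            { "bump_vt",  []{ return std::make_unique<Vbo<bump_vt>>();  } },
            { "plain_vt", []{ return std::make_unique<Vbo<plain_vt>>(); } },
        };

        std::unique_ptr<VboBase> makeVbo(const std::string& type) {
            auto it = kVboFactories.find(type);
            return it == kVboFactories.end() ? nullptr : it->second();
        }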

  • Common way to store model transformations

    - by redreggae
    I'm wondering what's the best way to store the transformations in a model class. What I came up with is to store the translation and scaling in a Vector3, and the rotation in a Matrix4. On each update (frame) I multiply the three matrices (first building a translation and a scaling matrix) to get the world matrix; this way I accumulate no error.

        world = translation * scaling * rotation

    Another way would be to store the rotation in a quaternion, but then I would pay a high cost to convert it to a matrix at every time step. If I lerp the model, I convert the rotation matrix to a quaternion and then back to a matrix. As a speed optimization I keep a dirty flag for each transformation, so that I only do a matrix multiplication if necessary:

        world = translation
        if (isScaled) { world *= scaling }
        if (isRotated) { world *= rotation }

    Is this a common way, or is it more common to have only one Matrix4 for all transformations? And is it better to store the rotation only as a quaternion? For info: I'm currently building a CSS3D engine in JavaScript, but these questions are relevant for every 3D engine.
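
    One middle ground many engines use (a sketch; matrixFromQuaternion and friends are hypothetical helpers): store the rotation as a quaternion, which is cheap to normalize and to slerp, and cache its matrix form behind the same dirty flag, so the quaternion-to-matrix conversion runs at most once per change rather than every frame:

        function worldMatrix(model) {
            if (model.rotationDirty) {
                // Convert only when the quaternion actually changed.
                model.rotationMatrix = matrixFromQuaternion(model.rotation);
                model.rotationDirty = false;
            }
            var world = matrixFromTranslation(model.position);
            if (model.isScaled)  world = multiply(world, matrixFromScale(model.scale));
            if (model.isRotated) world = multiply(world, model.rotationMatrix);
            return world;
        }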

  • XNA - positioning after rotation

    - by DijkeMark
    I have a turret with two gun barrels, and the turret rotates towards my mouse. So far, no problem. It then creates a few bullets and positions them at the ends of the gun barrels. Here is the problem: it only works while the gun is pointing upwards. The moment it rotates, the ends of the gun barrels have moved, of course, so the bullets don't spawn at the ends of the barrels, but at the place where the barrel ends are when the turret points upwards. How can I find where the ends of the gun barrels are once the turret has rotated? Thanks in advance, Mark Dijkema. PS. If you need code, please let me know; I didn't post any yet because I didn't know what code you would need.
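
    The usual fix is to store each muzzle as an offset from the turret's pivot (measured with the turret pointing up) and rotate that offset by the turret's current angle every time a bullet spawns. A minimal XNA-style sketch (names are illustrative):

        // muzzleOffset: barrel end relative to the turret pivot, unrotated.
        Vector2 RotatedMuzzle(Vector2 turretPos, Vector2 muzzleOffset, float turretAngle)
        {
            // Spin the local offset by the turret's rotation, then move to world space.
            Vector2 rotated = Vector2.Transform(muzzleOffset,
                                                Matrix.CreateRotationZ(turretAngle));
            return turretPos + rotated;
        }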

  • How much time will it take to learn 3ds Max?

    - by Mirror51
    I am not a 3D developer, but I want to learn 3ds Max just for simple house building with 2-3 rooms. Actually, I don't want to develop from scratch. What I really want to do is take existing models of homes, rooms and hotels from the internet and add my name or my photo there, just for fun. So I want to know how much time you think it will take me to do that sort of stuff. It's not my career, just a hobby. If it's going to take a long time, I don't want to waste it; but if I can get going in a week or so, that would be good. I wanted to ask experienced developers. Thanks

  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system which can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), a texture handler, light calculation, etc. So here's an example: I would like to display an object which has a texture and a toon-shading material applied, and it should be moveable. I could write ONE shading program that handles all three functionalities, accessed through three different components (texture handler, toon shading, transform). This means I have to take care of feeding a GLSL shader with different uniforms/attributes, which implies knowing all the uniform and attribute locations the GLSL shader owns. It would also be necessary to provide different algorithms to calculate the value for each input variable; similar functions would be grouped together in one component. A possible way would be to wrap each shader in its own definition file written in JSON/XML, and parse that file in C++ to get all input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which is the best approach.
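
    To make the JSON idea concrete, the definition file mostly needs to map each input variable to the module responsible for feeding it, so the C++ side can query locations once after linking and route values without string lookups at draw time. A sketch of such a file (the format is invented purely for illustration):

        {
          "shader": "toon_textured",
          "modules": ["transform", "texture", "toon"],
          "uniforms": {
            "uMVP":      { "module": "transform", "type": "mat4" },
            "uDiffuse":  { "module": "texture",   "type": "sampler2D" },
            "uToonRamp": { "module": "toon",      "type": "sampler2D" }
          },
          "attributes": {
            "aPosition": "vec3",
            "aTexCoord": "vec2",
            "aNormal":   "vec3"
          }
        }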

  • Why does Unity's .obj import flip my x coordinate?

    - by milkplus
    When I import my Wavefront .obj model into Unity and then draw lines over it using the same coordinates from the .obj file, the x coordinate is negated. I don't see any option in the importer that might be doing that, and I'm using the same localToWorldMatrix and the same coordinate data as in the .obj file. Hmmm.

        GL.PushMatrix();
        GL.MultMatrix(transform.localToWorldMatrix);
        CreateMaterial();
        lineMaterial.SetPass(0);
        GL.Color(new Color(0, 1, 0));
        GL.Begin(GL.LINES);
        GL.Vertex(p1);
        GL.Vertex(p2);
        GL.Vertex(p2);
        GL.Vertex(p3);
        // ...
        GL.End();
        GL.PopMatrix();
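
    A hedged explanation: the .obj convention is right-handed while Unity's coordinate system is left-handed, and Unity's model importer converts between them by negating one axis (x) and flipping the triangle winding. If the lines are built from raw file coordinates, mirroring x the same way should make them coincide - a Unity C# sketch:

        // Mirror a point from .obj file space into Unity's imported space.
        Vector3 ToImportedSpace(Vector3 p)
        {
            return new Vector3(-p.x, p.y, p.z);
        }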

  • XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4

    - by danbystrom
    Beginner question. Synopsis: my water effect does something that causes the drawing of my sky sphere to throw an exception when run in full screen. The exception is: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." This happens both when I start in full screen directly and when I switch to full screen from windowed mode. It does NOT happen, however, if I comment out the drawing of my water. So what in my water effect could possibly cause the drawing of my sky sphere to choke?
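
    A hedged guess at the mechanism: sampler states are device-wide in XNA, so if the water pass leaves a Vector4-format texture bound to a sampler that is still set to linear filtering, the next draw call - the sky sphere - is the one that trips the HiDef validation. Resetting the sampler state before drawing the sky should confirm it:

        // Vector4 textures may only be sampled with point filtering on HiDef;
        // reset whatever sampler the water effect left behind.
        GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;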

  • How to achieve 'forward' movement (into the screen) using Cocos2D?

    - by lemikegao
    I'm interested in creating a 2.5D first-person shooter (like Doom) and I currently don't understand how to implement the player moving forward. The player will also be able to look around the world (left, right, up, down) via gyroscope control. I plan to use only 2D sprites, no 3D models. My first attempt was to increase the scale of the layers to make it appear as if the player were moving toward the objects, but I'm not sure how to make it seem as if the player is passing around the objects (instead of running into them). If there are extensions I should take a look at (like Cocos3D), please let me know. Thanks for the help! Note: I've only created 2D games, so I was hoping to be guided in the right direction.

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 400-poly character meshes on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps. The view is from the top, as in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must provide fast and simple skeletal animation per frame for 100+ low-poly meshes. Any ideas?

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:

      1. Set a render target.
      2. Draw each character as a quadrangle to the render target, using an orthographic projection.
      3. Unset the render target.
      4. Draw the render target's texture using a perspective projection and a world transform.

    My problem is how to deal with strings whose character length exceeds the render target's dimensions. For example, with the string "This is a reallllllllllly long string", if the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be set each frame, but wouldn't that be far too costly?
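
    One common answer is to measure the string first and size the target (or wrap the text) to fit, reallocating only when the text outgrows the current target rather than every frame. A sketch, assuming an XNA-style SpriteFont (names are illustrative):

        // Measure first; only reallocate when the text outgrows the target.
        Vector2 size = font.MeasureString(text);
        if (target == null || size.X > target.Width || size.Y > target.Height)
        {
            if (target != null) target.Dispose();
            target = new RenderTarget2D(GraphicsDevice,
                                        (int)Math.Ceiling(size.X),
                                        (int)Math.Ceiling(size.Y));
        }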

  • Vertex data split into separate buffers or one single structure?

    - by kiba2
    Is it better to have all vertex data in one structure like this:

        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };

    Or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because it is the same for each instance of a shared vertex, and that seems to hold for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, like a cube, where the texture coordinates of each face may be the same but end up differing where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) and non-indexed buffers in the same VBO?
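
    On the last point: core OpenGL draws with a single index array that addresses all attributes at once, so per-attribute indexing isn't available - a shared position with differing texture coordinates has to become two vertices. For the layout question, here is what the interleaved version looks like on the GL side (float fields and attribute locations 0-2 assumed):

        #include <stddef.h>   /* offsetof */

        typedef struct { float x, y, z, u, v, nx, ny, nz; } Vertex;

        /* One interleaved VBO: every attribute shares the same stride. */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, x));   /* position */
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, u));   /* texcoord */
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, nx));  /* normal   */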

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK showing how one can attain beautiful results using just one (rather tiny) texture sent through the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture, and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to have a half-space normal map using only RG from an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)? Here's the texture being used.
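
    The grayscale-channels-as-masks part, at least, is straightforward to sketch: each channel selects how much of a hand-picked material colour shows through, so four masks come out of one RGBA fetch. A hedged GLSL illustration (the colours and the packing are assumptions, not the author's actual technique):

        uniform sampler2D masks;                      // R,G,B,A = four grayscale masks
        uniform vec3 colorA, colorB, colorC, colorD;  // per-material tints

        void main() {
            vec4 m = texture2D(masks, gl_TexCoord[0].xy);
            vec3 albedo = m.r * colorA + m.g * colorB
                        + m.b * colorC + m.a * colorD;
            gl_FragColor = vec4(albedo, 1.0);
        }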

  • Collision: Vector class (Java)

    - by user8363
    When handling collision detection/response and you need a vector class, do you need to create that class yourself, or is there a Java class you can use? A vector class should have methods like subtract(Vector v), normalize(), dotProduct(Vector v), ... At the moment it seems logical to use classes like java.awt.Rectangle and java.awt.Polygon to calculate collisions. Would I be right to use these classes for this purpose? My question is not about how to implement collision detection - I know how that works. I'm wondering what would be a correct and clean way to implement it in Java, since I'm fairly new to the language and to application development in general.
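
    The standard library has no general-purpose math vector (the java.awt classes are AWT rendering geometry, and java.util.Vector is a legacy collection, not math), so small games usually roll their own or pull one from a library such as JOML or libGDX. A minimal sketch of the hand-rolled version:

        // Minimal immutable 2D vector for collision math.
        public final class Vector2 {
            public final double x, y;

            public Vector2(double x, double y) { this.x = x; this.y = y; }

            public Vector2 subtract(Vector2 v) { return new Vector2(x - v.x, y - v.y); }

            public double dotProduct(Vector2 v) { return x * v.x + y * v.y; }

            public double length() { return Math.sqrt(dotProduct(this)); }

            public Vector2 normalize() {
                double len = length();
                return len == 0 ? this : new Vector2(x / len, y / len);
            }
        }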

  • Why is my primitive XNA square not drawn/shown?

    - by Mech0z
    I have made this class to draw a rectangle, but I can't get it to be drawn. I have no issues displaying a 3D model created in 3ds Max, but showing these primitives seems much harder. I create it with:

        board = new Board(Vector3.Zero, 1000, 1000, Color.Yellow);

    And here is the implementation:

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Quadro.Models;

        namespace Quadro
        {
            public class Board : IGraphicObject
            {
                // Private fields
                private Vector3 modelPosition;
                private BasicEffect effect;
                private VertexPositionColor[] vertices;
                private Matrix rotationMatrix;
                private GraphicsDevice graphicsDevice;
                private Matrix cameraProjection;

                // Constructor
                public Board(Vector3 position, float length, float width, Color color)
                {
                    var _color = color;
                    vertices = new VertexPositionColor[6];
                    vertices[0].Position = new Vector3(position.X, position.Y, position.Z);
                    vertices[1].Position = new Vector3(position.X, position.Y + width, position.Z);
                    vertices[2].Position = new Vector3(position.X + length, position.Y, position.Z);
                    vertices[3].Position = new Vector3(position.X + length, position.Y, position.Z);
                    vertices[4].Position = new Vector3(position.X, position.Y + width, position.Z);
                    vertices[5].Position = new Vector3(position.X + length, position.Y + width, position.Z);

                    for (int i = 0; i < vertices.Length; i++)
                    {
                        vertices[i].Color = color;
                    }

                    initFields();
                }

                private void initFields()
                {
                    graphicsDevice = SharedGraphicsDeviceManager.Current.GraphicsDevice;
                    effect = new BasicEffect(graphicsDevice);
                    modelPosition = Vector3.Zero;

                    float screenWidth = (float)graphicsDevice.Viewport.Width;
                    float screenHeight = (float)graphicsDevice.Viewport.Height;
                    float aspectRatio = screenWidth / screenHeight;

                    this.cameraProjection = Matrix.CreatePerspectiveFieldOfView(
                        MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                    this.rotationMatrix = Matrix.Identity;
                }

                // Public methods
                public void Update(GameTimerEventArgs e)
                {
                }

                public void Draw(Vector3 cameraPosition, GameTimerEventArgs e)
                {
                    Matrix cameraView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);

                    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                    {
                        pass.Apply();
                        effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition);
                        effect.View = cameraView;
                        effect.Projection = cameraProjection;
                        graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList,
                            vertices, 0, 2, VertexPositionColor.VertexDeclaration);
                    }
                }

                public void Rotate(Matrix rotationMatrix)
                {
                    this.rotationMatrix = rotationMatrix;
                }

                public void Move(Vector3 moveVector)
                {
                    this.modelPosition += moveVector;
                }
            }
        }
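
    Two things a reviewer would check first here (a hedged checklist, not a confirmed diagnosis): BasicEffect ignores vertex colours unless VertexColorEnabled is set, and parameters assigned after pass.Apply() only take effect on the next Apply. The triangle winding is also worth verifying, since XNA culls counter-clockwise-wound faces by default. Restructured accordingly:

        // Enable vertex colours and set parameters BEFORE applying the pass.
        effect.VertexColorEnabled = true;
        effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition);
        effect.View = cameraView;
        effect.Projection = cameraProjection;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList,
                vertices, 0, 2, VertexPositionColor.VertexDeclaration);
        }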

  • How to configure a background image to sit behind OpenGL rendering on Android

    - by Maxim Shoustin
    I have a class that draws a white line:

        public class Line {
            // private FloatBuffer vertexBuffer;
            private FloatBuffer frameVertices;
            ByteBuffer diagIndices;

            float[] vertices = {
                -0.5f, -0.5f, 0.0f,
                -0.5f,  0.5f, 0.0f
            };

            public Line(GL10 gl) {
                // a float has 4 bytes, so we allocate 4 bytes per coordinate
                ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
                vertexByteBuffer.order(ByteOrder.nativeOrder());

                // allocate the memory from the byte buffer
                frameVertices = vertexByteBuffer.asFloatBuffer();

                // fill the vertex buffer with the vertices
                frameVertices.put(vertices);

                // set the cursor position to the beginning of the buffer
                frameVertices.position(0);
            }

            /** The draw method for the line with the GL context */
            public void draw(GL10 gl) {
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices);
                gl.glColor4f(1.0f, 1.0f, 1.0f, 1f);
                gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, vertices.length / 3);
                gl.glLineWidth(5.0f);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }

    It works fine. The problem is that when I add a background image, I don't see the line any more:

        glView = new GLSurfaceView(this);                 // allocate a GLSurfaceView
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        glView.setRenderer(new mainRenderer(this));       // use a custom renderer
        glView.setBackgroundResource(R.drawable.bg_day);  // <- BG
        glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);

    How can I get rid of that?
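
    A hedged pointer: setBackgroundResource() draws the background as part of the ordinary View layer, which by default sits above the SurfaceView's GL surface, hiding everything GL renders. Two common workarounds are to render the background as a textured quad inside GL instead, or to lift the GL surface above the view hierarchy (ordering of any sibling views permitting), which is a one-liner:

        // Composite the translucent GL output over the background
        // instead of under it; call before the view is attached.
        glView.setZOrderOnTop(true);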

  • Using OpenCL to jiggle the Pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear point against this is that the approach won't benefit from the fixed-function hardware stages the way the regular pipeline does (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible so we won't have many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't be affected by the shader, it could be interesting to cull them beforehand. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (effort aside)? The resulting picture would be drawn in OpenGL.
