Search Results

Search found 21057 results on 843 pages for 'video game consoles'.


  • Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures at different points on one large face of a 3D object. My 3D object is just a cube representing a poker table. I have a 2D PNG for the players' placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures; however, that would waste memory and thus be less efficient. What do you suggest?
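
    The post doesn't say which engine is in use, so purely as a hedged sketch in XNA 4.0-style C# (the method, seatPositions, placeholderTexture and halfSize names are all hypothetical): each placeholder is drawn as a small textured quad lying flat on the table's top face, so only the one small PNG needs to stay in memory.

        // Hedged sketch, XNA-style API assumed. Draws the placeholder PNG as a small
        // textured quad on the plane y = tableTopY at each occupied seat position.
        void DrawPlaceholders(GraphicsDevice device, BasicEffect effect, Texture2D placeholderTexture,
                              Vector3[] seatPositions, float tableTopY, float halfSize)
        {
            effect.TextureEnabled = true;
            effect.Texture = placeholderTexture;   // effect.World/View/Projection assumed already set

            foreach (Vector3 seat in seatPositions)
            {
                // Four corners of a quad centred on the seat position, lying on the table top.
                VertexPositionTexture[] quad =
                {
                    new VertexPositionTexture(new Vector3(seat.X - halfSize, tableTopY, seat.Z - halfSize), new Vector2(0, 0)),
                    new VertexPositionTexture(new Vector3(seat.X + halfSize, tableTopY, seat.Z - halfSize), new Vector2(1, 0)),
                    new VertexPositionTexture(new Vector3(seat.X - halfSize, tableTopY, seat.Z + halfSize), new Vector2(0, 1)),
                    new VertexPositionTexture(new Vector3(seat.X + halfSize, tableTopY, seat.Z + halfSize), new Vector2(1, 1)),
                };
                short[] indices = { 0, 1, 2, 2, 1, 3 };

                foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, quad, 0, 4, indices, 0, 2);
                }
            }
        }

    Drawing one quad per seated player this way avoids baking a full-face texture, which is the memory concern raised above.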

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object containing a byte[20] is passed into a BlockingCollection on one thread, and another thread gets the object back with a byte[0] when calling BlockingCollection.Take(). I think this is a threading issue, but I do not know where or why it is happening, considering that BlockingCollection is a concurrent collection. Sometimes on thread 2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated.

    MessageBuffer.cs

        public class MessageBuffer : BlockingCollection<Message>
        {
        }

    In the class that has Listener() and ReceivedMessageHandler(object messageProcessor):

        private MessageBuffer RecievedMessageBuffer;

    On Thread1:

        private void Listener()
        {
            while (this.IsListening)
            {
                try
                {
                    Message message = Message.ReadMessage(this.Stream, this);
                    if (message != null)
                    {
                        this.RecievedMessageBuffer.Add(message);
                    }
                }
                catch (IOException ex)
                {
                    if (!this.Client.Connected)
                    {
                        this.OnDisconnected();
                    }
                    else
                    {
                        Logger.LogException(ex.ToString());
                        this.OnDisconnected();
                    }
                }
                catch (Exception ex)
                {
                    Logger.LogException(ex.ToString());
                    this.OnDisconnected();
                }
            }
        }

    Message.ReadMessage(NetworkStream stream, iTcpConnectClient client):

        public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client)
        {
            int ClassType = -1;
            Message message = null;
            try
            {
                ClassType = stream.ReadByte();
                if (ClassType == -1)
                {
                    return null;
                }
                if (!Message.IDTOCLASS.ContainsKey((byte)ClassType))
                {
                    throw new IOException("Class type not found");
                }
                message = Message.GetNewMessage((byte)ClassType);
                message.Client = client;
                message.ReadData(stream);
                if (message.Buffer.Length < message.MessageSize + Message.HeaderSize)
                {
                    return null;
                }
            }
            catch (IOException ex)
            {
                Logger.LogException(ex.ToString());
                throw ex;
            }
            catch (Exception ex)
            {
                Logger.LogException(ex.ToString());
                //throw ex;
            }
            return message;
        }

    On Thread2:

        private void ReceivedMessageHandler(object messageProcessor)
        {
            if (messageProcessor != null)
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage(messageProcessor);
                }
            }
            else
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage();
                }
            }
        }

    PlayerStateMessage.cs

        public class PlayerStateMessage : Message
        {
            public GameObject PlayerState;

            public override int MessageSize
            {
                get { return 12; }
            }

            public PlayerStateMessage()
                : base()
            {
                this.PlayerState = new GameObject();
            }

            public PlayerStateMessage(GameObject playerState)
            {
                this.PlayerState = playerState;
            }

            public override void Reconstruct()
            {
                this.PlayerState.Poisiton = this.GetVector2FromBuffer(0);
                this.PlayerState.Rotation = this.GetFloatFromBuffer(8);
                base.Reconstruct();
            }

            public override void Deconstruct()
            {
                this.CreateBuffer();
                this.AddToBuffer(this.PlayerState.Poisiton, 0);
                this.AddToBuffer(this.PlayerState.Rotation, 8);
                base.Deconstruct();
            }

            public override void HandleMessage(object messageProcessor)
            {
                ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this);
            }
        }

    Message.GetVector2FromBuffer(int bufferlocation): this is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20].

        public Vector2 GetVector2FromBuffer(int bufferlocation)
        {
            return new Vector2(
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation),
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4));
        }

    Read the article

  • How to create Button/Switch-Like Tile where you can step on it and change its value?

    - by aldroid16
    If the player steps on the button tile while it is true, it becomes false; if the player steps on it while it is false, it becomes true. The problem is that while the player is standing on (intersecting) the button tile, it keeps updating the condition: from true it becomes false, and because it is now false and the player still intersects it, it becomes true again, flipping true-false-true-false every update. I used ElapsedGameTime to slow the updating down so the player has a chance to set the button to true or false, but that is not the solution I was looking for. Is there another way to keep the value fixed at true or false while the player is standing on the button tile?
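
    One common fix is edge detection: toggle only on the frame where the player goes from not touching the tile to touching it, and arm the button again only once the player has stepped off. A minimal sketch in plain C# (the class and member names are made up for illustration):

        public class ButtonTile
        {
            public bool Value { get; private set; }

            // Whether the player was touching the tile on the previous update.
            private bool wasTouching;

            // Call once per update, passing the result of the player/tile intersection test.
            public void Update(bool isTouching)
            {
                // Toggle only on the rising edge: not touching last frame, touching now.
                if (isTouching && !wasTouching)
                {
                    Value = !Value;
                }
                wasTouching = isTouching;
            }
        }

    While the player stands on the tile, isTouching stays true, so the rising-edge test never fires again until they step off and back on.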

    Read the article

  • OpenXDK Questions

    - by iamcreasy
    I was strolling around XBox development. Apart from buying a devkit from Microsoft, another thing that got my attention is OpenXDK, which stands for Open XBox Development Kit. From their main site it's pretty obvious that there hasn't been any update since 2005, but digging a little deeper I found that their project repository was still being updated; the last time stamp was 2009-02-15. A quick Google search suggested it's not really in a good state to poke around in: many, MANY features are absent. Being a hobby project, I perfectly understand. But those results are quite old. The question is: does anybody have experience with OpenXDK? If so, is it possible to shed some light on it? On its limitations? Is it a mature project? How is the latest version and what is it capable of doing? Or should I just stay away from it?

    Read the article

  • OutOfBounds Exception when creating a PolygonShape using jbox2d

    - by B3nGr33ni3r
    So here's the deal: I'm parsing a file that contains the vertices for a polygon that I want to create in Box2D. I create a new PolygonShape() and then call .set(), giving it a defined array of Vec and that array's .length property. I expected this to work, since the JBox2D documentation says this method takes a Vec array and the count of Vec objects in that array. However, it errors out, and it seems to be unrelated to my code. The error I get is Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 8 at org.jbox2d.collision.shapes.PolygonShape.set(PolygonShape.java:174) and, upon looking at that line in the JBox2D SVN repository, I still cannot figure out the issue. Any help is appreciated!

    Read the article

  • Alternatives to voxel-based terrain

    - by Neomex
    Are there any alternatives to voxel-based terrain? Such terrain should be fully destructible, allow for arches and overhangs, preserve sharp features where needed, and keep a consistent topology.

    "Maybe you can explain the problem that makes you ask this question? Voxel-based terrain is basically just using a 3D grid of data to store the terrain. There are lots of ways to render that data, but it doesn't get much simpler for storing it." – Byte56

    Current isosurface extraction methods aren't the most effective or bug-free. Cubical Marching Squares seems to solve most of the issues, but it is a relatively new method and there aren't many resources about it (I've found a single university paper). Even if we stick to CMS, when we want to add multi-material support we can either divide the surface into multiple meshes, or pass a texture array or texture atlas to the shaders; then we are limited to a set number of textures and additionally increase memory usage a lot.

    Read the article

  • How to determine collision direction between two rectangles?

    - by Jon
    I am trying to figure out how to determine the direction in which a collision occurs between two rectangles. One rectangle does not move. The other rectangle has a velocity in some direction. When a collision occurs, I want to be able to set the position of the moving rectangle to the point of impact. I am stuck on determining from which direction the impact occurs. If I am moving strictly vertically or horizontally, detection works great, but when moving in both directions at the same time, strange things happen. What is the best way to determine from which direction a collision occurs between two rectangles?
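
    A common way to get the direction is to compare the overlap on each axis and resolve along the axis of least penetration; the sign of the centre-to-centre offset then says which side was hit. A plain C# sketch (the Rect type and HitSide enum are illustrative, not from the post, and it assumes screen coordinates with y increasing downward):

        using System;

        public struct Rect { public float X, Y, Width, Height; }

        public enum HitSide { Left, Right, Top, Bottom }

        public static class CollisionSide
        {
            // Which side of the fixed rectangle the moving rectangle hit, or null if no overlap.
            public static HitSide? Get(Rect moving, Rect fixedRect)
            {
                float dx = (moving.X + moving.Width / 2f) - (fixedRect.X + fixedRect.Width / 2f);
                float dy = (moving.Y + moving.Height / 2f) - (fixedRect.Y + fixedRect.Height / 2f);

                float overlapX = (moving.Width + fixedRect.Width) / 2f - Math.Abs(dx);
                float overlapY = (moving.Height + fixedRect.Height) / 2f - Math.Abs(dy);

                if (overlapX <= 0 || overlapY <= 0)
                    return null; // no collision

                // Resolve along the axis with the smaller overlap; the other axis is the one the
                // rectangle slid along, which is what goes wrong when moving diagonally.
                if (overlapX < overlapY)
                    return dx > 0 ? HitSide.Right : HitSide.Left;   // hit the fixed rect's right/left side
                return dy > 0 ? HitSide.Bottom : HitSide.Top;       // y-down: below means the bottom side
            }
        }

    Snapping the moving rectangle back by the winning overlap (overlapX or overlapY, signed by dx or dy) places it at the point of impact.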

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete, grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and divide by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    - Use power-of-two scaling (e.g. 8:1), which enables a bitshift (e.g. val >> 3) to do the division. I'm not sure how performant that is in shaders; the values are less intuitive to read, but it is possibly quite a bit faster than dividing by a non-PoT value.
    - Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.
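
    For what it's worth, a fixed scale factor can also be folded into a single uniform (or into the model matrix), so the shader does one multiply by 0.1 rather than a divide; that is typically as cheap as the bitshift while keeping the intuitive decimetre resolution. A small CPU-side packing sketch in plain C# (names are illustrative only):

        using System;

        static class HeightPacking
        {
            const float UnitsPerMetre = 10f;                        // 1 short unit = 1 decimetre
            public const float MetresPerUnit = 1f / UnitsPerMetre;  // upload as a uniform; the vertex
                                                                    // shader multiplies positions by it

            // Convert a metre-unit coordinate to the packed short form, clamped to the short range.
            public static short Pack(float metres)
            {
                double units = Math.Round(metres * UnitsPerMetre);
                return (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, units));
            }
        }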

    Read the article

  • Using OpenCL to jiggle the Pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear point against this is that the approach won't benefit from the dedicated hardware the way the fixed pipeline functions on the device do (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible so we won't have many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't be affected by the shader, it could be interesting to cull them before. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (aside from the effort)? The resulting picture would be drawn in OpenGL.

    Read the article

  • OGRE 3D: How to create very basic gameworld [on hold]

    - by skiwi
    I'm considering trying to create an FPS (first-person shooter) using the Ogre 3D engine. I have done the basic tutorials (except CEGUI) and have read through the intermediate tutorial. I understand some of the more advanced concepts, but I'm stuck on very simple ones. First of all, I want to use some tiles (square ones, with relatively little height) as the floor; I guess I need to set up a loop to lay those tiles out. But how would I go about creating the tiles exactly? Like making each tile its own mesh, and then I would need to find some texture. Secondly, I guess I can derive the camera and movement functions from the basic tutorial, but I'll be needing a "soldier" (anything will do for now). What is the best way to create a moderately decent-looking soldier, or to obtain a decent one from an open library? And thirdly, how can I ensure that the soldier is actually walking on the ground instead of in mid-air? Will raycasting into the ground and adjusting the position based on that suffice?

    Read the article

  • Playing a death anim on an enemy that I want to remove

    - by Max
    I've been trying to find a tutorial on how best to do animations in Android. I already have some animations for my enemies and my character that are controlled by rectangles, changing the rectangle frame between updates, using a picture like this: When I'm shooting my enemies they lose HP, and when their HP == 0 they get removed. As long as I'm using an ArrayList (which I do for all enemies and bullets) I'm fine, since I can just use list.remove(i). But when I'm on a boss level and the boss's HP == 0, I want to remove him and play an animation of an explosion of stars before the end screen. Is there a preferred way to do temporary animations like this? If you can give me an example or point me to a tutorial, I'd be really grateful!
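
    One common pattern is to treat the explosion as a short-lived effect object kept in its own list: when the boss's HP reaches 0, remove him and add the effect at his position; the effect flags itself finished once its last frame has shown, and only then do you move on to the end screen. A sketch in C# (all names are made up; the same structure maps directly onto a Java/Android game loop):

        using System.Collections.Generic;

        public class TimedAnimation
        {
            public float X, Y;
            public int CurrentFrame { get; private set; }
            public bool Finished { get; private set; }

            private readonly int frameCount;
            private readonly float frameDuration; // seconds per frame
            private float elapsed;

            public TimedAnimation(float x, float y, int frameCount, float frameDuration)
            {
                X = x; Y = y;
                this.frameCount = frameCount;
                this.frameDuration = frameDuration;
            }

            // Advance the animation; once the last frame has played it marks itself finished.
            public void Update(float dt)
            {
                elapsed += dt;
                CurrentFrame = (int)(elapsed / frameDuration);
                if (CurrentFrame >= frameCount) Finished = true;
            }
        }

        // In the game loop, with a List<TimedAnimation> effects:
        //   when the boss dies: effects.Add(new TimedAnimation(boss.X, boss.Y, 8, 0.1f));
        //   each frame: update every effect, draw its CurrentFrame rectangle from the explosion sheet,
        //   then effects.RemoveAll(e => e.Finished); show the end screen once the list is empty.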

    Read the article

  • Calculating angle between 2 vectors

    - by Error 454
    I am working on some movement AI where there are no obstacles and movement is restricted to the XY plane. I am calculating two vectors: v, the direction of ship 1, and w, the vector pointing from the position of ship 1 to ship 2. I am then calculating the angle between these two vectors using the standard formula: arccos( v . w / ( |v| |w| ) ). The problem I'm having is that arccos only returns values between 0 and 180 degrees, which makes it impossible to determine whether I should turn left or right to face the other ship. Is there a better way to do this?
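
    The usual trick is the sign of the 2D cross product (the "perp dot" product) of v and w: arccos gives how far to turn, the cross product's sign gives which way, and atan2 combines both in one call. A plain C# sketch (vectors written out as float components to keep it self-contained; which sign means "left" flips if your y axis points down):

        using System;

        static class Steering
        {
            // Signed angle (radians) from direction v to direction w: positive means w is
            // counter-clockwise from v (a left turn in a y-up coordinate system).
            public static float SignedAngle(float vx, float vy, float wx, float wy)
            {
                float cross = vx * wy - vy * wx;      // 2D cross product ("perp dot")
                float dot = vx * wx + vy * wy;
                return (float)Math.Atan2(cross, dot); // handles the quadrant, no arccos range problem
            }
        }

        // Usage: float a = Steering.SignedAngle(ship1DirX, ship1DirY, toShip2X, toShip2Y);
        // a > 0 -> turn one way, a < 0 -> turn the other; |a| is the remaining angle to face ship 2.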

    Read the article

  • Safe Zone Implementation in Asteroids

    - by Moaz
    I would like to implement a safe zone for the ship in Asteroids, so that when the ship gets destroyed it doesn't respawn there unless the area is clear of asteroids. I tried to check the distance between each asteroid and the ship, and if it is above a threshold, set a flag on the ship saying the zone is safe; but sometimes it works and sometimes it doesn't.

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin(); itr_astroid != asteroids.end(); )
        {
            if (currentShip.m_state == Ship::Ship_Dead)
            {
                float distance = itr_astroid->getCenter().distance(Vec2f(getWindowWidth()/2, getWindowHeight()/2));
                if (distance > 200)
                {
                    currentShip.m_saveField = true;
                    break;
                }
                else
                {
                    currentShip.m_saveField = false;
                    itr_astroid++;
                }
            }
            else
            {
                itr_astroid++;
            }
        }
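
    One likely culprit in the loop above: it declares the zone safe as soon as it finds a single asteroid farther than the threshold and breaks out, instead of requiring every asteroid to be far enough away. The corrected check, sketched here in C# for brevity (treat it as pseudocode for the C++/Cinder loop; the tuple-based helper is hypothetical):

        using System.Collections.Generic;

        static class RespawnCheck
        {
            // Safe only if *every* asteroid is outside the radius around the respawn point.
            public static bool IsZoneSafe(IEnumerable<(float X, float Y)> asteroidCenters,
                                          float centerX, float centerY, float radius)
            {
                foreach (var a in asteroidCenters)
                {
                    float dx = a.X - centerX, dy = a.Y - centerY;
                    if (dx * dx + dy * dy <= radius * radius)
                        return false; // one close asteroid is enough to rule the zone unsafe
                }
                return true; // no asteroid was inside the radius
            }
        }

    In the original loop the equivalent change is to start with m_saveField = true and only break (setting it to false) when a close asteroid is found.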

    Read the article

  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the full view and light vectors? I may be completely off track, but I have this gut feeling it's possible. Why? I'm working on a skin shader and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be transformed into an N·L + N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the precomputed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
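
    For reference, here is what the half-vector length actually depends on (my own quick derivation, not from the post), with $L$ and $V$ unit length:

        $$|H|^2 = |L + V|^2 = |L|^2 + 2\,(L \cdot V) + |V|^2 = 2 + 2\,(L \cdot V),
        \qquad
        N \cdot \hat{H} = \frac{N \cdot L + N \cdot V}{\sqrt{2 + 2\,(L \cdot V)}}.$$

    So the length is a function of L·V rather than of N·L and N·V: those two dot products leave the azimuthal angle between L and V around N free, which is why the inference generally needs L·V (or an equivalent view/light angle) supplied separately.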

    Read the article

  • How would I create alternating players (turn-based event)?

    - by Blue
    The picture above shows 2 players, each with 3 characters. I want to know how to make a turn-based event starting with player 1 and alternating with player 2, where in every alternation each character gets a turn. If a character dies, the next character on the same team goes, and so on. How would I create this? Is there a tutorial? I haven't made any turn-based games, so I don't know how to program this kind of thing.
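
    One straightforward structure is a small turn manager that alternates between the two players and cycles through each player's characters, skipping the dead ones; nothing engine-specific is needed. A plain C# sketch (all class and member names are invented for illustration):

        using System.Collections.Generic;

        public class Character
        {
            public string Name;
            public int Hp;
            public bool IsAlive { get { return Hp > 0; } }
        }

        public class TurnManager
        {
            private readonly List<Character>[] teams;    // teams[0] = player 1, teams[1] = player 2
            private readonly int[] nextIndex = { 0, 0 }; // next character to act on each team
            private int currentTeam;                     // 0 or 1, alternates every turn

            public TurnManager(List<Character> playerOne, List<Character> playerTwo)
            {
                teams = new[] { playerOne, playerTwo };
            }

            // Returns the next living character to act, alternating between the two players and
            // cycling through each player's characters in order. Returns null if the current
            // team has no one left alive (i.e. the battle is over).
            public Character NextTurn()
            {
                List<Character> team = teams[currentTeam];
                Character actor = null;

                // Look for the next living character on this team, starting where we left off.
                for (int tried = 0; tried < team.Count; tried++)
                {
                    Character candidate = team[nextIndex[currentTeam]];
                    nextIndex[currentTeam] = (nextIndex[currentTeam] + 1) % team.Count;
                    if (candidate.IsAlive) { actor = candidate; break; }
                }

                currentTeam = 1 - currentTeam; // hand the turn to the other player
                return actor;
            }
        }

    Calling NextTurn() repeatedly yields A1, B1, A2, B2, A3, B3, A1, ... and quietly skips any character whose HP has reached 0.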

    Read the article

  • XNA- Texturing problem, exporting model

    - by user1806687
    How can I disable the coloring when a texture is applied? Right now the texture is being colored over.

        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.TextureEnabled = textureApplied;
            if (textureApplied)
                effect.Texture = Texture2D.FromStream(GraphicsDevice, System.IO.File.OpenRead(texturePath));
            else
            {
                effect.DiffuseColor = ti.DiffuseColor;
                effect.EmissiveColor = ti.EmissiveColor;
            }
            ...
        }
        mesh.Draw();

    Also, is there any easy way for XNA to export models, or do I have to write my own?
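
    For context, BasicEffect modulates the texture by its material colours, so whatever DiffuseColor and EmissiveColor were set last keep tinting the texture. A hedged variant of the branch above that resets them to neutral when the texture is on (here texture stands for the Texture2D loaded once from texturePath, rather than re-loading it for every effect):

        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.TextureEnabled = textureApplied;
            if (textureApplied)
            {
                effect.Texture = texture;            // loaded once, outside the loop
                effect.DiffuseColor = Vector3.One;   // white: the texture shows unmodulated
                effect.EmissiveColor = Vector3.Zero; // no extra glow added on top
            }
            else
            {
                effect.DiffuseColor = ti.DiffuseColor;
                effect.EmissiveColor = ti.EmissiveColor;
            }
        }
        mesh.Draw();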

    Read the article

  • XNA- Transforming children

    - by user1806687
    So, I have a Model stored in MyModel that is made of three meshes. If you loop through MyModel.Meshes, the first two are children of the third one. I was wondering if anyone could tell me where the problem in my code is. This method is called whenever I want to programmatically change the position of the whole model:

        public void ChangePosition(Vector3 newPos)
        {
            Position = newPos;
            MyModel.Root.Transform =
                Matrix.CreateScale(VectorMathHelper.VectorMath(CurrentSize, DefaultSize, '/')) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Up, MathHelper.ToRadians(Rotation.Y)) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Right, MathHelper.ToRadians(Rotation.X)) *
                Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Forward, MathHelper.ToRadians(Rotation.Z)) *
                Matrix.CreateTranslation(Position);

            Matrix[] transforms = new Matrix[MyModel.Bones.Count];
            MyModel.CopyAbsoluteBoneTransformsTo(transforms);
            int count = transforms.Length - 1;
            foreach (ModelMesh mesh in MyModel.Meshes)
            {
                mesh.ParentBone.Transform = transforms[count];
                count--;
            }
        }

    This is the draw method:

        foreach (ModelMesh mesh in MyModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.View = camera.view;
                effect.Projection = camera.projection;
                effect.World = mesh.ParentBone.Transform;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }

    The thing is, the first time I call ChangePosition() everything works perfectly, but as soon as I call it again and again, the first two meshes (the child meshes) start to move away from the parent mesh. Another thing I wanted to ask: if I change the scale/rotation/position of a child mesh and then call CopyAbsoluteBoneTransformsTo(), will the child meshes be positioned properly (at the proper distance), or would achieving that require more math/methods? Thanks in advance.
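
    A plausible cause of the drift: CopyAbsoluteBoneTransformsTo already multiplies each bone by its parents, so writing those absolute matrices back into mesh.ParentBone.Transform means the next ChangePosition call starts from already-combined transforms and compounds them further each time. A hedged alternative is to leave the bone transforms exactly as loaded and combine them with a world matrix only when drawing (scale below stands in for the CurrentSize/DefaultSize ratio from the post):

        // Recomputed whenever Position/Rotation/scale change; the model's bones are never modified.
        Matrix world = Matrix.CreateScale(scale)
                     * Matrix.CreateRotationY(MathHelper.ToRadians(Rotation.Y))
                     * Matrix.CreateRotationX(MathHelper.ToRadians(Rotation.X))
                     * Matrix.CreateRotationZ(MathHelper.ToRadians(Rotation.Z))
                     * Matrix.CreateTranslation(Position);

        Matrix[] transforms = new Matrix[MyModel.Bones.Count];
        MyModel.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in MyModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.View = camera.view;
                effect.Projection = camera.projection;
                // The absolute bone transform places the mesh inside the model;
                // world places the whole model in the scene.
                effect.World = transforms[mesh.ParentBone.Index] * world;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }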

    Read the article

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the references of the paper as [Catt05]). So here's how it currently is: the simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test with a SKIN, meaning the closest points between two polygons are found and a contact is reported if their distance is less than SKIN. To resolve collisions, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper. Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this) via J V = (beta/dt) * overlap, where J is the Jacobian of the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the true overlap between the bodies (so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them they would swing! Also note that I am not looking for a sleeping scheme, but I would settle for any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have tried: using simultaneous position-based error correction as described in section 5.3.2 of the paper, which turned out to be worse than the current scheme.

    If you want to know the parameters I used:

    - Hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coefficient of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now; I modified velocity directly via the impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)

    EDIT In response to Cholesky's comment about warm starting the solver and Baumgarte: oh right, I forgot to mention! I do save the contact history and the impulses determined in this time step to be used as the initial guess in the next time step. As for Baumgarte, here's what actually happens in the code. A collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at that moment I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! That isn't right; I want the bodies to touch each other. So I turn on Baumgarte, whose role here is actually to pull the bodies together! Weird, I know: a scheme intended to push bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE Since the stack swings left and right, could something be wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0

    Read the article

  • Continuous Integration, what are the strategies to manage binary content?

    - by sebas
    Currently we are testing various configurations between feature branching and CI with feature toggling. I can see there are several viable options out there for the code, but I also know that CI relies entirely on the ability to merge. So I wonder: how do you manage CI with binary data, like art assets? I can also see another problem: all the code can be tested before committing, and I can even validate the data before committing, but how can I test the art?! Should I use another methodology for art content?

    Read the article

  • Isometric Collision Detection

    - by Sleepy Rhino
    I am having some issues trying to detect the collision of two isometric tiles. I have tried plotting the lines between each point on the tile and then checking for line intersections, however that didn't work (probably due to an incorrect formula). After looking into this for a while today, I believe I am overthinking it and there must be an easier way. I am not looking for code, just some advice on the best way to detect the overlap.
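
    Not code as such, but for concreteness: an axis-aligned isometric tile is a diamond, and a diamond is just the set of points with |dx|/halfWidth + |dy|/halfHeight <= 1 around its centre, which gives a very small test. A plain C# sketch (names are illustrative; the two-tile test is exact only when both tiles share the same aspect ratio, e.g. the usual 2:1):

        using System;

        static class IsoCollision
        {
            // True if the point lies inside the diamond tile centred at (cx, cy).
            public static bool PointInDiamond(float px, float py, float cx, float cy,
                                              float halfWidth, float halfHeight)
            {
                return Math.Abs(px - cx) / halfWidth + Math.Abs(py - cy) / halfHeight <= 1f;
            }

            // True if two same-aspect-ratio diamond tiles overlap: the Minkowski sum of two
            // such diamonds is itself a diamond with the half-axes added together.
            public static bool DiamondsOverlap(float cx1, float cy1, float halfW1, float halfH1,
                                               float cx2, float cy2, float halfW2, float halfH2)
            {
                return Math.Abs(cx1 - cx2) / (halfW1 + halfW2)
                     + Math.Abs(cy1 - cy2) / (halfH1 + halfH2) <= 1f;
            }
        }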

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to offload the update of the particle system that I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct, copied from my particle class, as follows:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    Then I have an UpdateParticle method in the .cu as well, which takes the three main parameters my particles need to update themselves based on the original line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particleList line in the UpdateParticle method, but then the particles don't fall as they should. I believe it is because I am not calling something that I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, then please inform me as well.

    Read the article

  • XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4

    - by danbystrom
    Beginner question. Synopsis: my water effect does something that causes the drawing of my sky sphere to throw an exception when run in full screen. The exception is: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." This happens both when I start in full screen directly and when I switch to full screen from windowed mode. It does NOT happen, however, if I comment out the drawing of my water. So, what in my water effect could possibly cause the drawing of my sky sphere to choke?
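
    For context, that error normally means something is sampling a Vector4 (floating-point) texture or render target with linear filtering, which the HiDef profile forbids for that format. If the water pass leaves such a texture or render target bound, two hedged things to try before drawing the sky sphere (standard XNA 4.0 calls; which sampler slot is involved depends on your effect):

        // Force point sampling on the slot that still has the Vector4 texture bound...
        GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

        // ...or simply unbind the water pass's textures/render targets first.
        GraphicsDevice.Textures[0] = null;
        GraphicsDevice.SetRenderTarget(null);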

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
    I am working on an OpenGL project that requires an object selection feature. I am using the OpenTK framework to do this; however, OpenTK doesn't provide the glu.PickMatrix() method for defining the picking region. I ended up googling its implementation, and here is what I got:

        void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport)
        {
            if (deltax <= 0 || deltay <= 0)
            {
                return;
            }
            GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax, (viewport[3] - 2 * (y - viewport[1])) / deltay, 0);
            GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0);
        }

    I totally fail to understand this piece of code. Moreover, it doesn't work with my following code sample:

        //selectbuffer
        private int[] _selectBuffer = new int[512];

        private void Init()
        {
            float[] triangleVertices = new float[] { 0.0f, 1.0f, 0.0f, -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f };
            float[] _triangleColors = new float[] { 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f };

            GL.GenBuffers(2, _vBO);
            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw);
            GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw);
            GL.ColorPointer(3, ColorPointerType.Float, 0, 0);
            GL.EnableClientState(ArrayCap.VertexArray);
            GL.EnableClientState(ArrayCap.ColorArray);

            //Selectbuffer set up
            GL.SelectBuffer(512, _selectBuffer);
        }

        private void glControlWindow_Paint(object sender, PaintEventArgs e)
        {
            GL.Clear(ClearBufferMask.ColorBufferBit);
            GL.Clear(ClearBufferMask.DepthBufferBit);

            float[] eyes = { 0.0f, 0.0f, -10.0f };
            float[] target = { 0.0f, 0.0f, 0.0f };
            Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degree = 0.785398163 rads
            Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0);
            Matrix4 model = Matrix4.Identity;
            Matrix4 MV = view * model;

            //First Clear Buffers
            GL.Clear(ClearBufferMask.ColorBufferBit);
            GL.Clear(ClearBufferMask.DepthBufferBit);

            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            GL.LoadMatrix(ref projection);
            GL.MatrixMode(MatrixMode.Modelview);
            GL.LoadIdentity();
            GL.LoadMatrix(ref MV);
            GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height);
            GL.Enable(EnableCap.DepthTest);   //Enable correct Z Drawings
            GL.DepthFunc(DepthFunction.Less); //Enable correct Z Drawings

            GL.MatrixMode(MatrixMode.Modelview);
            GL.PushMatrix();
            GL.Translate(3.0f, 0.0f, 0.0f);
            DrawTriangle();
            GL.PopMatrix();
            GL.PushMatrix();
            GL.Translate(-3.0f, 0.0f, 0.0f);
            DrawTriangle();
            GL.PopMatrix();

            //Finally...
            GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not over run GPU
            glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control
        }

        private void DrawTriangle()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
            GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
            GL.EnableClientState(ArrayCap.VertexArray);
            GL.DrawArrays(BeginMode.Triangles, 0, 3);
            GL.DisableClientState(ArrayCap.VertexArray);
        }

        //mouse click event implementation
        private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            //Enter Select mode. Pretend drawing.
            GL.RenderMode(RenderingMode.Select);
            int[] viewport = new int[4];
            GL.GetInteger(GetPName.Viewport, viewport);
            GL.PushMatrix();
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            GluPickMatrix(e.X, e.Y, 5, 5, viewport);
            Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // same projection matrix as in glControlWindow_Paint
            GL.LoadMatrix(ref projection);
            GL.MatrixMode(MatrixMode.Modelview);

            int i = 0;
            int hits;

            GL.PushMatrix();
            GL.Translate(3.0f, 0.0f, 0.0f);
            GL.PushName(i);
            DrawTriangle();
            GL.PopName();
            GL.PopMatrix();

            i++;

            GL.PushMatrix();
            GL.Translate(-3.0f, 0.0f, 0.0f);
            GL.PushName(i);
            DrawTriangle();
            GL.PopName();
            GL.PopMatrix();

            hits = GL.RenderMode(RenderingMode.Render);
            //.....hits processing code goes here...
            GL.PopMatrix();
            glControlWindow.Invalidate();
        }

    I expect to get only one hit every time I click inside a triangle, but I always get 2, no matter where I click. I suspect there is something wrong with the implementation of GluPickMatrix, but I haven't figured it out yet.

    Read the article

  • How to calculate continuous motion with angular velocity in 2d

    - by Rulk
    I'm really new to physics. Maybe someone can help me solve the following problem: I need to calculate the position of an agent on the plane (2D) at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion:

    - Initial position
    - Direction (normalised vector)
    - Velocity (a linear function of time); the object always moves along its direction
    - Angular velocity (a linear function of time)
    - Optional: external force direction
    - Optional: external force (a linear function of time)

    Running a discrete simulation with t → 0 is not an option.
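
    For the special case where speed and angular velocity are both constant over the step, the path is a circular arc with a closed form, and a 20-second step with linearly varying rates can be approximated by splitting it into a few such arcs (the fully general linearly-varying case leads to Fresnel-integral terms). A plain C# sketch of the constant-rate arc, derived from standard kinematics rather than taken from the post:

        using System;

        static class Kinematics
        {
            // Position and heading after time t for an agent moving at constant speed v
            // while turning at constant angular velocity omega (radians per second).
            // The heading is the angle of the direction vector in the plane.
            public static (double X, double Y, double Heading) Advance(
                double x0, double y0, double heading0, double v, double omega, double t)
            {
                double heading1 = heading0 + omega * t;

                if (Math.Abs(omega) < 1e-9)
                {
                    // Straight-line limit: effectively no turning over the step.
                    return (x0 + v * t * Math.Cos(heading0),
                            y0 + v * t * Math.Sin(heading0),
                            heading1);
                }

                // Circular arc: integrate v * (cos, sin)(heading0 + omega * tau) from 0 to t.
                double r = v / omega;
                return (x0 + r * (Math.Sin(heading1) - Math.Sin(heading0)),
                        y0 - r * (Math.Cos(heading1) - Math.Cos(heading0)),
                        heading1);
            }
        }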

    Read the article

  • How exactly does app ranking work?

    - by qweasdzxc1
    So I've been in the app industry for around half a year, and I still don't know exactly how ranking higher helps an app get more downloads. That sounds like a question with an obvious answer, but this is what's going through my mind, so hear me out: unless your app is ranked within the top 100, no one can see it in the featured categories. So even if my app jumped from 400th to 300th place, would there really be a difference in downloads? (And I mean 400th to 300th in my specific category; indie developers like me don't even come close to ranking in the overall category.) So far, the only use I can see for a higher rank is getting featured or something like that, but big companies have tons of money to throw at marketing, so the chances of an indie developer getting featured are slim. The only other thing I can see ranking being good for is ranking for your keywords, so that when someone searches for that word, your app will hopefully appear in the top 10-25 results. Can anyone confirm my thoughts or add anything I might have missed? How exactly do users find your app if you're not in the top 100 apps in your category?

    Read the article
