Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

Page 376 of 772

  • How to rotate a set of points on z = 0 plane in 3-D, preserving pairwise distances?

    - by cagirici
    I have a set of points double[] n on the plane z = 0, and another set of points double[] m on the plane ax + by + cz + d = 0. The length of n equals the length of m, and the Euclidean distance between n[i] and n[j] equals the Euclidean distance between m[i] and m[j]. I want to rotate n in 3-D such that n[i] = m[i] holds for all i. In other words, I want to map one plane onto another while preserving the pairwise distances. Here's my Java code, but it doesn't get me very far:

        double[] rotate(double[] point, double[] currentEquation, double[] targetEquation) {
            double[] currentNormal = new double[]{currentEquation[0], currentEquation[1], currentEquation[2]};
            double[] targetNormal = new double[]{targetEquation[0], targetEquation[1], targetEquation[2]};
            targetNormal = normalize(targetNormal);
            double angle = angleBetween(currentNormal, targetNormal);
            double[] axis = cross(targetNormal, currentNormal);
            double[][] R = getRotationMatrix(axis, angle);
            return rotated;
        }

        double[][] getRotationMatrix(double[] axis, double angle) {
            axis = normalize(axis);
            double cA = (float)Math.cos(angle);
            double sA = (float)Math.sin(angle);
            Matrix I = Matrix.identity(3, 3);
            Matrix a = new Matrix(axis, 3);
            Matrix aT = a.transpose();
            Matrix a2 = a.times(aT);
            double[][] B = {
                {0, axis[2], -1*axis[1]},
                {-1*axis[2], 0, axis[0]},
                {axis[1], -1*axis[0], 0}
            };
            Matrix A = new Matrix(B);
            Matrix R = I.minus(a2);
            R = R.times(cA);
            R = R.plus(a2);
            R = R.plus(A.times(sA));
            return R.getArray();
        }

    This is what I get. The point set on the right side is actually part of the point set on the left side, but it lies on another plane. Here's a 2-D representation of what I am trying to do: there are two lines; the bottom line is the one I have and the top line is the target. The distances (a, b and c) are preserved. Edit: I have tried both methods written in the answers, and they both fail (I guess):
    Method of Martijn Courteaux

        public static double[][] getRotationMatrix(double[] v0, double[] v1, double[] v2, double[] u0, double[] u1, double[] u2) {
            RealMatrix M1 = new Array2DRowRealMatrix(new double[][]{
                {1,0,0,-1*v0[0]},
                {0,1,0,-1*v0[1]},
                {0,0,1,0},
                {0,0,0,1}
            });
            RealMatrix M2 = new Array2DRowRealMatrix(new double[][]{
                {1,0,0,-1*u0[0]},
                {0,1,0,-1*u0[1]},
                {0,0,1,-1*u0[2]},
                {0,0,0,1}
            });
            Vector3D imX = new Vector3D(
                (v0[1] - v1[1])*(u2[0] - u0[0]) - (v0[1] - v2[1])*(u1[0] - u0[0]),
                (v0[1] - v1[1])*(u2[1] - u0[1]) - (v0[1] - v2[1])*(u1[1] - u0[1]),
                (v0[1] - v1[1])*(u2[2] - u0[2]) - (v0[1] - v2[1])*(u1[2] - u0[2])
            ).scalarMultiply(1/((v0[0]*v1[1])-(v0[0]*v2[1])-(v1[0]*v0[1])+(v1[0]*v2[1])+(v2[0]*v0[1])-(v2[0]*v1[1])));
            Vector3D imZ = new Vector3D(findEquation(u0, u1, u2));
            Vector3D imY = Vector3D.crossProduct(imZ, imX);
            double[] imXn = imX.normalize().toArray();
            double[] imYn = imY.normalize().toArray();
            double[] imZn = imZ.normalize().toArray();
            RealMatrix M = new Array2DRowRealMatrix(new double[][]{
                {imXn[0], imXn[1], imXn[2], 0},
                {imYn[0], imYn[1], imYn[2], 0},
                {imZn[0], imZn[1], imZn[2], 0},
                {0, 0, 0, 1}
            });
            RealMatrix rotationMatrix = MatrixUtils.inverse(M2).multiply(M).multiply(M1);
            return rotationMatrix.getData();
        }

    Method of Sam Hocevar

        static double[][] makeMatrix(double[] p1, double[] p2, double[] p3) {
            double[] v1 = normalize(difference(p2,p1));
            double[] v2 = normalize(cross(difference(p3,p1), difference(p2,p1)));
            double[] v3 = cross(v1, v2);
            double[][] M = {
                { v1[0], v2[0], v3[0], p1[0] },
                { v1[1], v2[1], v3[1], p1[1] },
                { v1[2], v2[2], v3[2], p1[2] },
                { 0.0, 0.0, 0.0, 1.0 }
            };
            return M;
        }

        static double[][] createTransform(double[] A, double[] B, double[] C, double[] P, double[] Q, double[] R) {
            RealMatrix c = new Array2DRowRealMatrix(makeMatrix(A,B,C));
            RealMatrix t = new Array2DRowRealMatrix(makeMatrix(P,Q,R));
            return MatrixUtils.inverse(c).multiply(t).getData();
        }

    The blue points are the calculated points. The black lines indicate the offset from the real position.


  • View matrix in OpenGL

    - by user5584
    Hi! Sorry for my clumsy question, but I can't see where I'm going wrong when building my view matrix. I have the following code:

        createMatrix(vec4f( xAxis.x,   xAxis.y,   xAxis.z,   dot(xAxis,eye)),
                     vec4f( yAxis.x_,  yAxis.y_,  yAxis.z_,  dot(yAxis,eye)),
                     vec4f(-zAxis.x_, -zAxis.y_, -zAxis.z_, -dot(zAxis,eye)),
                     vec4f(0, 0, 0, 1)); //column1, column2,...

    I have tried transposing it, with no success. gluLookAt(...) does work for me. I checked the remarks about the resulting matrix on the reference page, and it seems to be the same as mine. Where am I going wrong?
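    For reference, and purely as background rather than a diagnosis of the code above: the combined look-at matrix described in the gluLookAt documentation, with f = normalize(center - eye), s = normalize(f x up) and u = s x f, is (written row by row)

        [  s.x   s.y   s.z   -dot(s, eye) ]
        [  u.x   u.y   u.z   -dot(u, eye) ]
        [ -f.x  -f.y  -f.z    dot(f, eye) ]
        [  0     0     0      1           ]

    Note that the translation terms in the first two rows are negated dot products, and how these rows map onto column-major storage depends on whether the matrix is transposed when it is loaded.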


  • Perpendicularity of a normal and a velocity?

    - by Milo
    I'm trying to fake angular velocity on my vehicle when it hits a wall by taking the dot product of the normal of the edge the car is hitting and the vehicle's velocity:

        Vector2D normVel = new Vector2D();
        normVel.equals(vehicle.getVelocity());
        normVel.normalize();
        float dot = normVel.dot(outNorm);
        dot = -dot;
        vehicle.setAngularVelocity(vehicle.getAngularVelocity() + (dot * vehicle.getVelocity().length() * 0.01f));

    outNorm is the normal of the wall. The problem is that it only works half the time; it seems that no matter what, the car always turns clockwise. If the car should head clockwise:

        --------------------------------------
                 /
                /

    I want the angular velocity to be positive; otherwise, if it needs to go CCW:

        --------------------------------------
                 \
                  \

    then the angular velocity should be negative. What should I change to achieve this? Thanks. Hmmm... I'm not sure why this is not working either:

        for(int i = 0; i < buildings.size(); ++i) {
            e = buildings.get(i);
            ArrayList<Vector2D> colPts = vehicle.getRect().getCollsionPoints(e.getRect());
            float dist = OBB2D.collisionResponse(vehicle.getRect(), e.getRect(), outNorm);
            for(int u = 0; u < colPts.size(); ++u) {
                Vector2D p = colPts.get(u).subtract(vehicle.getRect().getCenter());
                vehicle.setTorque(vehicle.getTorque() + p.cross(outNorm));
            }
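    For what it's worth, the dot product is symmetric, so it cannot distinguish left from right; the usual trick for a signed turn direction in 2D is the cross product (perp-dot) of the two vectors. A minimal sketch, assuming a Vector2D with public x and y fields (the helper name is made up):

        // Sign of the 2D cross product tells which side the velocity is on
        // relative to the wall normal: +1 one way, -1 the other, 0 head-on.
        static float turnDirection(Vector2D velocity, Vector2D wallNormal) {
            float cross = velocity.x * wallNormal.y - velocity.y * wallNormal.x;
            return Math.signum(cross);
        }

    The magnitude of the fake angular impulse can still come from the dot product; only its sign needs to come from the cross product.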


  • Cocos2d-x In-App Purchase for iOS

    - by Ahmad dar
    I am trying to integrate In-App Purchases in my cocos2d-x C++ app. I am using the easyNdk helper for In-App Purchases. My In-App Purchases work perfectly in my Objective-C apps, but with cocos2d-x the following line throws an error:

        if ([[RageIAPHelper sharedInstance] productPurchased:productP.productIdentifier])

    The value arrives from the CPP file correctly as an argument and prints properly in NSLog, but the object always shows up as nil even though it prints its stored value in NSLog. A @try/@catch block does not catch it either, and it finally throws the following error. Please help me, what do I have to do? Thanks


  • Generating Normal map from a Image with a given Albedo map

    - by snape
    I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve this, and CrazyBump is mentioned a lot. I tried it, but it didn't produce the desired effects. I am also looking for a method that draws inspiration from an existing research paper rather than some closed-source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects because of the bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focused my attention on generating an albedo map (the above problem becomes more solvable if an albedo map is available). Fortunately I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks!
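    As general background (independent of the specific papers mentioned above): under a Lambertian assumption with a single distant light of direction L and intensity s, the image is approximately I(p) = albedo(p) * s * max(0, N(p) . L), so dividing the image by the recovered albedo isolates a shading term that constrains N(p) . L at every pixel. That constraint is the usual starting point of shape-from-shading-style normal estimation; the remaining ambiguity in N(p) is typically resolved with smoothness or integrability priors.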


  • How can I place a ProgressBar in Android using Cocos 2d?

    - by Laxmipriya
    I want to place a horizontal progress bar in my Android application and I want to change its progress color. I used the following code, but the progress bar is not being displayed:

        CCProgressTimer progressBar = CCProgressTimer.progress("progressbar.png");
        progressBar.setType(kCCProgressTimerTypeHorizontalBarLR);
        progressBar.setScale(5);
        progressBar.setAnchorPoint(CGPoint.ccp(0, 0));
        progressBar.setPosition(CGPoint.ccp(0, 0));
        addChild(progressBar);


  • Ledge grab and climb in Unity3D

    - by BallzOfSteel
    I just started on a new project. One of its main gameplay mechanics is that you can grab a ledge at certain points in a level and hang on to it. Now my question, since I've been wrestling with this for quite a while: how could I actually implement this? I have tried it with animations, but it looks really ugly because the player snaps to a certain point where the animation starts.


  • Which K-factor should be used in case players have different K-factor values

    - by DmitryN
    I am implementing a ranking system based on the Elo rating and cannot figure out one point about the K-factor. If two players with different skills, and therefore different K-factors, are playing, exactly which K-factor should be used when changing their ratings? For example, player A has rating 2500 and K-factor 16 (win probability ~75%) while player B has rating 2300 and K-factor 24 (win probability ~25%). If player A wins, do I need to use 16 as the K-factor for both players, or 16 for player A and 24 for player B?
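    For reference, the usual Elo convention is that each player's rating change uses that player's own K-factor; only the expected score formula is shared. A minimal sketch (generic, not tied to any particular federation's rules):

        // Expected score of the player rated 'rating' against 'opponentRating'.
        static double expectedScore(double rating, double opponentRating) {
            return 1.0 / (1.0 + Math.pow(10.0, (opponentRating - rating) / 400.0));
        }

        // score is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
        static double newRating(double rating, double opponentRating, double kFactor, double score) {
            return rating + kFactor * (score - expectedScore(rating, opponentRating));
        }

        // Player A (K = 16) and player B (K = 24) are updated independently:
        // newRating(2500, 2300, 16, 1.0) for A and newRating(2300, 2500, 24, 0.0) for B.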


  • Drawing beam effect in UDK?

    - by sgrif
    I'm having trouble drawing a particle effect between two actors in UDK. Both the source and the target are not static objects, so as far as I can tell I need to do it in code, not in Kismet. Here's what I've got at the moment, and it seems to not be doing anything at all. Ideas?

        BeamEmitter[0] = new(self) class'UTParticleSystemComponent';
        BeamEmitter[0].SetAbsolute(false, false, false);
        BeamEmitter[0].SetTemplate(BeamTemplate[0]);
        BeamEmitter[0].SetTickGroup(TG_PostUpdateWork);
        BeamEmitter[0].bUpdateComponentInTick = true;
        self.AttachComponent(BeamEmitter[0]);
        BeamEmitter[0].SetBeamEndPoint(2, tarPos);
        BeamEmitter[0].ActivateSystem();


  • Where are all the tutorials for libGDX?

    - by BluFire
    I've searched online for help and tutorials on LibGDX but couldn't really find any, apart from the wiki and asking questions on Stack Exchange. Besides the source (demos) and the wiki, are there any other tutorials online that are hidden or indirect? From what I've read, there isn't much documentation for LibGDX, so I only see two options: give up and move to a different framework, or ask people a lot of questions.


  • Keyboard input system handling

    - by The Communist Duck
    Note: I have to poll, rather than use callbacks, because of API limitations (SFML). I also apologize for the lack of a 'decent' title. I think I have two questions here: how to register the input I'm receiving, and what to do with it.

    Handling input. I'm talking about after the fact you've registered that the 'A' key has been pressed, for example, and how to proceed from there. I've seen an array of the whole keyboard, something like:

        bool keyboard[256]; // and each input loop, check the state of every key on the keyboard

    But this seems inefficient. Not only are you coupling the key 'A' to 'player moving left', for example, but it checks every key 30-60 times a second. I then tried another system which just looked for the keys it wanted:

        std::map<unsigned char, Key> keyMap; // Key stores the keycode, and whether it's been pressed

    Then I declare a load of const unsigned char called 'Quit' or 'PlayerLeft':

        input->BindKey(Keys::PlayerLeft, KeyCode::A); // so now you can check for PlayerLeft rather than for A

    However, the problem with this is that I cannot now type a name, for example, without having to bind every single key.

    Then I have the second problem, which I cannot really think of a good solution for: sending input. I now know that the A key has been pressed, or that playerLeft is true, but how do I go on from here? I thought about just checking:

        if (input->IsKeyDown(Key::PlayerLeft)) { player.MoveLeft(); }

    This couples the input greatly to the entities, and I find it rather messy. I'd prefer the player to handle its own movement when it gets updated. I thought some kind of event system could work, but I do not know how to go about it. (I heard signals and slots was good for this kind of work, but it's apparently very slow and I cannot see how it'd fit.) Thanks.
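    The binding idea itself is language-agnostic; a minimal sketch in Java of mapping raw key codes to named actions and queuing action events that interested objects drain during their update (all class and method names here are made up for illustration, not part of SFML or any library):

        import java.util.ArrayDeque;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Queue;

        // Gameplay code only ever sees Action values, never raw key codes.
        enum Action { MOVE_LEFT, MOVE_RIGHT, QUIT }

        class InputMapper {
            private final Map<Integer, Action> bindings = new HashMap<>();
            private final Queue<Action> events = new ArrayDeque<>();

            void bind(int keyCode, Action action) { bindings.put(keyCode, action); }

            // Called once per polled key press; unbound keys are simply ignored,
            // so free-text entry can be handled by a separate text-input mode.
            void onKeyPressed(int keyCode) {
                Action action = bindings.get(keyCode);
                if (action != null) events.add(action);
            }

            Action poll() { return events.poll(); } // null when nothing is pending
        }

    The player (or any other listener) drains the queue in its own update and reacts to MOVE_LEFT itself, so the input layer never references player methods directly.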


  • C# WPF Helix scale-based mesh parenting using Transform3DGroup

    - by Rick2047
    I am using https://helixtoolkit.codeplex.com/ as a 3D framework. I want to move the black mesh relative to the green mesh as shown in the attached image below. I want to make the green mesh a parent of the black mesh, so that a change in the scale of the green mesh also results in motion of the black mesh. It could be partial parenting, or maybe more: I need 3D rotation and 3D translation, plus a translation along the green mesh's length axis, for the black mesh relative to the green mesh itself. Suppose a variable green_mesh_scale scales the green mesh along its length axis; the black mesh should use that variable to move along the green mesh's length axis. How do I go about it? I've done the following:

        GeometryModel3D GreenMesh, BlackMesh;
        ...
        double green_mesh_scale = e.NewValue;
        Transform3DGroup forGreen = new Transform3DGroup();
        Transform3DGroup forBlack = new Transform3DGroup();
        forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
        // ... transforms for rotation and translation
        GreenMesh.Transform = forGreen;
        forBlack = forGreen;
        forBlack.Children.Add(new TranslateTransform3D(new Vector3D(0, green_mesh_scale, 0)));
        BlackMesh.Transform = forBlack;

    The problem with this is that the scale transform is also applied to the black mesh. I think I just need to avoid the scale part. I tried keeping all the transforms except the scale in another Transform3DGroup variable, but that also does not behave as expected. Can MatrixTransform3D be used here somehow? Also, please suggest whether this question should be posted somewhere else on Stack Exchange.


  • Why do my LWJGL fonts have dots and lines around them?

    - by Jordan
    When we render fonts there are weird dots and lines around the text, and I have no idea why this would happen. Here is an image of what it looks like:

    Our font class looks like this:

        package me.NJ.ComputerTycoon.Font;

        import me.NJ.ComputerTycoon.BaseObjects.UDim2;

        import org.lwjgl.opengl.Display;
        import org.newdawn.slick.Color;
        import org.newdawn.slick.TrueTypeFont;

        public class Font {

            public TrueTypeFont font;
            private int fontSize = 18;
            private String fontName = "Calibri";
            private int fontStyle = java.awt.Font.BOLD;

            public Font(String fontName, int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
                //font.
            }

            public Font(int fontStyle, int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font(int fontSize) {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public Font() {
                font = new TrueTypeFont(new java.awt.Font(fontName, fontStyle, fontSize), true);
            }

            public void drawString(int x, int y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(int x, int y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(float x, float y, String s, Color color) {
                this.font.drawString(x, y, s, color);
            }

            public void drawString(float x, float y, String s) {
                this.font.drawString(x, y, s);
            }

            public void drawString(UDim2 udim, String s) {
                this.font.drawString((Display.getWidth() * udim.getX().getScale()) + udim.getX().getOffset(),
                        (Display.getHeight() * udim.getY().getScale()) + udim.getY().getOffset(), s);
            }

            public String getFontName() {
                return this.fontName;
            }

            public int getFontSize() {
                return this.fontSize;
            }

            public TrueTypeFont getFont() {
                return this.font;
            }
        }

    What could be causing this?


  • Developing a long pannable, sprite-animated Windows Store app

    - by Groo
    I am creating my first Windows Store app in XAML, and I cannot seem to find a proper example for the requirements I have (I have spent a couple of days fiddling around, so I apologize if I missed something obvious). The basic idea of the app is to have a large scrollable canvas which lazily starts animating the visible parts of the view as soon as the user stops panning over certain content (with some audio played as well). My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns:

        If the entire canvas is ~50 screen widths wide, is it feasible to load all the content at the beginning, or do I need to plan on doing some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll towards the end.

        Since the content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker.

    Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?


  • Why is a fully transparent pixel still rendered?

    - by Mr Bell
    I am trying to make a pixel shader that achieves an effect similar to this video: http://www.youtube.com/watch?v=f1uZvurrhig&feature=related. My basic idea is to render the scene to a temp render target, then:

        1. Render the previously rendered image, with a slight fade, onto another temp render target
        2. Draw the current scene on top of that
        3. Draw the results onto a render target that persists between draws
        4. Draw the results onto the screen

    But I am having problems with the fading portion. If I have my pixel shader return a color with its A component set to 0, shouldn't that basically amount to drawing nothing (assuming the sprite batch blend mode is set to AlphaBlend)? To test this I have my pixel shader return a transparent red color. Instead of nothing being drawn, it draws a partially transparent red box. I hope that my question makes sense, but if it doesn't please ask me to clarify. Here is the drawing code:

        public override void Draw(GameTime gameTime)
        {
            GraphicsDevice.SamplerStates[1] = SamplerState.PointWrap;
            drawImageOnClearedRenderTarget(presentationTarget, tempRenderTarget, fadeEffect);
            drawImageOnRenderTarget(sceneRenderTarget, tempRenderTarget);
            drawImageOnClearedRenderTarget(tempRenderTarget, presentationTarget);
            GraphicsDevice.SetRenderTarget(null);
            drawImage(backgroundTexture);
            drawImage(presentationTarget);
            base.Draw(gameTime);
        }

        private void drawImage(Texture2D image, Effect effect = null)
        {
            spriteBatch.Begin(0, BlendState.AlphaBlend, SamplerState.PointWrap, null, null, effect);
            spriteBatch.Draw(image, new Rectangle(0, 0, width, height), Color.White);
            spriteBatch.End();
        }

        private void drawImageOnRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            drawImage(image, effect);
        }

        private void drawImageOnClearedRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            GraphicsDevice.Clear(Color.Transparent);
            drawImage(image, effect);
        }

    Here is the fade pixel shader:

        sampler TextureSampler : register(s0);

        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = 0;
            c = tex2D(TextureSampler, texCoord);
            //c.a = clamp(c.a - 0.05, 0, 1);
            c.r = 1;
            c.g = 0;
            c.b = 0;
            c.a = 0;
            return c;
        }

        technique Fade
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
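    One general fact about XNA 4 that may be relevant here (offered as background, not as a diagnosis of this exact code): BlendState.AlphaBlend assumes premultiplied alpha, i.e. it blends roughly as dest = src.rgb + dest.rgb * (1 - src.a). With src = (1, 0, 0, 0) that evaluates to dest = red + dest, so the red is added even though alpha is 0; a pixel only vanishes under this blend state if its RGB components are scaled down by alpha as well (or if BlendState.NonPremultiplied is used instead).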


  • Will I have an easier time learning OpenGL in Pygame or Pyglet? (NeHe tutorials downloaded)

    - by shadowprotocol
    I'm looking between PyGame and Pyglet. Pyglet seems to be somewhat newer and more Pythonic, but its last release, according to Wikipedia, is January '10. PyGame seems to have more documentation, more recent updates, and more published books/tutorials on the web for learning. I downloaded both the Pyglet and PyGame versions of the NeHe OpenGL tutorials (lessons 1-10), which cover this material:

        lesson01 - Setting up the window
        lesson02 - Polygons
        lesson03 - Adding color
        lesson04 - Rotation
        lesson05 - 3D
        lesson06 - Textures
        lesson07 - Filters, lighting, input
        lesson08 - Blending (transparency)
        lesson09 - 2D sprites in 3D
        lesson10 - Moving in a 3D world

    What do you guys think? Is my hunch that I'll be better off working with PyGame somewhat warranted?


  • Sprite and Physics components or sub-components?

    - by ashes999
    I'm taking my first dive into creating a very simple entity framework. The key concepts (classes) are:

        Entity (has 0+ components, can return components by type)
        SpriteEntity (everything you need to draw on screen, including lighting info)
        PhysicsEntity (velocity, acceleration, collision detection)

    I started out with physics notions in my sprite component, and later moved them to a sub-component. The separation of concerns makes sense: a sprite is enough information to draw anything (X, Y, width, height, lighting, etc.), and physics piggybacks on it (uses the parent sprite to get X/Y/W/H) while adding the physics notions of velocity and collisions. The problem is that I would like collisions to be at the entity level -- meaning "no matter what your representation is (be it sprites, text, or something else), collide against this entity." So I refactored and redirected collision handling from entities to sprite.physics, while mapping and returning the right entity on physics collisions. The problem is that writing code like this.GetComponent<SpriteComponent>().physics is a violation of abstraction. Which made me think (this is the TL;DR): should I keep physics as a separate component from sprites, or a sub-component, or something else? How should I share data and separate concerns?
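    One common arrangement is to keep sprite and physics as flat, sibling components and resolve components by type on the entity, so collision code never has to reach through the sprite. A minimal sketch in Java (all class and method names are made up for illustration):

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical flat component model: sprite and physics are siblings,
        // and shared data (position/size) lives on the entity itself.
        class Entity {
            final Map<Class<?>, Object> components = new HashMap<>();
            float x, y, width, height; // shared by rendering and physics

            <T> void add(Class<T> type, T component) { components.put(type, component); }

            <T> T get(Class<T> type) { return type.cast(components.get(type)); }
        }

        class SpriteComponent { /* texture, lighting info, draw(Entity owner), ... */ }

        class PhysicsComponent {
            float vx, vy;

            // Axis-aligned overlap test using the entities' shared bounds.
            boolean collidesWith(Entity self, Entity other) {
                return self.x < other.x + other.width && other.x < self.x + self.width
                    && self.y < other.y + other.height && other.y < self.y + self.height;
            }
        }

    With this layout, collision queries go through entity.get(PhysicsComponent.class) rather than through the sprite, and an entity with no sprite at all can still collide.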


  • How can I make an object's hitbox rotate with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a rectangular sprite that doesn't rotate, it's easy to use its four corners to make a hitbox. However, when you apply a rotation, the points get moved, and I assume there is some kind of math I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitboxes. I have seen some per-pixel collision examples, but I can foresee that they would be hard to draw a box/'convex hull' around. I have also seen physics engines like Farseer, but I'm not sure if there is a quick tutorial to do what I want.
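    The math is the standard 2D rotation of each corner about the sprite's center. A minimal sketch (written in Java for illustration; the formula is identical in C#/XNA), assuming the angle is in radians:

        // Rotate the four corners of an axis-aligned rectangle about its center.
        // Returns {x0,y0, x1,y1, x2,y2, x3,y3} for the rotated corners.
        static float[] rotatedCorners(float x, float y, float width, float height, float angle) {
            float cx = x + width / 2f, cy = y + height / 2f;
            float cos = (float) Math.cos(angle), sin = (float) Math.sin(angle);
            float[] corners = { x, y,  x + width, y,  x + width, y + height,  x, y + height };
            float[] out = new float[8];
            for (int i = 0; i < 8; i += 2) {
                float dx = corners[i] - cx, dy = corners[i + 1] - cy;
                out[i]     = cx + dx * cos - dy * sin; // x' = cx + dx*cos(a) - dy*sin(a)
                out[i + 1] = cy + dx * sin + dy * cos; // y' = cy + dx*sin(a) + dy*cos(a)
            }
            return out;
        }

    The same four rotated points can then be fed to whatever line-drawing call is used to visualize the hitbox.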


  • Per-pixel based collision detection

    - by pengume
    I was wondering if anyone had any ideas on how to do per-pixel collision detection on Android. I saw that AndEngine has great collision detection, handling rotation as well, but I couldn't see where the actual per-pixel detection happens. I also noticed a couple of solutions for Java; could these be replicated for use with the Android SDK? Maybe someone here has a clean piece of code I could look at to help understand what is going on with per-pixel detection, and also why it is a different process when rotating.
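    The usual unrotated approach is to intersect the two bounding boxes and then compare opacity masks inside the overlap; rotation is a different process because each mask first has to be sampled through the sprite's inverse transform. A minimal sketch of the axis-aligned case, assuming each sprite exposes its top-left position and a boolean opacity mask (all names here are made up, not AndEngine API):

        // Axis-aligned per-pixel test: only pixels inside the rectangle overlap are compared.
        // mask[row][col] is true where the sprite is opaque.
        static boolean pixelCollision(int ax, int ay, boolean[][] aMask,
                                      int bx, int by, boolean[][] bMask) {
            int left   = Math.max(ax, bx);
            int top    = Math.max(ay, by);
            int right  = Math.min(ax + aMask[0].length, bx + bMask[0].length);
            int bottom = Math.min(ay + aMask.length,    by + bMask.length);
            for (int y = top; y < bottom; y++) {
                for (int x = left; x < right; x++) {
                    if (aMask[y - ay][x - ax] && bMask[y - by][x - bx]) {
                        return true; // two opaque pixels share the same world position
                    }
                }
            }
            return false; // no overlap, or the overlap is fully transparent in one sprite
        }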


  • AABB Sweeping, algorithm to solve "stacking box" problem

    - by Ivo Wetzel
    I'm currently working on a simple AABB collision system, and after some fiddling, sweeping a single box against another and calculating the response velocity needed to push them apart work flawlessly. Now on to the new problem: imagine I have a stack of boxes falling towards a ground box which isn't moving. Each of these boxes has a vertical velocity for the "gravity" value; let's say this velocity is 5. The result is that they all fall into each other. The reason is obvious: since all the boxes have a downward velocity of 5, there are no collisions when calculating the relative velocity between the boxes during sweeping. Note: the red ground box here is static (always zero velocity, can utilize spatial partitioning), and all dynamic-static collisions are resolved first, hence the fact that the boxes stop correctly at this ground box. So, this seems to be simply an issue with the order in which the boxes are swept against each other. I imagine that sorting the boxes based on their x and y velocities and then sweeping these groups correctly against each other may resolve this issue. So, I'm looking for algorithms / examples on how to implement such a system. The code can be found here: https://github.com/BonsaiDen/aabb - the two files of interest are box/Dynamic.lua and box/Manager.lua. The project is using Love2D in case you want to run it.


  • How can I author objects with perspective that fit into a tile-based map but span multiple tiles?

    - by Growler
    I'm creating a tilemap city and trying to figure out the most efficient way to create unique building scenes. The trick is, I need to maintain a sort of 2D, almost-top-down perspective, which is hard to do with buildings or large objects that span multiple tiles. I've tried doing three buildings at a time, and mixing and matching the base layer and colors, like this: This creates a weird overlapping effect, and also doesn't seem that efficient from a production standpoint. But it was the best way to have shadows appear correctly on the neighboring buildings. I'm wondering if modular buildings would be the way to go? That way I can mix and match any set of buildings together as tiles: I guess I would have to risk some perspective and shadowing to get the buildings to align correctly. What sort of authoring process could I use to allow me to create a variety of buildings (or other objects) that maintain this perspective while spanning multiple tiles worth of screen space? Would you recommend creating blank buildings, and then affixing art overlays as necessary to make the buildings unique? Or should they be directly part of the building tile (for example, create a separate tileset of buildings signs and colorings)?


  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems --no matter what I try-- as if I would first create a rendering from the "front view", then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos:    vec3 for the ray starting position in world-space (calculated as below)
        rayDir:    vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep:   a temporary vec3
        camPos:    vec3 for the camera position in world space
        camRad:    vec3 for camera rotation in radians
        pmat:      typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also
        // the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.


  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance? "Frequent" binding example:

        for each object
            for each material in the object
                render the part of the object using that material

    "Low count" binding example:

        for each material
            for each object using that material
                render the part of the object using that material

    I'm planning to use an octree later, and with this "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
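    One way to get most of the "low count" benefit without reorganizing the scene data is to keep per-object submission but sort the visible draw calls by shader/texture each frame, so consecutive calls share state and binds only happen on change. A minimal sketch in Java (the types and bind helpers are made up for illustration):

        import java.util.Comparator;
        import java.util.List;

        class MaterialSortedRenderer {
            // Hypothetical draw-call record produced by scene traversal (culling, octree, etc.).
            record DrawCall(int shaderId, int textureId, Runnable draw) {}

            static void render(List<DrawCall> calls) {
                // Sort so calls sharing a shader/texture end up adjacent.
                calls.sort(Comparator.comparingInt((DrawCall c) -> c.shaderId())
                                     .thenComparingInt(DrawCall::textureId));
                int boundShader = -1, boundTexture = -1;
                for (DrawCall call : calls) {
                    if (call.shaderId() != boundShader)   { bindShader(call.shaderId());   boundShader  = call.shaderId(); }
                    if (call.textureId() != boundTexture) { bindTexture(call.textureId()); boundTexture = call.textureId(); }
                    call.draw().run();
                }
            }

            static void bindShader(int id)  { /* glUseProgram(id) or engine equivalent */ }
            static void bindTexture(int id) { /* glBindTexture(...) or engine equivalent */ }
        }

    This keeps the memory layout of the scene untouched; only a per-frame list of visible draw calls is sorted.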


  • Sprites are sometimes blurry in Flash

    - by Tim Cooper
    I am playing around with drawing an SVG sprite (imported through [Embed]). Depending on the coordinates of the image, it sometimes appears crisper than at others. The following image shows how it is rendered differently at different locations: (Image link - you may have to download it and zoom in with an image editor to see it.) You'll notice that the middle sprite is blurrier than the ones on the sides. Does anyone know why this is? Any help would be appreciated.


  • Refactoring an immediate drawing function into VBO, access violation error

    - by Alex
    I have an MD2 model loader, and I am trying to substitute its immediate-mode drawing function with a Vertex Buffer Object one. I am getting a really annoying access violation reading error and can't figure out why, but mostly I'd like an opinion on whether this looks correct (I've never used VBOs before). This is the original function (which compiles OK) that calculates the keyframe and draws at the same time:

        glBegin(GL_TRIANGLES);
        for(int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for(int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                glNormal3f(normal[0], normal[1], normal[2]);
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                glVertex3f(pos[0], pos[1], pos[2]);
            }
        }
        glEnd();

    What I'd like to do is calculate all positions beforehand, store them in a vertex array and then draw them. This is what I am trying to replace it with (in the exact same part of the program):

        int vCount = 0;
        for(int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for(int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                indices[vCount] = normal[0]; vCount++;
                indices[vCount] = normal[1]; vCount++;
                indices[vCount] = normal[2]; vCount++;
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                indices[vCount] = texCoord->texCoordX; vCount++;
                indices[vCount] = texCoord->texCoordY; vCount++;
                indices[vCount] = pos[0]; vCount++;
                indices[vCount] = pos[1]; vCount++;
                indices[vCount] = pos[2]; vCount++;
            }
        }
        totalVertices = vCount;
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glNormalPointer(GL_FLOAT, 0, indices);
        glTexCoordPointer(2, GL_FLOAT, sizeof(float)*3, indices);
        glVertexPointer(3, GL_FLOAT, sizeof(float)*5, indices);
        glDrawElements(GL_TRIANGLES, totalVertices, GL_UNSIGNED_BYTE, indices);
        glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);

    First of all, does it look right? Second, I get the access violation error "Unhandled exception at 0x01455626 in Graphics_template_1.exe: 0xC0000005: Access violation reading location 0xed5243c0" pointing at line 7:

        Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;

    where the two Vs seem to have no value in the debugger. Up to this point the function behaves in exactly the same way as the one above; I don't understand why this happens. Thanks for any help you may be able to provide!

