Search Results

Search found 2146 results on 86 pages for 'camera calibration'.

Page 34/86 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique - I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.

    1: I can't understand at which point in the chunked LOD pipeline the mesh gets split into chunks. Does this happen during the initial mesh generation, or is there a separate algorithm that does it?

    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?

    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this (a raycast, maybe)?

    3b: Do the chunks calculate the camera distance themselves?

    4: Does each chunk have the same "resolution"? For example, if at the top level the mesh is 32x32, will each subdivided node also be 32x32? Example below:
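    On 3a, for what it's worth: a common pattern is to recurse the quadtree each frame and accept a node when its error is small enough for its distance, taking the distance from the node's AABB rather than from any physics collider. A minimal C# sketch, where Node, Bounds, GeometricError and maxScreenError are hypothetical names, not Unity or chunked-LOD API:

      // Recursively select which chunks to draw this frame.
      // A node is "good enough" when its geometric error, divided by its
      // distance to the camera, falls below the error budget.
      void SelectChunks(Node node, Vector3 cameraPos, List<Node> visible)
      {
          // Distance from the camera to the nearest point of the node's AABB.
          float dist = Vector3.Distance(cameraPos, node.Bounds.ClosestPoint(cameraPos));

          if (node.IsLeaf || node.GeometricError / (dist + 0.001f) < maxScreenError)
          {
              visible.Add(node); // draw this chunk's mesh (e.g. a 32x32 grid)
              return;
          }
          foreach (Node child in node.Children)
              SelectChunks(child, cameraPos, visible);
      }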

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away.

    Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near (small) end of the frustum and larger ones at the far end, sort of like this (overhead view). (Pardon the gap in the middle - I drew one side and mirrored it.) The triangle sizes are chosen so that all are approximately the same size when projected.

    That mesh would then be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain.

    Obviously this doesn't address terrain with overhangs, etc., but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LOD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height-map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
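    One detail worth sketching: to keep the projected triangle size roughly constant, the row spacing of the wedge has to grow geometrically with distance from the camera. A hedged sketch of generating the row distances (the ratio constant is made up for illustration):

      // Each row sits roughly 'ratio' times farther out than the previous one,
      // so a triangle's on-screen size stays approximately constant.
      static float[] BuildRowDistances(float near, float far, float ratio)
      {
          var rows = new List<float>();
          for (float d = near; d < far; d *= ratio) // e.g. ratio = 1.1f, ~10% growth per row
              rows.Add(d);
          rows.Add(far);
          return rows.ToArray();
      }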

    Read the article

  • Transform coordinates from 3d to 2d without matrix or built in methods

    - by Thomas
    Not too long ago I started to create a small 3D engine in JavaScript, to combine with an HTML5 canvas. One of the issues I ran into is: how can you transform 3D to 2D coords? Since I cannot use matrices or built-in transformation methods, I need another way. I've tried implementing the explanation + pseudocode here: http://freespace.virgin.net/hugo.elias/routines/3d_to_2d.htm

    Unfortunately, no luck there. I've replaced all the input variables with data from my own camera and object classes. I have the following data:

    - An object with a rotation vector, a position vector, and an array of four 3D coords (it's just a plane)
    - A camera with a position and rotation vector
    - The viewport - a square 600 x 600 surface

    The example uses a zoom factor, which I've set to 1. Most hits on Google use either matrix calculations or don't implement camera rotation. The basic transformation should be like this:

      screen.x = x / z * zoom
      screen.y = y / z * zoom

    Can anyone point me in the right direction or explain to me how to achieve this?

    edit: Thanks for all your posts. I haven't been able to apply all this to my project yet, but I hope to do so soon.
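    For reference, the usual matrix-free pipeline is: translate the point by the camera position, rotate it by the camera's rotation, then apply the perspective divide above. A sketch (in C# for compactness - the angle signs and rotation order are assumptions that depend on your conventions):

      // Project a world-space point to the 600x600 viewport without matrices.
      static bool Project(double x, double y, double z,
                          double camX, double camY, double camZ,
                          double yaw, double pitch, double zoom,
                          out double sx, out double sy)
      {
          // 1. Translate so the camera sits at the origin.
          double dx = x - camX, dy = y - camY, dz = z - camZ;

          // 2. Rotate into camera space: yaw about Y, then pitch about X.
          double x1 = Math.Cos(yaw) * dx - Math.Sin(yaw) * dz;
          double z1 = Math.Sin(yaw) * dx + Math.Cos(yaw) * dz;
          double y1 = Math.Cos(pitch) * dy - Math.Sin(pitch) * z1;
          double z2 = Math.Sin(pitch) * dy + Math.Cos(pitch) * z1;

          if (z2 <= 0) { sx = sy = 0; return false; } // behind the camera

          // 3. Perspective divide, then center on the 600x600 viewport.
          sx = 300 + x1 / z2 * zoom * 300;
          sy = 300 - y1 / z2 * zoom * 300;
          return true;
      }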

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is: how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment-mapping shader such a parameter doesn't make a lot of sense. The way I have it right now, every time I call the ModelMesh's Draw() function I set all the effect parameters, like so:

      foreach (ModelMesh m in model.Meshes)
      {
          // Slightly change the way the world matrix is calculated when using the
          // bunny object, since it is not quite centered in object space.
          if (isDrawBunny)
          {
              world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale)
                    * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform);
          }
          else // If not rendering the bunny, draw normally.
          {
              world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale)
                    * rotation * Matrix.CreateTranslation(position);
          }

          foreach (Effect e in m.Effects)
          {
              Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
              e.Parameters["ViewProjection"].SetValue(ViewProjection);
              e.Parameters["World"].SetValue(world);
              e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
              e.Parameters["CameraPosition"].SetValue(camera.Position);
              e.Parameters["LightColor"].SetValue(lightColor);
              e.Parameters["MaterialColor"].SetValue(materialColor);
              e.Parameters["shininess"].SetValue(shininess);
              //e.Parameters
              //e.Parameters["normal"]
          }
          m.Draw();
      }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is: is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
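    One hedged approach to the unique-parameter problem: ask the effect whether it actually declares a given uniform before setting it. In XNA the Parameters indexer returns null for a name the compiled shader doesn't expose (worth verifying against your content pipeline), so a guard like this lets one draw loop serve all shaders:

      // Set a float parameter only if this effect actually declares it.
      static void TrySet(Effect e, string name, float value)
      {
          EffectParameter p = e.Parameters[name]; // null when the shader lacks it
          if (p != null)
              p.SetValue(value);
      }

      // In the draw loop: TrySet(e, "shininess", shininess);
      // You can also enumerate e.Parameters to see which uniforms a given Effect wants.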

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, but probably not in the best of ways.

    For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered - it's that simple. No need to render a tile that won't be seen. But since you have 1000x1000 objects in your map, or perhaps fewer, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not.

    Question: What is the best way to implement this efficiently, so that it can determine quickly which tiles are supposed to be rendered?

    Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points - say a curved object of 10 points with a texture inside that shape.

    Question: How do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle - just see if its X+Width or Y+Height is in the view of the camera. It's different with multiple points.

    Simply put: how do you manage the code and the data efficiently so as not to loop through a million objects at the same time?
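    For the grid case, the usual trick is to invert the mapping rather than test every tile: compute which tile indices the camera rectangle covers and loop over only those. For free-form shapes, precompute an axis-aligned bounding box from the points once and test that rectangle against the camera. A sketch, where cam, tileSize, DrawTile and the map dimensions are assumed inputs:

      // Visible tile range for a uniform grid (tileSize in world units).
      int startX = Math.Max(0, (int)(cam.Left / tileSize));
      int startY = Math.Max(0, (int)(cam.Top  / tileSize));
      int endX   = Math.Min(mapWidth  - 1, (int)(cam.Right  / tileSize));
      int endY   = Math.Min(mapHeight - 1, (int)(cam.Bottom / tileSize));
      for (int y = startY; y <= endY; y++)
          for (int x = startX; x <= endX; x++)
              DrawTile(x, y);

      // For arbitrary shapes: one pass over the points gives a bounding box,
      // after which visibility is a cheap rectangle-vs-rectangle test.
      static RectangleF BoundsOf(PointF[] pts)
      {
          float minX = pts[0].X, minY = pts[0].Y, maxX = pts[0].X, maxY = pts[0].Y;
          foreach (var p in pts)
          {
              minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
              minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);
          }
          return new RectangleF(minX, minY, maxX - minX, maxY - minY);
      }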

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size using this formula

      gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative?

    I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference, since each time I have to update all particles' data?

    About the particle simulation... where should it happen? On the CPU or on a vertex processor? Something tells me the mobile's CPU would do it faster than its vertex unit (at least today, in 2012 :).

    So, any advice on how to make a simple and efficient particle system without a particle size limitation, for a mobile device, would be appreciated. (Animation of the camera passing through particles should look realistic.)
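    For reference, the CPU-billboard fallback usually expands each particle into four corners oriented by the camera's right/up vectors every frame. Sketched in C# for brevity (Vertex, camRight and camUp are hypothetical; the same loop translates directly to the GLES side):

      // Expand one particle into a camera-facing quad starting at index i.
      void EmitQuad(Vector3 center, float size, Color color, Vertex[] verts, int i)
      {
          Vector3 r = camRight * (size * 0.5f); // half-width along camera right
          Vector3 u = camUp    * (size * 0.5f); // half-height along camera up
          verts[i + 0] = new Vertex(center - r - u, color, 0f, 0f);
          verts[i + 1] = new Vertex(center + r - u, color, 1f, 0f);
          verts[i + 2] = new Vertex(center - r + u, color, 0f, 1f);
          verts[i + 3] = new Vertex(center + r + u, color, 1f, 1f);
      }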

    Read the article

  • Extracting Frustum Planes (Hartmann & Gribbs method)

    - by DAVco
    I have been grappling with the Hartmann/Gribb method of extracting the frustum planes for some time now, with little success. There doesn't appear to be a "definitive" topic or tutorial which combines all the necessary information, so perhaps this can be it.

    First of all, I am attempting to do this in C# (for PlayStation Mobile), using OpenGL-style column-major matrices in a right-handed coordinate system, but obviously the math will work in any language. My projection matrix has a near plane at 1.0, a far plane at 1000, an FOV of 45.0 and an aspect of 1.7647. I want to get my planes in world space, so I build my frustum from the view-projection matrix (that's projectionMatrix * viewMatrix). The view matrix is the inverse of the camera's world transform.

    The problem is: regardless of what I tweak, I can't seem to get a correct frustum. I think I may be missing something obvious. Focusing on the near and far planes for the moment (since they have the most obvious normals when correct): when my camera is positioned looking down the negative z-axis, I get two planes facing in the same direction, rather than opposite directions. If I strafe my camera left and right (while still looking along the z-axis), the x value of the normal vector changes. Obviously something is fundamentally wrong here; I just can't figure out what - maybe someone here can?
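    A hedged observation that may apply here: with column-vector (OpenGL-style) math, the Gribb/Hartmann planes come from the rows of clip = projection * view, and with flat column-major storage it is easy to read columns where rows are intended - which tends to produce planes that follow camera rotation but misbehave under translation, much like the strafing symptom described. A small sketch, assuming element (row r, col c) lives at m[c * 4 + r]:

      // Gribb/Hartmann for column-vector conventions: plane = row4 +/- rowN.
      static float M(float[] m, int r, int c) { return m[c * 4 + r]; }

      static void RowPlane(float[] clip, int row, float sign, float[] plane)
      {
          for (int c = 0; c < 4; c++)
              plane[c] = M(clip, 3, c) + sign * M(clip, row, c); // a, b, c, d
      }

      // RowPlane(clip, 0, +1f, left);   RowPlane(clip, 0, -1f, right);
      // RowPlane(clip, 1, +1f, bottom); RowPlane(clip, 1, -1f, top);
      // RowPlane(clip, 2, +1f, near);   RowPlane(clip, 2, -1f, far);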

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me.

    I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalise the direction. However, I can only get this far before I don't know what to do. I think you should work out the angle and the direction you're facing? Here _dx and _dz are a small buffer in front of the camera:

      float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
      {
          // Calculate direction and distance to obstacle.
          float ob_dirx = Cam_x + _dx - Wall_x;
          float ob_dirz = Cam_z + _dz - Wall_z;
          float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);

          // Normalise directions.
          float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);
          ob_dirx = (ob_dirx)/ob_norm;
          ob_dirz = (ob_dirz)/ob_norm;

    Can anyone explain in layman's terms how I work out the angle?
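    In case it helps, the angle in the XZ plane is usually taken with atan2 rather than worked out by hand, since it handles all four quadrants. A short sketch using XNA-style math helpers (cameraYaw is an assumed variable):

      // Angle (radians) of the camera-to-wall direction in the XZ plane.
      float angle = (float)Math.Atan2(ob_dirz, ob_dirx);

      // Compare against the camera's facing angle to decide whether
      // the wall lies in front of the player.
      float diff = MathHelper.WrapAngle(angle - cameraYaw); // wrapped to [-pi, pi]
      bool inFront = Math.Abs(diff) < MathHelper.PiOver2;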

    Read the article

  • Failing Screen Resize Method

    - by StrongJoshua
    So I want my game to draw at a specific "optimal" size and then be stretched to fit screens that are a different size. I'm using LibGDX and figured that I could just draw everything to a FrameBuffer and then resize that buffer to the appropriate size when drawing it to the actual display. However, my method does not work; it just results in a black screen with the top-right quarter of the screen white. intermediary is the FBO, interMatrix is a Matrix4 object, and camera is an OrthographicCamera.

      @Override
      public void render() {
          // update actors
          currentStage.act();

          // render to intermediary buffer
          batch.setProjectionMatrix(interMatrix);
          intermediary.begin();
          batch.begin();
          currentStage.draw();
          batch.flush();
          intermediary.end();

          // resize to actual width and height
          Sprite s = new Sprite(intermediary.getColorBufferTexture());
          s.flip(true, false);
          batch.setProjectionMatrix(camera.combined);
          batch.draw(s.getTexture(), 0, 0, width, height);
          batch.end();
      }

    These are the constructors for the above-mentioned objects (GAME_WIDTH and GAME_HEIGHT are the "optimal" settings; width and height are the actual sizes, which are the same when running on desktop):

      intermediary = new FrameBuffer(Format.RGBA8888, GAME_WIDTH, GAME_HEIGHT, false);
      interMatrix = new Matrix4();
      camera = new OrthographicCamera(width, height);
      interMatrix.setToOrtho2D(0, 0, GAME_WIDTH, GAME_HEIGHT);

    Is there a better way of doing this, or is this a viable option - and how do I fix what I have?

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or a completely white texture. Here are my questions:

    1. I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    2. The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    3. Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.

    Here's my shader code:

      void DepthVertexShader(
          float4 position : POSITION,
          uniform float4x4 worldViewProjMatrix,
          uniform float3 eyePosition,
          out float4 outPosition : POSITION,
          out float Depth )
      {
          // position is in object space
          // outPosition is in camera space
          outPosition = mul( worldViewProjMatrix, position );

          // calculate distance from camera to vertex
          Depth = length( eyePosition - position );
      }

      void DepthFragmentShader(
          float Depth : TEXCOORD0,
          uniform float fNear,
          uniform float fFar,
          out float4 outColor : COLOR )
      {
          // clamp output using clip planes
          float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
          outColor = float4( fColor, fColor, fColor, 1.0 );
      }

    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.

    Read the article

  • How to rotate a group of objects around a common center?

    - by user1662292
    I've made a model in 3D Studio Max 9. It consists of a variety of cubes, cylinders etc. In XNA I've imported the model okay and it shows correctly. However, when I apply rotation, each component in the model rotates around its own centre. I want the model to rotate as a single unit. I've linked the components in 3D Max, and they rotate as I want in Max.

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          model = Content.Load<Model>("Models/Alien1");
      }

      protected override void Update(GameTime gameTime)
      {
          camera.Update(1f, new Vector3(), graphics.GraphicsDevice.Viewport.AspectRatio);
          rotation += 0.1f;
          base.Update(gameTime);
      }

      protected override void Draw(GameTime gameTime)
      {
          GraphicsDevice.Clear(Color.CornflowerBlue);

          Matrix[] transforms = new Matrix[model.Bones.Count];
          model.CopyAbsoluteBoneTransformsTo(transforms);

          Matrix worldMatrix = Matrix.Identity;
          Matrix rotationYMatrix = Matrix.CreateRotationY(rotation);
          Matrix translateMatrix = Matrix.CreateTranslation(location);
          worldMatrix = rotationYMatrix * translateMatrix;

          foreach (ModelMesh mesh in model.Meshes)
          {
              foreach (BasicEffect effect in mesh.Effects)
              {
                  effect.World = worldMatrix * transforms[mesh.ParentBone.Index];
                  effect.View = camera.viewMatrix;
                  effect.Projection = camera.projectionMatrix;
                  effect.EnableDefaultLighting();
                  effect.PreferPerPixelLighting = true;
              }
              mesh.Draw();
          }

          base.Draw(gameTime);
      }

    More info: rotating the object via its properties works fine, so I'm guessing there's something up with the code rather than with the object itself. Translating the object also causes the objects to get moved independently of each other rather than as a single model, and each piece becomes spread around the scene. The model is in .X format.
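    A hedged guess, since XNA uses row-vector math where the leftmost matrix applies first: the bone transform usually needs to be applied before the shared world transform, not after, so each part is first placed in model space and the whole model then rotates about its common origin:

      // Bone transform first, then the shared rotation/translation,
      // so all parts pivot around the model's origin rather than their own.
      effect.World = transforms[mesh.ParentBone.Index] * worldMatrix;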

    Read the article

  • I've got my 2D/3D conversion working perfectly, how to do perspective

    - by user346992
    Although the context of this question is making a 2D/3D game, the problem I have boils down to some math. Although it's a 2.5D world, let's pretend it's just 2D for this question.

      // xa: x-accent, the x coordinate of the projection
      // mapP: a coordinate on a map which needs to be projected
      // _Dist_ values are constants for the projection; choosing them correctly
      // will result in e.g. an isometric projection
      xa = mapP.x * xDistX + mapP.y * xDistY;
      ya = mapP.x * yDistX + mapP.y * yDistY;

    xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity).

      x-axis-angle = atan(yDistX/xDistX)
      y-axis-angle = atan(yDistY/xDistY)

    A "normal" coordinate system like this

      --------------- x
      |
      |
      |
      |
      y

    has values like this:

      xDistX = 1; yDistX = 0;
      xDistY = 0; yDistY = 1;

    So every step in the x direction will result, on the projection, in 1 pixel to the right and 0 pixels down. Every step in the y direction of the projection will result in 0 steps to the right and 1 pixel down. When choosing the correct xDistX, yDistX, xDistY, yDistY, you can project any trimetric or dimetric system (which is why I chose this).

    So far so good; when this is drawn, everything turns out okay. If "my system" and mindset are clear, let's move on to perspective. I wanted to add some perspective to this grid, so I added some extras like this:

      camera = new MapPoint(60, 60);
      dx = mapP.x - camera.x; // delta x
      dy = mapP.y - camera.y; // delta y
      dist = Math.sqrt(dx * dx + dy * dy); // distance to the camera, Pythagoras etc.
                                           // - all objects must be in front of the camera
      fac = 1 - dist / 100; // this formula determines the amount of perspective
      xa = fac * (mapP.x * xDistX + mapP.y * xDistY);
      ya = fac * (mapP.x * yDistX + mapP.y * yDistY);

    Now the really hard part: what if you've got an (xa, ya) point on the projection and want to calculate the original point (x, y)? For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with the perspective? My math skills are not quite up to the challenge of solving this. (I vaguely remember from a long time ago that Mathematica could create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
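    Not a closed-form inverse, but a numerical route that often works: the forward mapping is p = fac * L(x, y), where L is the invertible linear part, so q = L^-1(p) equals fac * (x, y). Guess a distance, derive fac, recover a candidate point, recompute the distance, and repeat until it settles. A sketch in C# (convergence near fac = 0 is an assumption worth testing):

      // q = L^-1(projected point); (camX, camY) = camera position on the map.
      // Solves dist = |q / (1 - dist/100) - cam| by fixed-point iteration.
      static bool Unproject(double qx, double qy, double camX, double camY,
                            out double x, out double y)
      {
          double dist = 0, cx = qx, cy = qy; // start with "no perspective"
          for (int i = 0; i < 50; i++)
          {
              double fac = 1 - dist / 100;
              if (fac <= 0) { x = cx; y = cy; return false; } // at/past the horizon
              cx = qx / fac;
              cy = qy / fac;
              double d = Math.Sqrt((cx - camX) * (cx - camX) + (cy - camY) * (cy - camY));
              if (Math.Abs(d - dist) < 1e-9) { x = cx; y = cy; return true; }
              dist = d;
          }
          x = cx; y = cy;
          return false; // didn't converge; last estimate returned
      }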

    Read the article

  • 5 reasons why you should upgrade to the new iPad (3rd generation)

    - by Gopinath
    Apple released the new iPad, 3rd generation, a couple of days ago, and it will be available in stores from March 16 onwards. It's the best tablet available in the market, and for first-time buyers it's a no-brainer to choose it. But what about current iPad owners? Should they upgrade their iPad 2 to the new iPad? This is the question on the lips of most iPad owners. In this post we provide 5 reasons why you should upgrade your iPad; if more than two of them are convincing, then you should upgrade to the new iPad.

    Retina display - the best display ever made for a mobile device, a game changer. The new iPad comes with a Retina display with a screen resolution of 2048 x 1536, which is twice the resolution of the iPad 2. Undoubtedly the new iPad's display is the best display ever made for a mobile device, and it's a game changer. With the better resolution: eBook reading is going to be a pleasure, with clear and crisp text; watching HD movies is going to be unbelievably good; new games targeted at the Retina display are going to be more realistic, and the pleasure of playing such games needs no explanation; and graphic artists and photo editors get professional on-screen rendering support to create beautiful graphics.

    2x faster & 2x memory - better games and more powerful apps. The new iPad is more powerful, with 2x faster graphics and 2x more memory. Apple claims that the A5X processor of the new iPad is 2x faster than the iPad 2 and 4x faster than the best graphics chips available from other vendors. The RAM of the new iPad is upgraded to 1 GB, compared to 512 MB on the iPad 2. With the fast processor and more memory, apps and games are going to be blazing fast.

    4G internet - browse the web at up to 42 Mbps. Half of iPad owners are frequent commuters who access the internet over cellular networks, and the new iPad's 4G LTE is going to be a big boon for their high data-access needs. With the new iPad's 4G LTE connectivity you can browse the web at up to 42 Mbps, which means you can watch an HD video without buffering issues. The iPad 2 comes with 3G network support, and its browsing speeds are far lower than the new iPad's.

    5 MP camera - HD movie recording & gorgeous photography. The iPad 2 has a 0.7-megapixel camera, while the new iPad comes with a 5-megapixel camera. That is a huge boost for hobbyist photographers and videographers. With the new iPad you can shoot gorgeous photos and 1080p HD video. The iSight camera of the new iPad uses advanced optics with features like auto exposure, auto focus and face detection for up to 10 faces.

    Amazon pays up to $300 for an old iPad 2 16 GB WiFi, and more for other models. Did you know that you can trade in your iPad 2 16 GB WiFi for up to $300? Amazon has an excellent trade-in program for selling your used iPad 2. Depending on its condition, Amazon offers $234, $270 or $300 for 16 GB WiFi versions in Acceptable, Good and Like New condition respectively. The higher iPad 2 models fetch you more money. With this great deal from Amazon, the extra money you need to spend for the new iPad is almost half of its price. Visit Amazon Trade-In's website or read Amazon's brilliant plan to pay you crazy money for your iPad 2 for more details.

    Related: New iPad vs. iPad 2 - side-by-side comparison of hardware specifications [Infographic]

    Read the article

  • (libgdx) Button doesn't work

    - by StercoreCode
    In my game I switch to a StopScreen, which displays a button. But if I click the button, it doesn't work. What I expect: when I press the button it must restart the game; at this stage it should at least log a message saying that the button was pressed.

    I tried creating a new, clean project with a main class implementing ApplicationListener. I put the same code in the appropriate methods, and it works! But if I create this button in my game, it doesn't work: when I play and go to the StopScreen I see the button, but if I click or touch it, nothing happens. I think the problem is with the InputListener, although I set the stage as the InputProcessor:

      Gdx.input.setInputProcessor(stage);

    I also tried addListener for the Button with a ClickListener, but it gave no results. Or maybe the problem is that I implement the Screen interface, not ApplicationListener or Game - but if StopScreen implemented ApplicationListener, I couldn't call setScreen from the main game. So the question that interests me: why does the button display but nothing happen when it's touched? Here is the code of StopScreen, if it helps find my mistake:

      public class StopScreen implements Screen {
          private OrthographicCamera camera;
          private SpriteBatch batch;
          public Stage stage;                //** stage holds the Button **//
          private BitmapFont font;           //** same as that used in Tut 7 **//
          private TextureAtlas buttonsAtlas; //** image of buttons **//
          private Skin buttonSkin;           //** images are used as skins of the button **//
          public TextButton button;          //** the button - the only actor in program **//

          public StopScreen(CurrusGame currusGame) {
              camera = new OrthographicCamera();
              camera.setToOrtho(false, 800, 480);
              batch = new SpriteBatch();
              buttonsAtlas = new TextureAtlas("button.pack"); //** button atlas image **//
              buttonSkin = new Skin();
              buttonSkin.addRegions(buttonsAtlas); //** skins for on and off **//
              font = AssetLoader.font; //** font **//

              stage = new Stage();
              stage.clear();
              Gdx.input.setInputProcessor(stage);

              TextButton.TextButtonStyle style = new TextButton.TextButtonStyle();
              style.up = buttonSkin.getDrawable("ButtonOff");
              style.down = buttonSkin.getDrawable("ButtonOn");
              style.font = font;

              button = new TextButton("PRESS ME", style); //** Button text and style **//
              button.setPosition(100, 100); //** Button location **//
              button.setHeight(100);        //** Button Height **//
              button.setWidth(100);         //** Button Width **//
              button.addListener(new InputListener() {
                  public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
                      Gdx.app.log("my app", "Pressed");
                      return true;
                  }

                  public void touchUp(InputEvent event, float x, float y, int pointer, int button) {
                      Gdx.app.log("my app", "Released");
                  }
              });
              stage.addActor(button);
          }

          @Override
          public void render(float delta) {
              Gdx.gl.glClearColor(0, 1, 0, 1);
              Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
              stage.act();
              batch.setProjectionMatrix(camera.combined);
              batch.begin();
              stage.draw();
              batch.end();
          }
      }

    Read the article

  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum culling from the wrong origin: it follows the rotation of my camera but not the position. In my camera class I'm generating a view matrix:

      void Camera::Update()
      {
          UpdateViewMatrix();
          glMatrixMode(GL_MODELVIEW);
          //glLoadIdentity();
          glLoadMatrixf(GetViewMatrix().m);
      }

    Then extracting the planes using the projection matrix and modelview matrix:

      void UpdateFrustum()
      {
          Matrix4x4 projection, model, clip;
          glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
          glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
          clip = model * projection;

          m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
          m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
          m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
          m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
          NormalizePlane(RIGHT);

          m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
          m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
          m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
          m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
          NormalizePlane(LEFT);

          m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
          m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
          m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
          m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
          NormalizePlane(BOTTOM);

          m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
          m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
          m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
          m_Planes[TOP][3] = clip.m[15] - clip.m[13];
          NormalizePlane(TOP);

          m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
          m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
          m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
          m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
          NormalizePlane(NEAR);

          m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
          m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
          m_Planes[FAR][2] = clip.m[11] - clip.m[10];
          m_Planes[FAR][3] = clip.m[15] - clip.m[14];
          NormalizePlane(FAR);
      }

      void NormalizePlane(int side)
      {
          float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                         m_Planes[side][1] * m_Planes[side][1] +
                                         m_Planes[side][2] * m_Planes[side][2]);
          m_Planes[side][0] /= length;
          m_Planes[side][1] /= length;
          m_Planes[side][2] /= length;
          m_Planes[side][3] /= length;
      }

    And checking against it with:

      bool PointInFrustum(float x, float y, float z)
      {
          for(int i = 0; i < 6; i++)
          {
              if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 )
                  return false;
          }
          return true;
      }

    Then I render using:

      camera->Update();
      UpdateFrustum();

      int numCulled = 0;
      for(int i = 0; i < (int)meshes.size(); i++)
      {
          if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
          {
              meshes[i]->SetDraw(false);
              numCulled++;
          }
          else
              meshes[i]->SetDraw(true);
      }

    What am I doing wrong?
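    As an aside, point-testing mesh centers makes large meshes pop out at the screen edges; a bounding-sphere test against the same normalized planes is a small change. Sketched in C# for brevity, with the planes stored as in the post's float[6][4] layout:

      // Planes as (a, b, c, d), with (a, b, c) unit length.
      static bool SphereInFrustum(float[][] planes, float x, float y, float z, float radius)
      {
          for (int i = 0; i < 6; i++)
          {
              float dist = planes[i][0] * x + planes[i][1] * y
                         + planes[i][2] * z + planes[i][3];
              if (dist <= -radius)
                  return false; // entirely behind this plane
          }
          return true; // inside or intersecting
      }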

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light pass to a texture which I will later apply to the scene, but I seem to calculate the light position wrong. I am working in view space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at position (0, 10, 0), and I transform it to view space first:

      Vector4 pos;
      Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader
      Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos);
      shader.SendUniform ("LightViewPosition", ref pos);

    Now, to me that does not look as it should. What I think it should look like is that the white area should be in the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

      void main(){
          // default black color
          vec3 color = vec3(0);

          // Pixel coordinates on screen without depth
          vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

          // Get pixel position using depth from texture
          vec4 depthtexel = texture( DepthTexture, PixelCoordinates );
          float depthSample = unpack_depth(depthtexel);

          // Get pixel coordinates in camera space by multiplying the
          // coordinate in screen space by the inverse projection matrix
          vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

          // Undo the perspective calculations
          vec3 pixelPosition = (world.xyz / world.w) * 3;

          // How far the light should reach from its point of origin
          float lightReach = LightColor.a / 2;

          // Vector between light and pixel
          vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
          float lightDistance = length(lightDir);
          vec3 lightDirN = normalize(lightDir);

          // Discard pixels too far from light source
          //if(lightReach < lightDistance) discard;

          // Get normal from texture
          vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1);

          // Half vector between the light direction and eye, used for the specular component
          vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

          // Dot product of normal and light direction
          float NdotL = dot(normal, lightDirN);
          float attenuation = pow(lightReach / lightDistance, LightFalloff);

          // If pixel is lit by the light
          if(NdotL > 0) {
              // I have moved stuff from here to above so I can debug them.

              // Diffuse light color
              color += LightColor.rgb * NdotL * attenuation;

              // Specular light color
              color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
          }

          RT0 = vec4(color, 1);
          //RT0 = vec4(pixelPosition, 1);
          //RT0 = vec4(depthSample, depthSample, depthSample, 1);
          //RT0 = vec4(NdotL, NdotL, NdotL, 1);
          RT0 = vec4(attenuation, attenuation, attenuation, 1);
          //RT0 = vec4(lightReach, lightReach, lightReach, 1);
          //RT0 = depthtexel;
          //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
          //RT0 = vec4(lightDirN, 1);
          //RT0 = vec4(halfVector, 1);
          //RT0 = vec4(LightColor.xyz,1);
          //RT0 = vec4(LightViewPosition.xyz/100, 1);
          //RT0 = vec4(LightPosition.xyz, 1);
          //RT0 = vec4(normal,1);
      }

    What am I doing wrong here?

    Read the article

  • Pantech Link II, Ubuntu and Virtual XP

    - by user85041
    Okay, this is my problem. I have a Pantech Link II; dmesg states:

      [  896.072037] usb 2-3: new high-speed USB device number 3 using ehci_hcd
      [  896.258562] cdc_acm 2-3:1.0: ttyACM0: USB ACM device
      [  896.260039] usbcore: registered new interface driver cdc_acm
      [  896.260042] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters

    I have it installed through Wine (PC Suite and driver) and it doesn't see the phone. Virtual XP through VMware Player sees my device and knows it needs a driver; the removable devices list says Curitel Pantech USB Device (Maybe Driver). I have PC Suite installed in XP and install the driver through the executable; it reports a problem installing the hardware, and then the device disappears. Ubuntu sees it after a restart, but if I start XP with that driver installed, it disappears from both and I get these errors in dmesg:

      [ 1047.760555] /dev/vmmon[2882]: PTSC: initialized at 3093322000 Hz using TSC, TSCs are synchronized.
      [ 1048.174033] /dev/vmmon[2882]: Monitor IPI vector: 0
      [ 1055.293060] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1055.293074] /dev/vmnet: port on hub 8 successfully opened
      [ 1055.293088] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1055.293094] /dev/vmnet: port on hub 8 successfully opened
      [ 1072.446305] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1072.446316] /dev/vmnet: port on hub 8 successfully opened
      [ 1072.446328] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1072.446334] /dev/vmnet: port on hub 8 successfully opened
      [ 1072.856024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
      [ 1079.292024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
      [ 1079.732024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
      [ 1127.743034] NET: Registered protocol family 39
      [ 1127.749320] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_ALLOC (cid=1522210225,result=4).
      [ 1144.104031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
      [ 1144.412031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
      [ 1155.889976] ehci_hcd 0000:00:13.2: force halt; handshake ffffc90000642024 00004000 00000000 -> -110
      [ 1155.889980] ehci_hcd 0000:00:13.2: HC died; cleaning up
      [ 1155.890008] usb 2-3: USB disconnect, device number 3
      [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
      [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
      [ 1658.392018] NET: Unregistered protocol family 39
      [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
      [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
      [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
      [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

      lessa@X:~$ dmesg|tail
      [ 1155.890008] usb 2-3: USB disconnect, device number 3
      [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
      [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
      [ 1658.392018] NET: Unregistered protocol family 39
      [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
      [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
      [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
      [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
      [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

    I have tried uninstalling, and installing manually from the device manager ("update driver" while it still has the warning sign); it doesn't see the drivers as valid. I have no idea how to fix this and would prefer not to have to go to another computer. I'm not trying to do anything but get the pictures off of it. I have to restart Ubuntu and plug in the device for Ubuntu to see it correctly again. I'm about a month and a half old Linux newbie, so I have no idea what commands I could use for this, and I don't have a memory card in the phone to mount.

    Read the article

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running an XNA app using the Reach profile in VMware Fusion; the host OS is Mac OS X and the VM is Windows XP SP3 (my dual-boot OS), on a MacBook Pro with an NVIDIA 320M graphics card.

    - When I am booted into XP natively, my code works. The code draws cubes that are set up using vertex buffers.
    - When a friend runs this same code on Windows 7, it also works for him just fine.
    - When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program, and that part displays fine. I get no crashing or errors; the geometry just doesn't appear. I tried Debug and Release.

    This is a very basic operation, so I'm thinking VMware isn't the problem, but my code is... My init code:

      var vertexArray = verts.ToArray();
      var indexArray = indices.ToArray();

      indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly);
      indexBuffer.SetData(indexArray);

      vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly);
      vertexBuffer.SetData(vertexArray);

    My Draw code:

      // problem isn't here, tried no cull
      GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
      GraphicsDevice.BlendState = BlendState.AlphaBlend;
      GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true };

      // Update View and Projection
      TileEffect.View = ((Game1)Game).Camera.View;
      TileEffect.Projection = ((Game1)Game).Camera.Projection;
      TileEffect.CurrentTechnique.Passes[0].Apply();

      GraphicsDevice.SetVertexBuffer(vertexBuffer);
      GraphicsDevice.Indices = indexBuffer;
      GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3);

    For LoadContent:

      TileEffect = new BasicEffect(GraphicsDevice)
      {
          World = Matrix.Identity,
          View = ((Game1)Game).Camera.View,
          Projection = ((Game1)Game).Camera.Projection,
          VertexColorEnabled = true
      };

    Read the article

  • Display object in just a certain viewport

    - by PSilo
    Hi, I've got 4 viewports and one large one that I can switch between. I have an object, namely the camera, and the camera's target position, which I show by rendering a sphere at each of those locations. I want to show the camera's position in 3 of my viewports but not in the last, which is the camera's own view, but at the moment I have an all-or-nothing scenario.

      void display(int what)
      {
          if(what==5){
              glMatrixMode(GL_MODELVIEW);
              glLoadIdentity();
              camControll();}
          if(what==1){
              glMatrixMode(GL_MODELVIEW);
              glLoadIdentity();
              gluLookAt(75,15,-5,0,5,-5,0,1,0);}
          if(what==2){
              glMatrixMode(GL_MODELVIEW);
              glLoadIdentity();
              gluLookAt(0,110,0,0,0,0,1,0,0);}
          if(what==3){
              glMatrixMode(GL_PROJECTION);
              glLoadIdentity();
              gluPerspective(45.0f, float(320) / float(240), 0.1f, 100.0f);
              glMatrixMode(GL_MODELVIEW);
              glLoadIdentity();
              camControll();}
          if(what==4){
              glMatrixMode(GL_MODELVIEW);
              glLoadIdentity();
              gluLookAt(185,75,25,0,28,0,0,1,0);}

          //glMatrixMode(GL_MODELVIEW);
          //glLoadIdentity();
          ////gluLookAt(cos(shared.time) * shared.distance, 10, sin(shared.time) * shared.distance, 0, 0, 0, 0, 1, 0);
          ////ca.orbitYaw(0.05);
          //ca.lookAt();

          glClearColor(0, 0, 0, 1);
          glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

          drawScene();  // scene that all views should render
          drawCamera(); // camera position that only certain views should render

          glutSwapBuffers();
      }

    I'm thinking that perhaps I could do one sweep for the first 3 viewports, then call glutSwapBuffers(), and then do the other viewport without the camera position - but some stuttering I previously had was traced to glutSwapBuffers() being called for each viewport, so I guess there has to be another way; I just can't figure it out.

    Read the article

  • jQuery blinking issues when using 2 .hover calls

    - by user1897502
    I want to do two .hover behaviours: first, when the cursor is over the image, icons should appear on the image; second, when the cursor is over a specific icon (for example, "info"), a div with the information should appear. I have nearly succeeded, but I have blinking problems, and when I use the two .hover functions together the information popup does not show up. Here is my HTML:

      {LinkOpenTag}<div class="centrage"><div class="photoDiv"><img src="{PhotoURL-500}" alt="{PhotoAlt}" />
      <div class="icons">
          {block:Exif}
          <span class="info"><span>
          <div class="exif" style="display: none; opacity: 0">
              <ol class="CameraMeta">
                  <li>{block:Camera}Camera: {Camera}{/block:Camera}</li>
                  <li>{block:Aperture}Aperture: {Aperture}{/block:Aperture}</li>
                  <li>{block:Exposure}Exposure: {Exposure}{/block:Exposure}</li>
                  <li>{block:FocalLength}Focal Length: {FocalLength}{/block:FocalLength}</li>
              </ol>
          </div>
          {/block:Exif}
      </div>
      </div>{LinkCloseTag}

    and here is my jQuery:

      <script type="text/javascript">
      $(".photoDiv img").hover(
          function() { $(this).next().css("visibility", "visible"); },
          function() { $(this).next().css("visibility", "hidden"); }
      );

      $("span.info").hover(
          function() {
              $(".exif").css("display", "block");
              $(".exif").css("opacity", "1");
          },
          function() {
              $(".exif").css("display", "none");
              $(".exif").css("opacity", "0");
          }
      );

    Thanks for your time :)

    Read the article

  • jQuery ajax post of jpg image to .net webservice. Image results corrupted

    - by sosergio
    I have a PhoneGap jQuery app that opens the camera and takes a picture. I then POST this picture to a .NET web service, which I've coded. I can't use PhoneGap's FileTransfer because it isn't supported by the Bada OS, which is a requirement. I believe I've successfully loaded the image from the PhoneGap FileSystem API, I've attached it to an .ajax type:post, and I've even received it on the .NET side, but when .NET saves the image to the server, the image ends up corrupted. It seems to me that the two sides of the communication have different data types. Does anyone have experience with this? Any help will be appreciated. This is my code:

      // PHONEGAP CAMERA ACCESS (summed up)
      navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail,
          { quality: 50, destinationType: Camera.DestinationType.FILE_URI });
      window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError);
      fileEntry.file(gotFileSuccess, gotFileError);
      new FileReader().readAsDataURL(file);

      // UPLOAD FILE
      function onDataReadSuccess(evt) {
          var image_data = evt.target.result;
          var filename = unique_id();
          var filext = "jpg";
          $.ajax({
              type: 'POST',
              url: SERVICE_BASE_URL + "/fotos/" + filename + "?ext=" + filext,
              cache: false,
              timeout: 100000,
              processData: false,
              data: image_data,
              contentType: 'image/jpeg',
              success: function(data) {
                  console.log("Data Uploaded with success. Message: " + data);
                  $.mobile.hidePageLoadingMsg();
                  $.mobile.changePage("ok.html");
              }
          });
      }

    On my .NET web service, this is the method that gets invoked:

      public string FotoSave(string filename, string extension, Stream fileContent)
      {
          string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename;
          FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write);

          int Length = 256;
          Byte[] buffer = new Byte[Length];
          int bytesRead = fileContent.Read(buffer, 0, Length); // was 'readStream', which is never declared

          // write the required bytes
          while (bytesRead > 0)
          {
              writeStream.Write(buffer, 0, bytesRead);
              bytesRead = fileContent.Read(buffer, 0, Length);
          }
          fileContent.Close();
          writeStream.Close();
          return filePath; // added so the declared string return type compiles
      }
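    A hedged suspicion about the corruption itself: readAsDataURL produces a base64 data-URL string ("data:image/jpeg;base64,..."), not raw JPEG bytes, so writing the POST body straight to disk stores text rather than an image. A sketch of decoding it server-side (the names match FotoSave above; the prefix stripping is the assumption to verify against what actually arrives):

      // Read the body as text, strip the data-URL prefix, decode the base64.
      using (var reader = new StreamReader(fileContent))
      {
          string body = reader.ReadToEnd();
          int comma = body.IndexOf(',');  // end of "data:image/jpeg;base64,"
          string base64 = comma >= 0 ? body.Substring(comma + 1) : body;
          byte[] jpegBytes = Convert.FromBase64String(base64);
          File.WriteAllBytes(filePath, jpegBytes);
      }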

    Read the article

  • Android Bluetooth syncing

    - by Darryl
    I am connecting to a Bluetooth-enabled camera, and I am able to connect using the methods found in the BluetoothChat example. I need to send commands to the camera. The issue is that I also need to get a response BACK from the camera after I send the command in the first place. So basically I need to write a command and receive a response.

    However, the commands sometimes don't generate a response. The documentation for the camera even says that you "have to send the sync command as many as 25 times on power up before you will get a response." So I cannot just write a command and wait for a response, as the "read" function blocks the thread. If I have the read function in another thread, like the BluetoothChat example, there seem to be sync issues: if I issue a write command, how can I know whether a read is in progress when it's happening in another thread? I did set a global variable to check, but this seems "iffy" at best.

    So basically I need to write to the Bluetooth device and then attempt to read from it. However, I need to let that read time out, and if I haven't received a response, I need to write again until I get a response (or until I've tried a set number of times). I don't need the read function to be running all the time in the background. Any ideas? Thanks in advance for your time.
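    The request/response pattern described here usually comes down to a synchronous retry loop with a read deadline per attempt, rather than a free-running reader thread. A language-neutral sketch in C# (ICameraLink and its timeout-returning read are hypothetical stand-ins for the Android socket streams):

      // Send a command up to maxTries times; each attempt waits readTimeoutMs
      // for a reply before re-sending. Returns null if the camera never answers.
      static byte[] SendWithRetry(ICameraLink link, byte[] command, int maxTries, int readTimeoutMs)
      {
          for (int attempt = 0; attempt < maxTries; attempt++) // e.g. 25 for the sync command
          {
              link.Write(command);
              byte[] reply = link.ReadWithTimeout(readTimeoutMs); // null on timeout
              if (reply != null)
                  return reply;
          }
          return null; // caller decides how to report the failure
      }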

    Read the article

  • How can I use the voice recognition used by Android on Ubuntu?

    - by aking1012
    If I'm developing an Android app that uses TTS and voice recognition, which libraries are used for the same voice recognition and speech on Ubuntu? I'm assuming espeak for text-to-speech, but I'm unsure which voice recognition library and dictionary/learning/calibration system is used for voice recognition. I'd like to make the app available on the Ubuntu desktop, as well as test it outside an emulator.

    Read the article

  • iPhone 3GS lens data for Hugin / Panorama Tools

    - by david-ocallaghan
    I'm using the Hugin frontend to Panorama Tools, and it requires camera and lens data for the images it handles (details here: http://wiki.panotools.org/Hugin_Camera_and_Lens_tab). Can anyone tell me the appropriate values for the iPhone 3GS camera?

    - Lens: rectilinear
    - Horizontal Field of View: ?
    - Focal length: 3.85mm
    - Crop factor / focal length multiplier: ?
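    For what it's worth, the two unknowns are related, so pinning down either one gives the other: the horizontal field of view follows from the focal length and the physical sensor width, and the crop factor is the full-frame diagonal (about 43.3mm) divided by the sensor diagonal. A sketch of the arithmetic - the sensor width below is a placeholder, not a measured 3GS value:

      // HFoV = 2 * atan(sensorWidth / (2 * focalLength)) for a rectilinear lens.
      double focalLength  = 3.85;                 // mm, from the question
      double sensorWidth  = 3.6;                  // mm - PLACEHOLDER, substitute the real 3GS value
      double sensorHeight = sensorWidth * 3 / 4;  // 4:3 aspect ratio
      double hfovDegrees  = 2 * Math.Atan(sensorWidth / (2 * focalLength)) * 180 / Math.PI;
      double diagonal     = Math.Sqrt(sensorWidth * sensorWidth + sensorHeight * sensorHeight);
      double cropFactor   = 43.27 / diagonal;     // full-frame diagonal / sensor diagonal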

    Read the article
