Search Results

Search found 2146 results on 86 pages for 'camera calibration'.

Page 39/86 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • HLSL/XNA Ambient light texture mixed up with multi pass lighting

    - by Manu-EPITA
    I've been having some troubles lately with lighting. I have found a source on Google which is working pretty well on the example. However, when I try to implement it into my current project, I am getting some very weird bugs. The main one is that my textures are "mixed up" when I only activate the ambient light, which means that a model gets the texture of another one. I am using the same effect for every mesh of my models. I guess this could be the problem, but I don't really know how to "reset" an effect for a new model. Is it possible? Here is my shader: float4x4 WVP; float3x3 World; float3 Ke; float3 Ka; float3 Kd; float3 Ks; float specularPower; float3 globalAmbient; float3 lightColor; float3 eyePosition; float3 lightDirection; float3 lightPosition; float spotPower; texture2D Texture; sampler2D texSampler = sampler_state { Texture = <Texture>; MinFilter = anisotropic; MagFilter = anisotropic; MipFilter = linear; MaxAnisotropy = 16; }; struct VertexShaderInput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 PositionO: TEXCOORD1; float3 Normal : NORMAL0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WVP); output.Normal = input.Normal; output.PositionO = input.Position.xyz; output.Texture = input.Texture; return output; } float4 PSAmbient(VertexShaderOutput input) : COLOR0 { return float4(Ka*globalAmbient + Ke,1) * tex2D(texSampler,input.Texture); } float4 PSDirectionalLight(VertexShaderOutput input) : COLOR0 { //Diffuse float3 L = normalize(-lightDirection); float diffuseLight = max(dot(input.Normal,L), 0); float3 diffuse = Kd*lightColor*diffuseLight; //Specular float3 V = normalize(eyePosition - input.PositionO); float3 H = normalize(L + V); float specularLight = pow(max(dot(input.Normal,H),0),specularPower); if(diffuseLight<=0) specularLight=0; float3 specular = Ks * lightColor * specularLight; //sum all light components float3 light = diffuse + specular; return float4(light,1) * tex2D(texSampler,input.Texture); } technique MultiPassLight { pass Ambient { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PSAmbient(); } pass Directional { PixelShader = compile ps_3_0 PSDirectionalLight(); } } And here is how I actually apply my effects: public void ApplyLights(ModelMesh mesh, Matrix world, Texture2D modelTexture, Camera camera, Effect effect, GraphicsDevice graphicsDevice) { graphicsDevice.BlendState = BlendState.Opaque; effect.CurrentTechnique.Passes["Ambient"].Apply(); foreach (ModelMeshPart part in mesh.MeshParts) { graphicsDevice.SetVertexBuffer(part.VertexBuffer); graphicsDevice.Indices = part.IndexBuffer; // Texturing graphicsDevice.BlendState = BlendState.AlphaBlend; if (modelTexture != null) { effect.Parameters["Texture"].SetValue( modelTexture ); } graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); // Applying our shader to all the mesh parts effect.Parameters["WVP"].SetValue( world * camera.View * camera.Projection ); effect.Parameters["World"].SetValue(world); effect.Parameters["eyePosition"].SetValue( camera.Position ); graphicsDevice.BlendState = BlendState.Additive; // Drawing lights foreach (DirectionalLight light in DirectionalLights) { effect.Parameters["lightColor"].SetValue(light.Color.ToVector3());
    effect.Parameters["lightDirection"].SetValue(light.Direction); // Applying changes and drawing them effect.CurrentTechnique.Passes["Directional"].Apply(); graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); } } } I am also applying this when loading the effect: effect.Parameters["lightColor"].SetValue(Color.White.ToVector3()); effect.Parameters["globalAmbient"].SetValue(Color.White.ToVector3()); effect.Parameters["Ke"].SetValue(0.0f); effect.Parameters["Ka"].SetValue(0.01f); effect.Parameters["Kd"].SetValue(1.0f); effect.Parameters["Ks"].SetValue(0.3f); effect.Parameters["specularPower"].SetValue(100); Thank you very much. UPDATE: I tried to load an effect for each model when drawing, but it doesn't seem to have changed anything. I suppose it is because XNA detects that the effect has already been loaded before and doesn't want to load a new one. Any idea why?

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates textures such as this (screenshot omitted). From the debug renderer I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (in the environment constructor as "dirt.png"), that I want to apply to the hills. I have a vague idea that this seems to involve the mesh class and possibly a ShapeRenderer, but the little I'm finding online is just confusing me. Below is code from the class that makes and regulates the terrain and the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated. public class Environment extends Actor{ Pixmap sky; public Texture groundTexture; Texture skyTexture; double tankypos; //TODO delete, temp public Tank etank; //TODO delete, temp int destructionRes; // how wide is a static pixel private final float viewWidth; private final float viewHeight; private ChainShape terrain; public Texture dirtTexture; private World world; public Mesh terrainMesh; private static final String LOG = Environment.class.getSimpleName(); // Constructor public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) { world = new World(new Vector2(0, -10), true); this.destructionRes = destructionRes; sky = new Pixmap(sfileHandle); viewWidth = w; viewHeight = h; skyTexture = new Texture(sky); terrain = new ChainShape(); genTerrain((int)w, (int)h, 6); Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png")); Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png")); tank = new Tank(0, true, tankSprite, turretSprite); Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45); tank.setRect(tankrect); BodyDef terrainDef = new BodyDef(); terrainDef.type = BodyType.StaticBody; terrainDef.position.set(0, 0); Body terrainBody = world.createBody(terrainDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = terrain; terrainBody.createFixture(fixtureDef); BodyDef tankDef = new BodyDef(); Rectangle rect = tank.getRect(); tankDef.type = BodyType.DynamicBody; tankDef.position.set(0,0); tankDef.position.x = rect.x; tankDef.position.y = rect.y; Body tankBody = world.createBody(tankDef); FixtureDef tankFixture = new FixtureDef(); PolygonShape shape = new PolygonShape(); shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX); fixtureDef.shape = shape; dirtTexture = new Texture(Gdx.files.internal("dirt.png")); etank = tank; } private void genTerrain(int w, int h, int hillnessFactor){ int width = w; int height = h; Random rand = new Random(); //min and max bracket the freq's of the sin/cos series //The higher the max the hillier the environment int min = 1; //allocating horizon for screen width Vector2[] horizon = new Vector2[width+2]; horizon[0] = new Vector2(0,0); double[] skyline = new double[width]; //TODO skyline necessary as an array?
//ratio of amplitude of screen height to landscape variation double r = (int) 2.0/5.0; //number of terms to be used in sine/cosine series int n = 4; int[] f = new int[n*2]; //calculating omegas for sine series for(int i = 0; i < n*2 ; i ++){ f[i] = rand.nextInt(hillnessFactor - min + 1) + min; } //amp is the amplitude of the series int amp = (int) (r*height); double lastPoint = 0.0; for(int i = 0 ; i < width; i ++){ skyline[i] = 0; for(int j = 0; j < n; j++){ skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) ); } skyline[i] *= amp/(n*2); skyline[i] += (height/2); skyline[i] = (int)skyline[i]; //TODO Possible un-necessary float to int to float conversions tankypos = skyline[i]; horizon[i+1] = new Vector2((float)i, (float)skyline[i]); if(i == width) lastPoint = skyline[i]; } horizon[width+1] = new Vector2(800, (float)lastPoint); terrain.createChain(horizon); terrain.createLoop(horizon); //I have no idea if the following does anything useful :( terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2, new VertexAttribute(Usage.Position, (width+2)*2, "a_position")); float[] vertices = new float[(width+2)*2]; short[] indices = new short[(width+2)*2]; for(int i=0; i < (width+2); i+=2){ vertices[i] = horizon[i].x; vertices[i+1] = horizon[i].y; indices[i] = (short)i; indices[i+1] = (short)(i+1); } terrainMesh.setVertices(vertices); terrainMesh.setIndices(indices); } Here is the code that is (supposed to) render the terrain. @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // tell the camera to update its matrices. camera.update(); // tell the SpriteBatch to render in the // coordinate system specified by the camera. backgroundStage.draw(); backgroundStage.act(delta); uistage.draw(); uistage.act(delta); batch.begin(); debugRenderer.render(this.ground.getWorld(), camera.combined); batch.end(); //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D); ground.dirtTexture.bind(); ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN); //I'm particularly lost on this ground.step(); }
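
    On the texturing question itself, a minimal hedged sketch using the GL10-era libgdx API from the post (the quad layout is illustrative only): each vertex needs a u,v pair next to its position, and the texture must be bound before render().

        // Hedged sketch (libgdx 0.9-era API, GL10 path): a textured quad.
        // The terrain mesh would interleave u,v the same way for each horizon vertex.
        Mesh quad = new Mesh(true, 4, 4,
            new VertexAttribute(Usage.Position, 2, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
        quad.setVertices(new float[] {
            //  x,  y,  u,  v
                0,  0,  0f, 1f,
               50,  0,  1f, 1f,
               50, 50,  1f, 0f,
                0, 50,  0f, 0f });
        quad.setIndices(new short[] { 0, 1, 2, 3 });

        dirtTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat); // tile the 50x50 dirt
        dirtTexture.bind();
        quad.render(GL10.GL_TRIANGLE_FAN, 0, 4);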

    Read the article

  • Integrated webcam in lenovo t410 not working with 12.04

    - by kristianp
    I have a Lenovo T410 with an inbuilt webcam and I haven't been able to get the webcam working. I tried Skype and Cheese; both just give me a black window. The microphone works fine with Skype, by the way. Can anyone provide any clues please? The webcam is enabled in the bios, but there is no light indicating the webcam is on (not sure if there should be, though). I tried this on Kubuntu 11.10 and have upgraded to 12.04 with the same results. The Fn-F6 keyboard combination doesn't seem to do anything either. EDIT: I got the webcam replaced, it looks like it was a hardware problem, because it works fine now. Thanks guys. $ ls /dev/v4l/* /dev/v4l/by-id: usb-Chicony_Electronics_Co.__Ltd._Integrated_Camera-video-index0 /dev/v4l/by-path: pci-0000:00:1a.0-usb-0:1.6:1.0-video-index0 And lsusb: $ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor Bus 001 Device 004: ID 0a5c:217f Broadcom Corp. Bluetooth Controller Bus 001 Device 005: ID 17ef:480f Lenovo Integrated Webcam [R5U877] Bus 002 Device 003: ID 05c6:9204 Qualcomm, Inc. Bus 002 Device 004: ID 17ef:1003 Lenovo Integrated Smart Card Reader Here is the output from guvcview, minus lots of lines describing the available capture formats. It says "unable to start with minimum setup. Please reconnect your camera." guvcview 1.5.3 ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, .... { discrete: width = 1600, height = 1200 } Time interval between frame: 1/15, vid:17ef pid:480f driver:uvcvideo checking format: 1196444237 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! Init video returned -2 trying minimum setup ... video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } ....
vid:17ef pid:480f driver:uvcvideo checking format: 1448695129 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! ERROR: Minimum Setup Failed. Exiting... VIDIOC_REQBUFS - Failed to delete buffers: Invalid argument (errno 22) cleaned allocations - 100% Closing portaudio ...OK Terminated.
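
    The repeated "Device or resource busy" errors suggest something else was holding /dev/video0 at the time. A few standard diagnostics (hedged, but these commands are stock Linux tools):

        $ sudo fuser -v /dev/video0      # list any processes holding the device
        $ lsmod | grep uvcvideo          # confirm the UVC driver is loaded
        $ dmesg | grep -i uvc            # look for driver errors at plug-in time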

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11, however I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL but I can imagine OpenGL has different coordinate systems, and functions that alter how you must implement the code a bit. This is his code example: public Ray GetPickRay() { int mouseX = Mouse.getX(); int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY(); float windowWidth = WORLD.Byte56Game.getWidth(); float windowHeight = WORLD.Byte56Game.getHeight(); //get the mouse position in screenSpace coords double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio; double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2)); double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f))* zoomFactor; screenSpaceX = screenSpaceX * viewRatio; screenSpaceY = screenSpaceY * viewRatio; //Find the far and near camera spaces Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1); Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1); //Unproject the 2D window into 3D to see where in 3D we're actually clicking Matrix4f tmpView = Matrix4f(view); Matrix4f invView = (Matrix4f) tmpView.invert(); Vector4f worldSpaceNear = new Vector4f(); Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear); Vector4f worldSpaceFar = new Vector4f(); Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar); //calculate the ray position and direction Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z); Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z); rayDirection.normalise(); return new Ray(rayPosition, rayDirection); } All rights reserved to him, of course. This is my DirectX 11 code: void GraphicEngine::pickRayVector(float mouseX, float mouseY,XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir) { float PRVecX, PRVecY; float nearPlane = 0.1f; float farPlane = 200.0f; float viewAngle = 0.4 * 3.14; PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2); PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2); XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane,PRVecY * nearPlane, -nearPlane, 1.0f); XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane,PRVecY * farPlane, -farPlane, 1.0f); // Transform 3D Ray from View space to 3D ray in World space XMMATRIX invMat; XMVECTOR matInvDeter; invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); //Inverse of View Space matrix is World space matrix XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat); XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat); pickRayInWorldSpacePos = worldSpaceNear; pickRayInWorldSpaceDir = worldSpaceFar-worldSpaceNear; pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir); } A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window would be (0,0) and the bottom right (800,600) (or whatever resolution you would have). I hadn't used any far or near plane
    before, so I just made some arbitrary numbers up for them. To my understanding it shouldn't matter as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH, I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game. Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices in the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++
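
    Two hedged guesses, sketched below. First, the removed aspectRatio is not game-specific: it undoes the projection's X scaling, so dropping it skews picks horizontally. Second, XMMatrixPerspectiveFovLH implies a left-handed view space that looks down +Z, while the original OpenGL code looks down -Z, so the near/far points likely need positive Z here:

        // Sketch of the adjusted screen-space step (see the assumptions above)
        float aspectRatio = (float)ClientWidth / (float)ClientHeight;
        float tanHalfFov  = tanf(viewAngle / 2.0f);
        PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1.0f) * tanHalfFov * aspectRatio;
        PRVecY = (1.0f - (( 2.0f * mouseY) / ClientHeight)) * tanHalfFov;
        // Left-handed view space: the camera looks down +Z, so no minus signs on z
        XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, nearPlane, 1.0f);
        XMVECTOR cameraSpaceFar  = XMVectorSet(PRVecX * farPlane,  PRVecY * farPlane,  farPlane,  1.0f);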

    Read the article

  • How to ace Skype Interviews

    - by FelixWehmeyer
    Many companies these days opt to include a Skype interview in the recruitment process, as it comes close to a face-to-face interview without the time and costs involved for both the company and the candidate. In some cases during the recruitment process at Oracle you also might be asked to conduct a Skype interview. To help you get started with this, we researched some websites to give you several tips and tricks. What most of the bloggers say about this topic is collected in this article to help you prepare. It is all about Technology The bit that can make a Skype interview more complicated than a face-to-face or phone interview is the fact you are using additional technology. Always check the video and audio capabilities of your computer to make sure they work properly. Be prepared for connections to be limited during the interview. Using a webcam can also be confusing, if you do not have a lot of experience using it. Make sure you look at the camera and not the monitor to avoid the impression you are looking away. Practice If you do not feel comfortable using the camera, do a mock interview with a friend or family member before you have the actual interview. Be aware that facial expressions or reactions come across differently on a monitor, so make sure to practice how you come across during the interview. Good lighting in the room also helps make you look your best for the interviewer. You and your room Dress code, as in any face-to-face interview, is important to think about. Dress the same way as you would for face-to-face interviews and avoid patterns or informal clothing. Another tip is to be aware of your surroundings. Make sure the room you use looks good on camera, make sure it is neat and tidy, and think about how the walls look behind you. Also make sure you do not get distracted during the interview by anyone or anything, as this will directly have an impact on your interview and your ability to focus and concentrate. What is in a name What goes for any account that you share during the recruitment process, either your email address or Skype name, is to make sure it comes across as professional. Try to avoid using nicknames or strange words in your accounts, stick to using a first name – last name or an abbreviation of the same. If you would like to read more about this topic, have a look at the links below which we used as inspiration for this blog article. 7 Deadly Skype Interview Sins is fun to read and gives you some good advice to keep in mind. http://www.inc.com/guides/201103/4-tips-for-conducting-a-job-interview-using-skype.html http://blog.simplyhired.com/2012/05/5-tips-to-a-great-skype-interview.html http://www.cnn.com/2011/LIVING/07/11/skype.interview.tips.cb/index.html http://www.ehow.com/how_5648281_prepare-skype-interview.html

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray: Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 dir = p2 - p1; dir.Normalize(); Ray ray = new Ray(p1, dir); Then get the Y-intercept by using algebra: float t = -ray.Position.Y / ray.Direction.Y; Vector3 p = ray.Position + t * ray.Direction; The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly abusing the fact that floating point calculations are done at 80-bit precision on x86, however before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can do to improve this? EDIT: A little snooping around in .NET Reflector on SlimDX.dll: public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection) { Vector3 coordinate = new Vector3(); Matrix result = new Matrix(); Matrix.Invert(ref worldViewProjection, out result); coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0); coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0); coordinate.Z = (vector.Z - minZ) / (maxZ - minZ); TransformCoordinate(ref coordinate, ref result, out coordinate); return coordinate; } // ... public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result) { Vector3 vector; Vector4 vector2 = new Vector4 { X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41, Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42, Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43 }; float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44)); vector2.W = num; vector.X = vector2.X * num; vector.Y = vector2.Y * num; vector.Z = vector2.Z * num; result = vector; } ...which seems to be a pretty standard method of unprojecting a point from a projection matrix, however this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
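
    One algorithmic change worth trying, sketched here with the SlimDX types from the post (hedged: it assumes the camera's world-space position is available as cameraPosition): build the ray from the known camera position to a single unprojected far point, and do the intercept arithmetic in doubles, so the result never depends on the small difference between two unprojected values.

        // Hedged sketch: cameraPosition is assumed to be known exactly.
        Vector3 far = Vector3.Unproject(new Vector3(x, y, 1f), 0, 0, width, height,
                                        nearPlane, farPlane, viewProj);
        double ox = cameraPosition.X, oy = cameraPosition.Y, oz = cameraPosition.Z;
        double dx = far.X - ox, dy = far.Y - oy, dz = far.Z - oz;
        double t = -oy / dy;                       // intersection with the plane y = 0
        Vector3 p = new Vector3((float)(ox + t * dx), 0f, (float)(oz + t * dz));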

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say, that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve a simple omni-directional (point) light type shading in my WebGL application. I know that there are a lot more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for an educational purpose cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping. So for now, this is how I understand the usage of cubemaps in shadow mapping: Setup a framebuffer (in case of cubemaps - 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down execution, which is nicely described here) and a texture cubemap. Also in WebGL depth components are not well supported, so I need to render it to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to shaders later, or is there a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z) for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection of the current vertex using the light's projection and view matrix. Now I don't know if I should calculate 6 of them, because of the 6 faces of a cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In a fragment shader calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D Texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed by a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) how I should build it?
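
    On the lookup question: a cube map is indexed with a direction vector, and for a point light that direction is simply the light-to-fragment vector in world space, the same vector whose length gets compared against the stored depth. A hedged sketch of the fragment-shader part, reusing the uniforms from the post:

        // Sketch: fetch with the light-to-fragment direction; no UVs and no
        // per-face matrices are needed in the second pass.
        vec3 lightToFrag = vWorldVertex.xyz - uLightPosition;
        float fragDepth  = length(lightToFrag) * linearDepthConstant;
        float mapDepth   = unpack(textureCube(uDepthMapSampler, lightToFrag));
        float shadow     = fragDepth > mapDepth + 0.003 ? 0.5 : 1.0;   // small bias against acne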

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code : struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.light_pos = mul(float4(input.position, 1.0f), world); output.light_pos = mul(output.light_pos, light_view); output.light_pos = mul(output.light_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader, and had the same problem. The problem is evident as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time. Misc info: The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)
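
    For what it is worth, z/w sitting just below 1.0 is what a 0.1/1000 projection produces for most of a scene, so the interpolated value may actually be correct. A hedged sketch of the comparison the pixel shader would then do (shadowMap and shadowSampler are assumed names, not from the post):

        // Hedged sketch only: perspective-divide in the pixel shader, then a biased compare
        Texture2D shadowMap : register(t0);          // assumed resource names
        SamplerState shadowSampler : register(s0);

        float2 shadowUV;
        shadowUV.x =  input.light_pos.x / input.light_pos.w * 0.5f + 0.5f;
        shadowUV.y = -input.light_pos.y / input.light_pos.w * 0.5f + 0.5f;
        float pixelDepth = input.light_pos.z / input.light_pos.w;
        float mapDepth   = shadowMap.Sample(shadowSampler, shadowUV).r;
        float lit        = (pixelDepth - 0.0005f <= mapDepth) ? 1.0f : 0.3f;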

    Read the article

  • Is it possible for a WPF control to have an ActualWidth and ActualHeight if it has never been rendered?

    - by DanM
    I need a Viewport3D for the sole purpose of doing geometric calculations using Petzold.Media3D.ViewportInfo. I do not want to place it in a Window. I'm creating a Viewport3D using the following code: private Viewport3D CreateViewport(MainSettings settings) { var cameraPosition = new Point3D(0, 0, settings.CameraHeight); var cameraLookDirection = new Vector3D(0, 0, -1); var cameraUpDirection = new Vector3D(0, 1, 0); var camera = new PerspectiveCamera { Position = cameraPosition, LookDirection = cameraLookDirection, UpDirection = cameraUpDirection }; var viewport = new Viewport3D { Camera = camera, Width = settings.ViewportWidth, Height = settings.ViewportHeight }; return viewport; } Later, I'm attempting to use this viewport to convert the mouse location to a 3D location using this method: public Point3D? Point2dToPoint3d(Point point) { var range = new LineRange(); var isValid = ViewportInfo.Point2DtoPoint3D(_viewport, point, out range); if (isValid) return range.PointFromZ(0); else return null; } Unfortunately, it's not working. I think the reason is that the ActualWidth and ActualHeight of the viewport are both zero (and these are read-only properties, so I can't set them manually). (I have tested the exact same code with an actual rendered Viewport3D, so I know the issue is not with my converter method.) Any idea how I can get WPF to assign the ActualWidth and ActualHeight based on my Width and Height settings? I tried setting the HorizontalAlignment and VerticalAlignment to Left and Top, respectively, and I also messed with the MinWidth and MinHeight, but none of these properties had any effect on the ActualWidth or ActualHeight.
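
    A standard WPF technique that may help (hedged: untested against ViewportInfo specifically): ActualWidth and ActualHeight are produced by the layout pass, and a layout pass can be forced on a detached element by calling Measure and Arrange manually:

        // Force a layout pass on the detached viewport (System.Windows Size/Rect)
        var size = new Size(settings.ViewportWidth, settings.ViewportHeight);
        viewport.Measure(size);
        viewport.Arrange(new Rect(size));
        // viewport.ActualWidth and ActualHeight are now non-zero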

    Read the article

  • Select videos using UIImagePickerController in 2G/3G

    - by Raj
    Hi, I am facing a problem wherein I cannot select videos from the photo album on an iPhone 2G/3G device. The default photos application does show videos and is capable of playing them, which in turn means that UIImagePickerController should clearly be capable of showing videos in the photo album and selecting them. I have coded this to determine whether the device is capable of snapping a photo, recording video, selecting photos and selecting videos: // Check if camera and video recording are available: [self setCameraAvailable:NO]; [self setVideoRecordingAvailable:NO]; [self setPhotoSelectionAvailable:NO]; [self setVideoSelectionAvailable:NO]; // For live mode: NSArray *availableTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera]; NSLog(@"Available types for source as camera = %@", availableTypes); if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) { if ([availableTypes containsObject:(NSString*)kUTTypeMovie]) [self setVideoRecordingAvailable:YES]; if ([availableTypes containsObject:(NSString*)kUTTypeImage]) [self setCameraAvailable:YES]; } // For photo library mode: availableTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypePhotoLibrary]; NSLog(@"Available types for source as photo library = %@", availableTypes); if ([availableTypes containsObject:(NSString*)kUTTypeImage]) [self setPhotoSelectionAvailable:YES]; if ([availableTypes containsObject:(NSString*)kUTTypeMovie]) [self setVideoSelectionAvailable:YES]; The resulting logs for the 3G device are as follows: 2010-05-03 19:09:09.623 xyz [348:207] Available types for source as camera = ( "public.image" ) 2010-05-03 19:09:09.643 xyz [348:207] Available types for source as photo library = ( "public.image" ) As the logs state, for the photo library the string equivalent of kUTTypeMovie is not available and hence the UIImagePickerController does not show up (or rather throws an exception if we set the source types array which includes kUTTypeMovie) the movie files in the photo library. I haven't tested on 3GS but, with reference to other threads, I am sure that this problem does not exist there. I have built the app for both 3.0 (base SDK) and 3.1 but with the same results. This issue is already discussed in the thread: http://www.iphonedevsdk.com/forum/iphone-sdk-development/36197-uiimagepickercontroller-does-not-show-movies-albums.html But it does not seem to host a solution. Any solutions to this problem? Thanks and Regards, Raj Pawan

    Read the article

  • How to pass SOAP headers into python SUDS that are not defined in WSDL file

    - by chrissygormley
    Hello, I have a camera on my network, I am trying to connect to it with suds but suds doesn't send all the information needed. I need to put extra SOAP headers not defined in the WSDL file so the camera can understand the message. All the headers are contained in a SOAP envelope and the suds command must be in the body of the message. I have checked the suds website and it says to pass in the headers like so: from suds.sax.element import Element client = Client(url) ssnns = ('ssn', 'http://namespaces/sessionid') ssn = Element('SessionID', ns=ssnns).setText('123') client.set_options(soapheaders=ssn) result = client.service.addPerson(person) Now I am not sure how I would implement this, say for example I have the below header: <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope" xmlns:SOAP-ENC="http://www.w3.org/2003/05/soap-encoding" xmlns:p1="http://www.website.org/ver10/p/wsdl"> <SOAP-ENV:Header> Using this or a similar example does anyone know how I would get this passed into the SOAP command so my camera understands? Thanks
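
    Mapping the suds example from the docs onto the namespace in that envelope would look something like the sketch below (hedged: 'SessionID', its value and the service call are placeholders; substitute whatever header elements the camera actually expects):

        from suds.client import Client
        from suds.sax.element import Element

        client = Client(url)
        # prefix/namespace pair taken from the envelope above
        p1ns = ('p1', 'http://www.website.org/ver10/p/wsdl')
        header = Element('SessionID', ns=p1ns).setText('123')  # placeholder element
        client.set_options(soapheaders=header)
        result = client.service.SomeCameraMethod()             # hypothetical call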

    Read the article

  • Resizing and saving an image in WinMobile and .NET CF throws OutOfMemoryException

    - by devguy
    I have a WinMobile app which allows the user to snap a photo with the camera, and then use it for various things. The photo can be snapped at 1600x1200, 800x600 or 640x480, but it must always be resized to 400px for the longest side (the other is proportional of course). Here's the code: private void LoadImage(string path) { Image tmpPhoto = new Bitmap(path); // calculate new bitmap size... double width = ... double height = ... // draw new bitmap Image photo = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb); using (Graphics g = Graphics.FromImage(photo)) { g.FillRectangle(new SolidBrush(Color.White), new Rectangle(0, 0, photo.Width, photo.Height)); int srcX = (int)((double)(tmpPhoto.Width - width) / 2d); int srcY = (int)((double)(tmpPhoto.Height - height) / 2d); g.DrawImage(tmpPhoto, new Rectangle(0, 0, photo.Width, photo.Height), new Rectangle(srcX, srcY, photo.Width, photo.Height), GraphicsUnit.Pixel); } tmpPhoto.Dispose(); // save new image and dispose photo.Save(Path.Combine(config.TempPath, config.TempPhotoFileName), System.Drawing.Imaging.ImageFormat.Jpeg); photo.Dispose(); } Now the problem is that the app breaks in the photo.Save call, with an OutOfMemoryException. And I don't know why, since I dispose the tmpPhoto (with the original photo from the camera) as soon as I can, and I also dispose the Graphics obj. Why does this happen? It seems impossible to me that one can't take a photo with the camera and resize/save it without making it crash :( Should I resort to C++ for such a simple thing? Thanks.
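
    On the Compact Framework the JPEG encoder needs another large native buffer at Save time, so one commonly suggested (and hedged) mitigation is to make sure the camera-sized source bitmap is really gone, and to nudge the GC, before encoding:

        // Hedged sketch: deterministic cleanup before encoding (same flow as above)
        tmpPhoto.Dispose();
        tmpPhoto = null;
        GC.Collect();   // heavy-handed, but often needed on .NET CF before big allocations
        photo.Save(Path.Combine(config.TempPath, config.TempPhotoFileName),
                   System.Drawing.Imaging.ImageFormat.Jpeg);
        photo.Dispose();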

    Read the article

  • error inserting text in papervision typography class

    - by safeDomain
    Hi everyone, I have encountered a small problem. I want to make a 3D RTL text animation with Papervision. This code generates the following problem: [Fault] exception, information=TypeError: Error #1009: Cannot access a property or method of a null object reference. But when using an English text this error is not generated. My code: package { import flash.display.Sprite; import flash.events.Event; import org.papervision3d.scenes.Scene3D import org.papervision3d.view.Viewport3D import org.papervision3d.cameras.Camera3D import org.papervision3d.render.BasicRenderEngine import org.papervision3d.typography.Font3D import org.papervision3d.typography.fonts.HelveticaBold import org.papervision3d.typography.Text3D import org.papervision3d.materials.special.Letter3DMaterial import flash.text.engine.FontDescription import flash.text.engine.ElementFormat import flash.text.engine.TextElement import flash.text.engine.TextBlock import flash.text.engine.TextLine /** * ... * @author vahid */ public class Main extends Sprite { private var fd:FontDescription private var ef:ElementFormat private var te:TextElement protected var st:String; private var scene:Scene3D private var view:Viewport3D private var camera:Camera3D private var render:BasicRenderEngine private var vpWidth:Number = stage.stageWidth; private var vpHeight:Number = stage.stageHeight; private var text3d:Text3D private var font3d:Font3D //private var font:HelveticaBold private var textMaterial:Letter3DMaterial private var text:String public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); // rtl block fd = new FontDescription () ef = new ElementFormat (fd) te = new TextElement ("?????? ?????? ???? ?????? ?? papervision", ef) text = te.text //3d block scene = new Scene3D () view = new Viewport3D (vpWidth,vpHeight,true,true,false,false) camera = new Camera3D () render = new BasicRenderEngine() addChild (view) this.addEventListener (Event.ENTER_FRAME , renderThis) textMaterial = new Letter3DMaterial(0xFF0000,1) font3d = new HelveticaBold() text3d = new Text3D (text, font3d, textMaterial) scene.addChild (text3d) } protected function renderThis(e:Event):void { text3d.rotationY +=5 render.renderScene(scene,camera,view) } } } I am using FlashDevelop. Please help me, thanks.

    Read the article

  • Certain transformations in Open Inventor(Coin3D)

    - by Marc
    Hi, I am quite new to Open Inventor (Coin3D) and have the following problem: I have a SoSelection holding a root node (also SoSeparator). And the root node holds a number of SoSeparator nodes. Each of these SoSeparator nodes holds a SoTransform node and a SoCube node. When I select one cube node I want all other cubes within a certain distance to the selected cube to arrange in a circle around the selected cube. (Moreover, all of the cubes should then be on a plane.) Additional information: my cubes are always oriented in the camera direction with cubeTransform_->rotation.connectFrom(&camera_->orientation). Assuming the selected cube is the center of the circle, how do I translate the other cubes in a circle on a plane (perpendicular to the vector between the selected cube and the camera)? Especially, how do I find coordinates on the plane on which the circle should lie which have a certain distance from the axis (from center cube to camera). What I already did is search for all cubes within a certain distance as soon as one cube is selected. As a result I already have the required separators (which are holding the according SoTransforms and SoCubes) in a SoPathList. Now I want to arrange the cubes by modifying the according SoTransform translation values. Regards, Mark
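
    The geometry itself can be done with plain Coin3D vector math: build an orthonormal basis for the plane perpendicular to the centre-cube-to-camera axis, then step around the circle. A hedged sketch (variable names are assumptions):

        // Place cube i of n on a circle of radius r around the selected cube
        SbVec3f axis = cameraPos - centerPos;           // centre cube -> camera
        axis.normalize();
        SbVec3f up(0.0f, 1.0f, 0.0f);                   // any vector not parallel to axis
        SbVec3f u = up.cross(axis); u.normalize();      // first in-plane direction
        SbVec3f v = axis.cross(u);                      // second one, already unit length
        float a = 2.0f * float(M_PI) * i / n;
        SbVec3f p = centerPos + r * (cosf(a) * u + sinf(a) * v);
        cubeTransform->translation.setValue(p);         // the cube's SoTransform node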

    Read the article

  • Alternative to as3isolib?

    - by tedw4rd
    Hi everyone, I've been working on a Flash game that involves an isometric space. I've been using as3isolib for a while now, and I'm less than impressed with how easy it is to use. Whether I'm approaching it the wrong way or it's just not that great to use is a question for another post. Anyways, I've been thinking of a different way to approach the problem of isometric positions, and I think I've got an idea that might work. Essentially, each object that is to be rendered to the iso-space maintains a 3-coordinate position. Those items are then registered with a camera that projects that 3-coordinate position to a 2-coordinate point on the screen according to the math on this Wikipedia article. Then, the MovieClip is added to the stage (or to the camera's MovieClip, perhaps) at that point, and at a child index of the point's y-value. That way, I figure objects that are closer to the camera will be "above" the objects further away, and will get rendered over them. So my question, then, is two-fold: Do you think this idea will work the way I think it will? Are there any existing 3D matrix/vector packages that I should look at? I know there's a Matrix3 class in Flex 3, but we're not using Flex for this game. Thanks!
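
    For reference, the projection step being described is only a few lines; a hedged AS3 sketch of a classic 2:1-style isometric camera (the angle and the depth rule are illustrative choices, not the only ones):

        // Plain AS3 sketch; needs: import flash.geom.Point;
        function project(x:Number, y:Number, z:Number):Point {
            var sx:Number = (x - z) * Math.cos(Math.PI / 6);      // 30-degree axes
            var sy:Number = (x + z) * Math.sin(Math.PI / 6) - y;  // y is "up" in world space
            return new Point(sx, sy);
        }
        // Child-index sorting key: larger means nearer the viewer for this camera
        function isoDepth(x:Number, z:Number):Number {
            return x + z;
        }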

    Read the article

  • Loading 2 Singletons With Dependencies when an app is opened (appDelegate / appDidBecomeActive) iPhone SDK

    - by taber
    I'm trying to load two standard-issue style singletons: http://cocoawithlove.com/2008/11/singletons-appdelegates-and-top-level.html when my iPhone app is loaded. Here's my code: - (void) applicationDidFinishLaunching:(UIApplication *)application { // first, restore user prefs [AppState loadState]; // then, initialize the camera [[CameraModule sharedCameraModule] initCamera]; } My "camera module" has code that checks a property of the AppState singleton. But I think what's happening is a race condition where the camera module singleton is trying to access the AppState property while it's in the middle of being initialized (so the property is nil, and it's re-initializing AppState). I'd really like to keep these separate, instead of just throwing one (or both) into the App Delegate. Has anyone else seen something like this? What kind of workaround did you use, or what would you suggest? Thanks in advance! Here's the loadState method: + (void)loadState { @synchronized([AppState class]) { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; NSString *file = [documentsDirectory stringByAppendingPathComponent:@"prefs.archive"]; Boolean saveFileExists = [[NSFileManager defaultManager] fileExistsAtPath:file]; if(saveFileExists) { sharedAppState = [[NSKeyedUnarchiver unarchiveObjectWithFile:file] retain]; } else { [AppState sharedAppState]; } } }
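
    One hedged restructuring that removes the ordering dependency entirely: fold the disk load into the accessor itself, so no caller can ever observe a half-initialised shared instance, and it no longer matters which singleton asks first:

        // Hedged sketch: lazy, self-loading accessor (replaces the plain one)
        + (AppState *)sharedAppState {
            @synchronized([AppState class]) {
                if (sharedAppState == nil) {
                    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
                    NSString *file = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"prefs.archive"];
                    sharedAppState = [[NSKeyedUnarchiver unarchiveObjectWithFile:file] retain]; // nil if no save file
                    if (sharedAppState == nil) {
                        sharedAppState = [[AppState alloc] init];
                    }
                }
            }
            return sharedAppState;
        }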

    Read the article

  • Large scale perspective lights casting shadow maps, in the most optimized way?

    - by meds
    I'm using projected texture shadows coupled with lights to light a large sports field at night. To do this I'm using shadow cameras which I place in the position of the stadium's lights and shine down on the field at the appropriate angle. The problem with this method is that the textures into which I render the shadows have to be very large so they can keep sufficient detail over the entire stadium. This is incredibly under-optimized, since at any given point the player's attention is only directed at a small portion of the field, meaning large chunks of the texture just take up space with no benefit. However, the lights need to be perspective-based as they come from actual directional lights hovering over the stadium. The way to solve this, I believe, is to figure out where within the shadow camera's view it would be best to place the actual camera to render from, and adjust the view matrix according to that position. So my question is, how can I calculate the optimal position to put the shadow camera, and calculate its view matrix, such that the shadows it projects will appear to be coming from the light source rather than the camera?
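
    A hedged sketch of that idea with XNA-style math types (an assumption; any engine with equivalent helpers works): keep the light's position as the centre of projection, so the shadows stay geometrically consistent with the light, aim the view at the watched region, and shrink the field of view so the whole texel budget covers just that region:

        // focusPoint and regionRadius are assumed inputs (e.g. around the ball)
        Matrix shadowView = Matrix.CreateLookAt(lightPosition, focusPoint, Vector3.Up);
        float distance    = Vector3.Distance(lightPosition, focusPoint);
        float fov         = 2f * (float)Math.Atan(regionRadius / distance);
        Matrix shadowProj = Matrix.CreatePerspectiveFieldOfView(fov, 1f, nearPlane, farPlane);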

    Read the article

  • determining if a value is in range with the 0=360 degree problem

    - by Raven
    Hi, I am making a piece of code for a DirectX app. Its purpose is to not show faces that are not visible. Normally this would just use the Z-buffer, but I'm doing many moves and rotations of the mesh, so I would like to avoid them and save computing power. I will describe this on a cube. You are looking from the front so you see just one face and you don't need to rotate the 5 that are left. If you had one side of a cube made from 100*100 meshes, it would be great to not have to turn around 50k meshes that you really don't need. So I have stored the X,Y,Z rotation of the camera (the Z rotation I'm not using), and also the X,Y,Z rotation of the faces. In this simplified cube I would see faces that make this statement true: cRot //camera rotation in degrees oRot //face rotation in degrees if(oRot.x > cRot.x-90 && oRot.x < cRot.x+90 && oRot.y > cRot.y-90 && oRot.y < cRot.y+90) But there comes a problem. If I rotate around, the camera can get to a value of 330, for example. In this state, I would see the front and right sides of the cube. The right side has rotation 270, so that's all right in the IF statement. The problem is with the 0 rotation of the front face, which is also 360 degrees. So my question is how to make this statement work, because when I use modulo, it will fail for that right side, and in this way it won't work for 0=360.
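
    The usual fix is to compare the wrapped difference of the two angles instead of testing raw ranges; a small hedged sketch:

        #include <cmath>

        // Smallest signed difference between two angles in degrees, in [-180, 180)
        float AngleDiff(float a, float b)
        {
            return fmodf(a - b + 540.0f, 360.0f) - 180.0f;  // valid for inputs in [0, 360)
        }

        // A face is a draw candidate when both axes are within 90 degrees:
        bool facing = fabsf(AngleDiff(oRot.x, cRot.x)) < 90.0f
                   && fabsf(AngleDiff(oRot.y, cRot.y)) < 90.0f;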

    Read the article

  • How to Transfer All Your Information to a New PS3: Video Tutorial

    - by Justin Garrison
    We have already shown you the steps needed to transfer all your information to a new PS3, but for those of you who would like to see the whole process from start to finish we put together this video. If you need any clarification on the steps involved don't forget to check out the original post with more details.

    Read the article

  • The Open Road Awaits [Wallpaper]

    - by Asian Angel
    ROAD TO PARADISE [DesktopNexus]

    Read the article

  • Low process priority problem

    - by Svepe
    I have just set up Ubuntu 12.04 64-bit with the Cinnamon desktop and the 3.5.0-030500 kernel on my new laptop with an IvyBridge i7. I decided to test its performance by running a single-threaded CPU-hungry program that I often use for camera calibration. Unfortunately, it ended up running much slower than I ever expected. After some investigation it turned out that the program priority is automatically changed from normal to low, which makes the program even slower. I have also noticed that all user programs such as Skype and Firefox are set to low priority. I tried manually resetting the priority to normal or even very high using the "renice" command, which works temporarily until the kernel scheduler (I guess) resets the priority to low. Is this normal behaviour, and how can I overcome the problem of the execution slowing down? P.S. I also tried with the 3.2 kernel, but the problem is still present.
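
    A couple of hedged diagnostics before blaming the kernel: check the nice value and the scheduling class together, since a task moved to SCHED_IDLE will crawl regardless of what renice reports:

        $ ps -eo pid,ni,cls,comm | grep -i firefox   # NI = nice value, CLS = scheduling class
        $ chrt -p <pid>                              # prints e.g. SCHED_OTHER or SCHED_IDLE
        $ sudo chrt --other -p 0 <pid>               # force a task back to the normal class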

    Read the article

  • eGalax Touchscreen not working Jolicloud 1.2

    - by craigsmith86
    I have an eGalax touchscreen on an Acer Aspire One running Jolicloud 1.2. I have had success getting this touchscreen to work correctly on Ubuntu 10.04 NBR, 11.04, Kubuntu 12.04 and Puppy Linux, so I am pretty happy with how it SHOULD be done. However, I cannot get it to calibrate correctly or remember calibration settings. I have installed the eGalax utility (all available versions) and it does not recognize the screen. xinput_calibrator works but the config cannot be made permanent. Problems I have identified: Joli doesn't have an xorg.conf file and does not use xorg.conf.d for evdev configuration; setting configs through HAL doesn't work anymore. The best I can get is a poorly adjusted touchscreen with a reversed Y axis. Any help greatly appreciated
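
    If Jolicloud ignores the stock xorg.conf.d, one hedged workaround is to create the directory the X server actually scans and drop an InputClass snippet there with the numbers xinput_calibrator prints (the path and the MatchProduct string are assumptions; both can be checked in /var/log/Xorg.0.log):

        # hedged sketch, e.g. /usr/share/X11/xorg.conf.d/99-calibration.conf
        Section "InputClass"
            Identifier    "eGalax calibration"
            MatchProduct  "eGalax Inc. USB TouchController"
            Option        "Calibration" "xmin xmax ymin ymax"   # numbers from xinput_calibrator
            Option        "InvertY"     "1"                     # compensates the reversed Y axis
            Option        "SwapAxes"    "0"
        EndSection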

    Read the article

  • How to calibrate Toshiba LED screen - the colours are over saturated

    - by user94369
    I have purchased a new Toshiba Satellite C850-A812 laptop with an ATI Radeon HD 75xx series GPU. When I first installed Ubuntu 12.04.1, the open-source driver worked like a charm with the colors; I mean the colors were so vivid. However, after installing the ATI proprietary driver from the repos, the colors went too "whity" and too bright. I even installed the updated driver, and went further to download the latest driver from ATI and install it, in vain. I tried to use the Ubuntu built-in calibration tool, but it keeps asking me to enter the ICC profile before I proceed to calibrate. I played with the Catalyst Control Center and tried everything possible, but still the colors are way overexposed. Please advise :)
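
    Two quick, hedged experiments to separate a gamma problem from a missing ICC profile (the output name varies per machine; run xrandr first to see yours):

        $ xgamma -gamma 0.85                        # quick, temporary whole-screen test
        $ xrandr --output LVDS1 --brightness 0.9    # per-output; LVDS1 is a guess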

    Read the article

  • making "xwacom set" changes permanent

    - by Philippe
    On my ThinkPad X220 Tablet the touch-finger works flawlessly but the pen is terribly miscalibrated. The calibration tool in System settings - Wacom tablet does not work. Instead, whenever I want to use the pen I first need to sudo xsetwacom set 'Wacom ISDv4 E6 Pen stylus' Area 0 0 27760 15690 These changes do not remain permanent. That is, after every reboot they are gone. How can I make the change permanent? I'm not looking for a startup script; I'd like to set the right area once and for all. Any idea?
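
    The xf86-input-wacom driver accepts the same Area as a static option, so an InputClass snippet survives reboots without any startup script (hedged: the exact config path varies by distribution):

        # e.g. /etc/X11/xorg.conf.d/99-wacom-area.conf (path varies by distribution)
        Section "InputClass"
            Identifier   "Wacom pen area"
            MatchDriver  "wacom"
            MatchProduct "Wacom ISDv4 E6 Pen stylus"
            Option       "Area" "0 0 27760 15690"
        EndSection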

    Read the article
