Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 20 of 84

  • GLSL billboard move center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take in a quad and rotate it so it always points toward the screen. I am using this vertex shader:

        void main() {
            vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)) +
                          (MV * vec4(vertexPosition.x * 1.0 * bufferMatrix[0][0],
                                     vertexPosition.y * 1.0 * bufferMatrix[1][1],
                                     vertexPosition.z * 1.0 * bufferMatrix[2][2],
                                     0.0));
            UV = UVOffset + vertexUV * UVScale;
            gl_Position = tmpPos;
        }

    bufferMatrix is the model matrix; it is an attribute to support instanced drawing. The problem is best explained through pictures (omitted here): from the camera's start position the text looks right, but looking in from 45 degrees to the right, obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center. How would I do this? What I have been trying so far is:

        mat4 translation = mat4(1.0);
        translation = glm::translate(translation, vec3(pos) * 1.f * 2.f);
        translation = glm::scale(translation, vec3(scale, 1.f));
        translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f));

    where translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix...? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets everything up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know if the approach of offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.
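
    One possible way to get a shared pivot without per-frame CPU work is to give every instance the pivot as an extra attribute and billboard around it in the shader. A minimal GLSL sketch; the anchorPoint attribute and the standalone projection matrix P are assumptions, not part of the original code:

        attribute vec3 anchorPoint;   // assumed: shared pivot, in model space
        uniform mat4 P;               // assumed: the projection matrix on its own

        void main() {
            // Transform the shared pivot all the way into eye space once...
            vec4 anchorEye = MV * bufferMatrix * vec4(anchorPoint, 1.0);
            // ...and express this vertex as a model-space offset from that pivot.
            vec3 offset = vec3(bufferMatrix * vec4(vertexPosition, 1.0)) -
                          vec3(bufferMatrix * vec4(anchorPoint, 1.0));
            // Adding the offset in eye space keeps the whole block rigid, so
            // every quad rotates around the shared anchor, not its own center.
            gl_Position = P * (anchorEye + vec4(offset, 0.0));
            UV = UVOffset + vertexUV * UVScale;
        }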

    Read the article

  • The practical cost of swapping effects

    - by sebf
    Hello, I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process, and put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would:

        1. Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
        2. Draw all the objects to a render target in memory.
        3. Read the color back from the target and use it to look up the selected object.

    What portion of the total time taken to complete that process would be spent swapping the shaders? My instincts say that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
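
    For context, the usual mitigation is to sort draw calls so each effect is applied per group rather than per mesh. A hedged XNA 4.0-style sketch; renderables, SetEffectParameters, and Draw are illustrative names, not from the question:

        // using System.Linq; // for GroupBy
        // Group renderables by effect so the shader program is switched once
        // per group instead of once per mesh.
        foreach (var group in renderables.GroupBy(r => r.Effect))
        {
            foreach (var renderable in group)
            {
                renderable.SetEffectParameters();              // per-object uniforms
                group.Key.CurrentTechnique.Passes[0].Apply();  // re-apply, same program
                renderable.Draw();
            }
        }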

    Read the article

  • HLSL How to flip geometry horizontally

    - by cubrman
    I want to flip my asymmetric 3D model horizontally in the vertex shader, across an arbitrary plane parallel to the YZ plane. This should switch everything on the model from the left-hand side to the right-hand side (like flipping it in Photoshop). Doing it in the pixel shader would have a huge computational cost (an extra render target, more fullscreen samples...), so it must be done in the vertex shader. Once more: this is NOT reflection, I need to flip THE WHOLE MODEL. I thought I could simply do the following:

        1. Turn off culling.
        2. Run the following code in the vertex shader:

        input.Position = mul(input.Position, World);
        // World[3][0] holds the x value of the model's pivot in world space.
        if (input.Position.x <= World[3][0])
            input.Position.x += World[3][0] - input.Position.x;
        else
            input.Position.x -= input.Position.x - World[3][0];

    ... but the model is never drawn. Where am I wrong? I presume that messes up the index buffer. Can something be done about it? P.S. It's INSANELY HARD to format code here. Thanks to Panda I found my problem. SOLUTION:

        // Do this before anything else in the vertex shader.
        Position.x *= -1; // to mirror across the object's YZ plane
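
    For completeness, a hedged sketch of that fix in context; WorldViewProj is an assumed uniform name:

        float4 FlipVS(float4 position : POSITION) : POSITION
        {
            position.x *= -1;                    // mirror in object space first
            return mul(position, WorldViewProj); // then the usual transform chain
        }

    Mirroring reverses triangle winding, so the mirrored model also needs the opposite cull mode (in XNA, CullClockwiseFace instead of the default CullCounterClockwiseFace) or culling disabled, which is consistent with the "model is never drawn" symptom above.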

    Read the article

  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup such as lights. Please have a look at my vertex shader code. Light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup:

        - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera {
            aCamera.aspect = (float)width / (float)height;
            float aspect = aCamera.aspect;
            float far = aCamera.far;
            float near = aCamera.near;
            float vFOV = aCamera.fieldOfView;
            float top = near * tanf(M_PI * vFOV / 360.0f);
            float bottom = -top;
            float right = aspect * top;
            float left = -right;
            // projection
            GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far));
            // identity modelview
            GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity);
            // switch to left-handed coord system (forward = z+)
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1));
            // transform camera
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation)));
            GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z);
        }

        - (GLKMatrix4)modelviewMatrix {
            return GLKMatrixStackGetMatrix4(modelviewStack);
        }

        - (GLKMatrix4)projectionMatrix {
            return GLKMatrixStackGetMatrix4(projectionStack);
        }

        - (GLKMatrix4)modelviewProjectionMatrix {
            return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]);
        }

        - (GLKMatrix3)normalMatrix {
            return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL);
        }

    After that, I save the lightMatrix like this:

        [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera];
        self.lightMatrix = [self.renderer modelviewProjectionMatrix];

    And just before I render a 3D entity of the scene graph, I set up the light config for its shader with the lightMatrix, like this:

        - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix {
            N3DLight transformedLight = N3DLightMakeDisabled();
            if (N3DLightIsDirectional(light)) {
                GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v);
                direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right!
                transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular);
            } else {
                ...
            }
            return transformedLight;
        }

    You see the line where I negate the direction!? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me to get rid of the hack. I'm scared that this has something to do with my switch to a left-handed coord system. My vertex shader looks like this:

        attribute highp vec4 inPosition;
        attribute lowp vec4 inNormal;
        ...
        uniform highp mat4 MVP;
        uniform highp mat4 MV;
        uniform lowp mat3 N;
        uniform lowp vec4 constantColor;
        uniform lowp vec4 ambient;
        uniform lowp vec4 light0Position;
        uniform lowp vec4 light0Diffuse;
        uniform lowp vec4 light0Specular;
        varying lowp vec4 vColor;
        varying lowp vec3 vTexCoord0;

        vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) {
            float NdotL = max(dot(normal, dir), 0.0);
            return NdotL * diffuse;
        }
        ...

        vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) {
            if (pos.w == 0.0) {
                // directional light
                return calcDirectional(normalize(pos.xyz), diffuse, specular, normal);
            } else {
                ...
            }
        }

        void main(void) {
            // position
            highp vec4 position = MVP * inPosition;
            gl_Position = position;
            // normal
            lowp vec3 normal = inNormal.xyz / inNormal.w;
            normal = N * normal;
            normal = normalize(normal);
            // colors
            vColor = constantColor * ambient;
            // add lights
            vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal);
            ...
        }
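
    One plausible lead, offered as an assumption rather than a confirmed fix: both the lightMatrix and the normal matrix above are built from the modelview-projection matrix, but vectors that must end up in eye space should only pass through the modelview part, and a direction should be transformed with w = 0 so translation is ignored. A sketch:

        // Assumption: use the modelview matrix, not modelview-projection,
        // for anything that has to land in eye space.
        self.lightMatrix = [self.renderer modelviewMatrix];

        // Transform the directional light with w = 0 so the translation
        // column of the modelview matrix has no effect on it.
        GLKVector4 dir4 = GLKVector4Make(light.position.x, light.position.y,
                                         light.position.z, 0.0f);
        GLKVector3 direction = GLKVector3MakeWithArray(
            GLKMatrix4MultiplyVector4(matrix, dir4).v);

        // Likewise, the normal matrix would be derived from MV, not MVP:
        // GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewMatrix]), NULL);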

    Read the article

  • Handling cameras in a large scale game engine

    - by Hannesh
    What is the correct, or most elegant, way to manage cameras in large game engines? Or should I ask: how does everybody else do it? The methods I can think of are:

        1. Binding cameras straight to the engine: if someone needs to render something, they bind their own camera to the graphics engine, which stays in use until another camera is bound.
        2. A camera stack: a small task can push its own camera onto the stack, and pop it off at the end to return to the "main" camera.
        3. Attaching a camera to a shader: every shader has exactly one camera bound to it, and that camera is set by the engine whenever the shader is in use. This would allow me to implement a bunch of optimizations on the engine side.

    Are there other ways to do it?
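
    A minimal C++ sketch of the second option, with illustrative names only:

        #include <stack>

        struct Camera { /* view and projection matrices, etc. */ };

        class Renderer {
            std::stack<const Camera*> cameras;  // top = camera used for draws
        public:
            void pushCamera(const Camera* c) { cameras.push(c); }
            void popCamera()                 { cameras.pop(); }
            const Camera* activeCamera() const {
                return cameras.empty() ? nullptr : cameras.top();
            }
        };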

    Read the article

  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I am currently trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures (omitted here): the area to mask (in white); the red sprite that needs to be cropped; and the final result. Now, I'm aware that in XNA you can do two things to accomplish this:

        1. Use the stencil buffer.
        2. Use a pixel shader.

    I have tried a pixel shader, which essentially did this:

        float4 main(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 tex = tex2D(BaseTexture, texCoord);
            float4 bitMask = tex2D(MaskTexture, texCoord);
            if (bitMask.a > 0)
            {
                return float4(tex.r, tex.g, tex.b, tex.a);
            }
            else
            {
                return float4(0, 0, 0, 0);
            }
        }

    This seems to crop the images (albeit not correctly once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take their positions into account? Alternatively, I've read about using the stencil buffer, but most of the samples seem to hinge on using a render target, which I really don't want to do. (I'm already using 3 or 4 for the rest of the game, and adding another one on top of that seems overkill.) The only tutorial I've found that doesn't use render targets is one from Shawn Hargreaves' blog, but that one is for XNA 3.1 and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my onscreen coordinates (something like 500, 500) to be between 0 and 1 for the shader. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!
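
    One way to make the crop follow the sprite is to sample the mask in screen space instead of sprite space. A hedged HLSL sketch; ViewportSize, SpritePosition, and SpriteSize are assumed effect parameters set from game code each frame, not part of the original shader:

        float2 ViewportSize;   // e.g. (800, 600), in pixels
        float2 SpritePosition; // sprite's top-left on screen, in pixels
        float2 SpriteSize;     // sprite's on-screen size, in pixels

        float4 main(float2 texCoord : TEXCOORD0) : COLOR0
        {
            // Rebuild this pixel's screen position from the sprite-local UV,
            // then normalize into mask coordinates (mask covers the screen).
            float2 screenPos = SpritePosition + texCoord * SpriteSize;
            float2 maskCoord = screenPos / ViewportSize;

            float4 tex = tex2D(BaseTexture, texCoord);
            float4 bitMask = tex2D(MaskTexture, maskCoord);
            return bitMask.a > 0 ? tex : float4(0, 0, 0, 0);
        }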

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function). I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions.

    Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define some fixed number of lights in a shader, right? So if I want to stick with 8, should I write my general-purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use?

    Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied there, or read when needed?

    My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix, apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex transformations. Is this approach sound?

    Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to shaders. Thanks!
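
    A sketch of the uniform-count approach described above; the names are illustrative, and the light data is assumed to be in eye space. Note that GLSL ES 2.0 requires constant loop bounds, hence the fixed loop with an early break:

        #define MAX_LIGHTS 8

        uniform int  numLights;                   // how many entries are valid
        uniform vec3 lightPositions[MAX_LIGHTS];  // assumed names, eye space
        uniform vec4 lightDiffuse[MAX_LIGHTS];

        vec4 accumulateLights(vec3 position, vec3 normal)
        {
            vec4 color = vec4(0.0);
            // Constant bound for ES 2.0; skip the unused slots at runtime.
            for (int i = 0; i < MAX_LIGHTS; ++i) {
                if (i >= numLights) break;
                vec3 toLight = normalize(lightPositions[i] - position);
                color += max(dot(normal, toLight), 0.0) * lightDiffuse[i];
            }
            return color;
        }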

    Read the article

  • FBO rendering different result between Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. This proceeds as follows:

        1. Bind texture A to the framebuffer.
        2. Draw the balls.
        3. Bind texture B to the framebuffer.
        4. Draw texture A using the fade shader on a fullscreen quad.
        5. Bind the screen to the framebuffer.
        6. Draw texture B using the normal textured-quad shader.

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. The fade shader:

        private final String fragmentShaderCode =
              "precision highp float;"
            + "uniform sampler2D u_Texture;"
            + "varying vec2 v_TexCoordinate;"
            + "vec4 color;"
            + "void main(void)"
            + "{"
            + "    color = texture2D(u_Texture, v_TexCoordinate);"
            + "    color.a *= 0.8;"
            + "    gl_FragColor = color;"
            + "}";

    This works fine on the Samsung Galaxy S3/Note 2, but causes a strange effect and doesn't work on the Galaxy S2 or Note 1 (see the pictures accompanying the original question). Can anyone explain the difference?

    Read the article

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to create them and see the visual result before I put them into my game, so I can determine whether a problem lies with my game or with the shader I created. I've looked at some tools like RenderMonkey and OpenGL Shader Designer. From what I recall of RenderMonkey, it had a way to define your own attributes (now declared with "in" for vertex shaders as of GLSL 330) fairly easily, though I can't remember to what extent. Shader Designer requires a plugin that I didn't even bother looking at creating, because it's an external process and plugin. Are there any tools out there that support a scripting language, where I could easily provide specific input such as float movement = sin(elapsedTime()); and then declare in float movement; in the vertex shader? It'd be cool if anyone could share how they develop shaders, or whether they just code away and then plug the result into their game, hoping to get what they wanted.

    Read the article

  • Search BitmapData object for matching pixel values from another Bitmap.

    - by Cos
    Using ActionScript 3, is there a way to search one bitmap for the coordinates of matching pixels from another bitmap? http://dl.dropbox.com/u/1914/wired.png Somehow you would have to loop through the bigger bitmap to find the pixel range that matches, and return those coordinates. For example, the bitmap with the "E" is 250 pixels over and 14 pixels down in the bigger bitmap. I haven't been able to come up with the solution on my own. Thanks.
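
    A brute-force ActionScript 3 sketch of the idea (the function name is illustrative; BitmapData.getPixel returns the RGB value at a coordinate): slide the small bitmap over the big one and compare pixels, bailing out of a candidate position on the first mismatch.

        import flash.display.BitmapData;
        import flash.geom.Point;

        function findSubImage(big:BitmapData, small:BitmapData):Point {
            for (var x:int = 0; x <= big.width - small.width; x++) {
                for (var y:int = 0; y <= big.height - small.height; y++) {
                    var match:Boolean = true;
                    for (var sx:int = 0; sx < small.width && match; sx++)
                        for (var sy:int = 0; sy < small.height && match; sy++)
                            if (big.getPixel(x + sx, y + sy) != small.getPixel(sx, sy))
                                match = false;
                    if (match) return new Point(x, y); // top-left of the match
                }
            }
            return null; // not found
        }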

    Read the article

  • Most optimal way to detect if black (or any color pixels) exist in an image file?

    - by Zando
    What's the best and most flexible algorithm to detect any black (or colored) pixel in a given image file? Say I'm given an image file that could, for example, have a blue background, and any non-blue pixel, including a white pixel, is counted as a "mark". The function returns true if there are X number of pixels that deviate from each other beyond a certain threshold. I thought it'd be fastest to simply iterate through every pixel and see if its color matches the last. But if it's the case that pixel (0,0) is deviant, and every other pixel is the same color (and I want to allow at least a couple of deviant pixels before considering an image to be "marked"), this won't work, or won't be terribly efficient.
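
    A Python sketch using Pillow; "deviant" here is taken to mean a channel-wise distance from the dominant colour above a threshold, and the names and default thresholds are assumptions, not from the question. Using the most common colour as the baseline avoids the (0,0)-pixel problem described above:

        from collections import Counter
        from PIL import Image

        def is_marked(path, threshold=30, min_deviant=3):
            pixels = list(Image.open(path).convert("RGB").getdata())
            # Treat the most common colour as the background, so one deviant
            # pixel at (0,0) cannot skew the baseline.
            background, _ = Counter(pixels).most_common(1)[0]
            deviant = sum(
                1 for p in pixels
                if max(abs(a - b) for a, b in zip(p, background)) > threshold
            )
            return deviant >= min_deviant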

    Read the article

  • Is there any algorithm for finding LINES by PIXEL COLORS in a picture?

    - by Ole Jak
    So I have an image, and I want to get something like this (I haven't drawn all the lines I want, but I hope you get my idea; the example pictures are omitted here). I need an algorithm for finding all straight lines in it by just reading the colors of pixels. No hard math, no Haar, no Hough. Some algorithm based purely on point colors. I want to give the algorithm parameters like minimum line length and maximum line distortion, and get back the start and end points of the lines, in pixel coordinates relative to the picture. So I need an algorithm for finding straight lines of different colors in a picture, based on the idea of an image of different colors containing lines of constant color. Yes, such an algorithm will not work for images with lots of shadows and lights, but it will probably be fast (I hope so). Is there any such algorithm?
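
    A naive Python sketch of the colour-run idea: collect horizontal runs of near-constant colour; the same scan generalizes to vertical and diagonal directions. Here tol plays the role of the "max line distortion" parameter and min_len the minimum length; both names are assumptions:

        from PIL import Image

        def close(a, b, tol=20):
            return all(abs(x - y) <= tol for x, y in zip(a, b))

        def horizontal_runs(path, min_len=30, tol=20):
            img = Image.open(path).convert("RGB")
            w, h = img.size
            px = img.load()
            runs = []
            for y in range(h):
                start = 0
                for x in range(1, w + 1):
                    # Close the current run at a colour break or the row's end.
                    if x == w or not close(px[x, y], px[start, y], tol):
                        if x - start >= min_len:
                            runs.append(((start, y), (x - 1, y)))
                        start = x
            return runs  # list of ((x0, y), (x1, y)) segment endpoints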

    Read the article

  • How do you draw a line on a canvas in WPF that is 1 pixel thick?

    - by xarzu
    The method for drawing a line on a canvas in WPF that uses the Line class actually draws a line that is two pixels thick:

        Line myLine = new Line();
        myLine.Stroke = System.Windows.Media.Brushes.Black;
        myLine.X1 = 100;
        myLine.X2 = 140; // 150 too far
        myLine.Y1 = 200;
        myLine.Y2 = 200;
        myLine.StrokeThickness = 1;
        graphSurface.Children.Add(myLine);

    Microsoft might have decided to set a standard for line thickness where the minimum is 2 pixels when you set StrokeThickness to 1, but when you already have rectangles drawn in XAML, and even error fonts using Wingdings, it is an obvious mismatch. How do you draw a line that is truly 1 pixel thick?
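
    The blur usually comes from anti-aliasing: a 1-pixel stroke centred on an integer y-coordinate straddles two pixel rows, so each row gets a half-intensity line. Two common remedies, sketched here as suggestions rather than the one canonical fix:

        // 1. Turn off anti-aliasing for the canvas so strokes rasterize hard:
        RenderOptions.SetEdgeMode(graphSurface, EdgeMode.Aliased);

        // 2. Or keep anti-aliasing but centre the stroke on a pixel row with a
        //    half-pixel offset, and let WPF snap it to the device grid:
        Line myLine = new Line
        {
            Stroke = System.Windows.Media.Brushes.Black,
            X1 = 100, X2 = 140,
            Y1 = 200.5, Y2 = 200.5,   // .5 centres a 1px stroke on one row
            StrokeThickness = 1,
            SnapsToDevicePixels = true
        };
        graphSurface.Children.Add(myLine);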

    Read the article

  • Getting pixel averages of a vector sitting atop a bitmap...

    - by user346511
    I'm currently involved in a hardware project where I am mapping triangular-shaped LEDs to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm, or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm!) I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif
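
    A Python sketch of one standard approach (names assumed): average the pixels whose centres fall inside the triangle, using the sign-of-edge-function point-in-triangle test over the triangle's bounding box.

        from PIL import Image

        def edge(a, b, p):
            # Cross product of (b - a) and (p - a); its sign says which side of
            # the edge the point lies on.
            return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

        def triangle_average(path, v0, v1, v2):
            img = Image.open(path).convert("RGB")
            px = img.load()
            xs, ys = [v0[0], v1[0], v2[0]], [v0[1], v1[1], v2[1]]
            total, count = [0, 0, 0], 0
            for y in range(max(int(min(ys)), 0), min(int(max(ys)) + 1, img.height)):
                for x in range(max(int(min(xs)), 0), min(int(max(xs)) + 1, img.width)):
                    p = (x + 0.5, y + 0.5)  # test against the pixel centre
                    s0, s1, s2 = edge(v0, v1, p), edge(v1, v2, p), edge(v2, v0, p)
                    # Inside if all edge functions agree in sign (either winding).
                    if (s0 >= 0 and s1 >= 0 and s2 >= 0) or \
                       (s0 <= 0 and s1 <= 0 and s2 <= 0):
                        for i, c in enumerate(px[x, y]):
                            total[i] += c
                        count += 1
            return tuple(t / count for t in total) if count else None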

    Read the article

  • How would you build a "pixel perfect" GUI on Linux?

    - by splicer
    I'd like to build a GUI where every single pixel is under my control (i.e. not using the standard widgets that something like GTK+ provides). Renoise is a good example of what I'm looking to produce. Is getting down to the Xlib or XCB level the best way to go, or is it possible to achieve this with higher-level frameworks like GTK+ (maybe even PyGTK)? Should I be looking at Cairo for the drawing? I'd like to work in Python or Ruby if possible, but C is fine too. Thanks!
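
    For what it's worth, a GTK+ DrawingArea with Cairo already gives per-pixel control without dropping to Xlib. A minimal PyGTK 2 sketch, era-appropriate for this question (API names as in PyGTK 2.x; treat it as an assumption to verify):

        import gtk

        # Every pixel of the window is painted by this handler; no stock widgets.
        def on_expose(widget, event):
            cr = widget.window.cairo_create()
            cr.set_source_rgb(0.12, 0.12, 0.12)   # fill the background
            cr.paint()
            cr.set_source_rgb(1, 1, 1)
            cr.set_line_width(1)
            cr.rectangle(10.5, 10.5, 100, 20)     # .5 offsets keep 1px strokes crisp
            cr.stroke()

        win = gtk.Window()
        win.set_default_size(320, 240)
        area = gtk.DrawingArea()
        area.connect("expose-event", on_expose)
        win.add(area)
        win.connect("destroy", gtk.main_quit)
        win.show_all()
        gtk.main()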

    Read the article

  • UIImageView is clipping a pixel off the bottom of my UIImage...?

    - by akaii
    I'm not sure what might be causing this, but UIImageView occasionally clips off about a pixel or two from the bottom of some square/rectangular UIImages I'm using as subviews for UITableViewCells. These UIImageViews are well within the borders of the cell, so it shouldn't be due to clipsToBounds. There seems to be no consistency or pattern to which images are being clipped, nor when they're clipped, other than that it only happens to (or is only noticeable in) square/rectangular icons, and only ones that are parented to UITableViewCells (or their subclasses). I'm having trouble reproducing the problem consistently, which is why I haven't posted any code this time. Has anyone encountered something similar to this before? I've encountered a similar bug that involved floating-point values for origin/size being interpreted weirdly, but that doesn't seem to be the cause of this particular problem. I don't need a specific solution at this point; I'm just making sure I haven't missed any well-known bugs or documented problems that involve UIImageView.
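
    Since fractional frames are the usual suspect for one-pixel clipping, one quick check worth sketching, offered as an assumption rather than a confirmed cause:

        // Snap the view's frame to whole-point boundaries before layout; if the
        // clipping disappears, a fractional origin or size was to blame.
        imageView.frame = CGRectIntegral(imageView.frame);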

    Read the article

  • SD card won't appear after upgrade to 13.10

    - by Pixel
    My SD card won't mount when I put it into my laptop; everything was fine before the upgrade. The information about the SD card appears just fine when I type sudo fdisk -l; it just says that it doesn't have a valid partition table. When I type sudo blkid I get the following answer:

        /dev/sda1: UUID="CCA8-9030" TYPE="vfat"
        /dev/sda2: UUID="8a1d135b-384b-432d-b608-64dcf09ada24" TYPE="ext2"
        /dev/sda3: UUID="7s6PtU-kj2Z-N8XD-0mzl-840i-i3HG-enlbAf" TYPE="LVM2_member"
        /dev/sr0: LABEL="Bamboo CD" TYPE="iso9660"
        /dev/mapper/ubuntu--vg-root: UUID="c9b521c8-7c9f-493b-95c8-a7d79c465318" TYPE="ext4"
        /dev/mapper/ubuntu--vg-swap_1: UUID="7f155ab6-e1b9-485b-a2bc-443c0622284d" TYPE="swap"

    When I use lsusb:

        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 003: ID 13d3:5710 IMC Networks UVC VGA Webcam
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 046d:c52f Logitech, Inc. Unifying Receiver
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I've read the other threads and couldn't really find any good answers. My card reader was compatible with the previous version of Ubuntu, so technically it should still be compatible with the next version. Also, I can't erase what's on the card; it contains important data which I need... :/ If you need any more information just ask, I'll give it as soon as I can. Pixel.

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to blend a "shadow" sprite onto a background sprite using the equation:

        MAX(0, Cd*1 - Cs*S)

    where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is a scale factor between 0 and 1. The MAX() function is used to avoid negative results. This is a lighting effect: when the shadow pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. Now, the only way that comes to my mind is to change the blending equation to GL_FUNC_SUBTRACT, but it doesn't compile with cocos2d-x (it can't be found)... I would subclass the CCSprite class and override the draw() method so that it changes the blending equation when needed, calls the original draw(), and restores the blending equation to its previous state at the end of the method. So my questions are two:

        1. How do I use glBlendEquation() with cocos2d-x? Keep in mind that I am writing a game for iPhone/Android/Windows.
        2. Are shadows handled this way in 2D games?

    Thx
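
    For the record, the blend described above maps naturally onto GL_FUNC_REVERSE_SUBTRACT, which computes dst*dstFactor - src*srcFactor clamped at zero; the clamp supplies the MAX. A hedged C++ sketch against the cocos2d-x 2.x API; ShadowSprite and m_shadowScale are illustrative names, and the sprite's own blend func supplies the factors since cocos2d-x applies it inside CCSprite::draw():

        void ShadowSprite::onEnter()
        {
            CCSprite::onEnter();
            ccBlendFunc bf = { GL_CONSTANT_ALPHA, GL_ONE };  // src*S, dst*1
            setBlendFunc(bf);
        }

        void ShadowSprite::draw()
        {
            glBlendColor(0.0f, 0.0f, 0.0f, m_shadowScale);  // S in [0, 1]
            glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);      // dst - src*S, clamped at 0
            CCSprite::draw();
            glBlendEquation(GL_FUNC_ADD);                   // restore the default
        }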

    Read the article

  • Changing Palette for Day/Night Mode using GIMP

    - by J.C.
    Hello, Suppose I've a picture, which want to achieve day/light mode by changing 8bpp color palette. If I want the pixel index of my picture is always fixed for both day mode and night mode. For example, the 1st pixel index is 100. Which I can look up index 100 in day mode palette and night mode palette. How can I use GIMP to do so? My goal is to not update my pixel index of my picture. Also, as you see in two palette, they are not one one mapping. That is index 1 of the day mode palette and index 1 of the night mode palette may not used in the same pixel of the picture, how can I tackle this problem? Actually, my use case is as follow I want to use one 8bpp picture to achieve day/night mode by update only the color palette (without updating the pixel index). The advantage is I only have to prepare 2 256 byte palette rather than saving 2 big pictures in my limited data ram. Thanks a lot

    Read the article

  • Xorg.conf (nvidia) Second Monitor getting settings of first

    - by HennyH
    I've been spending the weekend (and some time before that) trying to set up my Korean QHD270 and BenQ G2222HDL monitors with Ubuntu 13.10. With the nouveau drivers installed, both monitors function perfectly fine. After installing the nvidia drivers, the BenQ works but the QHD270 does not. After days of struggling, I managed to get the QHD270 to work following a mixture of blogs, particularly learnitwithme, by supplying a custom EDID. My xorg.conf looks like this (excluding keyboard and mouse):

        Section "ServerLayout"
            Identifier "Layout0"
            Screen "Default Screen" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Monitor"
            Identifier "Configured Monitor"
        EndSection

        Section "Device"
            Identifier "Configured Video Device"
            Driver "nvidia"
            Option "CustomEDID" "DFP:/etc/X11/edid-shimian.bin"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Device "Configured Video Device"
            Monitor "Configured Monitor"
        EndSection

    Now, however, my G2222HDL does not work. I tried defining a new Device, Monitor, and Screen, and then adding Screen "Second Screen" RightOf "Default Screen" to the ServerLayout, but after doing so neither monitor worked. Hoping to fix the issue with a GUI-based tool, I opened NVIDIA X Server Settings, which shows my current layout; something does seem to be output to the monitor, as suggested by my print screen (screenshots omitted here). Any help would be greatly appreciated. Output of xrandr:

        Screen 0: minimum 8 x 8, current 5120 x 1440, maximum 16384 x 16384
        DVI-I-0 disconnected (normal left inverted right x axis y axis)
        DVI-I-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
           2560x1440 60.0*+
        HDMI-0 disconnected (normal left inverted right x axis y axis)
        DP-0 disconnected (normal left inverted right x axis y axis)
        DVI-D-0 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
           2560x1440 60.0*+
        DP-1 disconnected (normal left inverted right x axis y axis)

    And an extract from my log file (perhaps this is relevant?):

        [ 7.862] (--) NVIDIA(0): Valid display device(s) on GeForce GTX 680 at PCI:2:0:0
        [ 7.862] (--) NVIDIA(0): CRT-0
        [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0) (boot, connected)
        [ 7.862] (--) NVIDIA(0): DFP-1
        [ 7.862] (--) NVIDIA(0): DFP-2
        [ 7.862] (--) NVIDIA(0): DFP-3
        [ 7.862] (--) NVIDIA(0): DFP-4
        [ 7.862] (--) NVIDIA(0): CRT-0: 400.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): 330.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): Internal Dual Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-1: 165.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-1: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-2: 165.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-2: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-3: 330.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-3: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-4: 960.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-4: Internal DisplayPort
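
    One detail worth checking, offered as an assumption rather than a confirmed fix: the nvidia driver accepts per-connector EDID overrides, and a bare "DFP:" prefix matches every digital output, so the override written for the QHD270 may also be applied to the BenQ. Scoping it to the QHD270's connector (which appears as DFP-0 in the log above) would look like:

        Section "Device"
            Identifier "Configured Video Device"
            Driver "nvidia"
            # Hypothetical fix: restrict the EDID override to DFP-0 so the
            # BenQ keeps its own EDID.
            Option "CustomEDID" "DFP-0:/etc/X11/edid-shimian.bin"
        EndSection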

    Read the article
