Search Results

Search found 21470 results on 859 pages for 'computer graphics'.

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation. I own a Lenovo ThinkPad Edge E320 with 2nd-gen. integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4 and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096×2160; however, according to c't (article in German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output: dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution.

    Now what about other laptops? According to the c't article above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards apparently only offer resolutions up to 1920x1200 over HDMI out of the box, but it's possible, at least on Linux, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OS X? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI. Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible).

    The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts the analog signal to digital. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved?

    Please correct me if I'm wrong on any points. In summary, what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that only accepts dual-link DVI-D, and what tools and adapters can be used to lower the bar?
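
    A quick way to sanity-check that last comparison is to estimate the pixel clocks involved. In this minimal C# sketch, the blanking-overhead factors (roughly full-blanking vs. reduced-blanking timings) are assumed ballpark values, not exact figures:

        // Rough pixel-clock comparison for the VGA bandwidth argument.
        // The 1.35 / 1.10 blanking-overhead factors are assumptions, not exact timings.
        using System;

        class PixelClock
        {
            // Active pixels x refresh rate x blanking overhead, in MHz.
            static double MHz(int w, int h, double hz, double blanking)
                => w * h * hz * blanking / 1e6;

            static void Main()
            {
                Console.WriteLine("2048x1536@75Hz (full blanking):    {0:F0} MHz", MHz(2048, 1536, 75, 1.35));
                Console.WriteLine("2560x1440@60Hz (full blanking):    {0:F0} MHz", MHz(2560, 1440, 60, 1.35));
                Console.WriteLine("2560x1440@60Hz (reduced blanking): {0:F0} MHz", MHz(2560, 1440, 60, 1.10));
            }
        }

    Under these assumptions, 2560x1440@60Hz actually needs a lower pixel clock than the oft-quoted 2048x1536@75Hz, which supports the "should be possible" reasoning above.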

  • Where does Catalyst store the applist for switchable graphics?

    - by noober
    I cannot add an app to the list to manually set it to high performance (Radeon instead of Intel HD). When I browse for an exe, nothing happens; the list stays empty. So maybe I can edit some .cfg or .ini file instead? UPDATE: This is NOT my screenshot; I actually found it on the net. The list with iexplore.exe in it is what I meant. When I click 'Browse' and choose any exe (Portal2.exe, for instance), nothing happens. The list remains empty, so I cannot set the mode for Portal2.exe.

  • ATI / AMD HIS HD 7870 Graphics card fan speed below 16% / 20% 26%

    - by Thorsten Niehues
    I bought an AMD/ATI HIS HD 7870 to replace my old HD 4870. I noticed that the fan speed does not scale with the temperature: the fan speed does not go below 28% (read from Catalyst with automatic fan speed enabled). If I manually set it to 20% in Catalyst, it runs at the same speed as at 28%: about 900-1000 rpm. With HIS iTurbo I can manually set the fan speed below 20%, but I noticed that setting it below 16% results in 3200 rpm. This is really annoying, since my PC is an ultra-silent build and all fans run at about 500 rpm when the PC is idle (Windows desktop, music, movies, etc.). Is there any way to bring the fan down to a reasonable speed like 500 rpm, by software or a hardware adapter? (I really don't want to put a potentiometer on the 12V line.)

  • Need a solution to stop students from rotating the screen on laptops with Intel graphics

    - by testguy
    We have some students who have figured out how to rotate the screen using either the hotkey combination or the right-click context menu. It's easy to fix, but it's time-consuming, because no matter how many times I tell people how to fix it, someone always comes to me with it again. I need two things. First, is there a way to disable screen rotation? Second, I need a script to reset the screen orientation and resolution to normal on logon and logoff. The solutions need to be deployable from a Win2003 server to WinXP clients. I have far too many computers to go through by hand to uncheck "enable hot keys" in the Intel control panel.
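
    One possible starting point for the reset script: on Windows XP, the orientation can be forced back to landscape through the Win32 ChangeDisplaySettings API. The following is a minimal, untested C# sketch of that idea, not a finished deployment script; the DEVMODE declaration is trimmed to the display-related fields.

        using System;
        using System.Runtime.InteropServices;

        class ResetOrientation
        {
            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
            struct DEVMODE
            {
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmDeviceName;
                public short dmSpecVersion, dmDriverVersion, dmSize, dmDriverExtra;
                public int dmFields;
                public int dmPositionX, dmPositionY;
                public int dmDisplayOrientation;   // 0 = DMDO_DEFAULT (landscape)
                public int dmDisplayFixedOutput;
                public short dmColor, dmDuplex, dmYResolution, dmTTOption, dmCollate;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmFormName;
                public short dmLogPixels;
                public int dmBitsPerPel, dmPelsWidth, dmPelsHeight;
                public int dmDisplayFlags, dmDisplayFrequency;
            }

            [DllImport("user32.dll")]
            static extern bool EnumDisplaySettings(string device, int mode, ref DEVMODE dm);

            [DllImport("user32.dll")]
            static extern int ChangeDisplaySettings(ref DEVMODE dm, int flags);

            const int ENUM_CURRENT_SETTINGS = -1;

            static void Main()
            {
                var dm = new DEVMODE();
                dm.dmSize = (short)Marshal.SizeOf(typeof(DEVMODE));
                EnumDisplaySettings(null, ENUM_CURRENT_SETTINGS, ref dm);
                if (dm.dmDisplayOrientation != 0)
                {
                    // 90/270-degree rotations swap width and height; swap them back.
                    if (dm.dmDisplayOrientation % 2 == 1)
                    {
                        int t = dm.dmPelsWidth;
                        dm.dmPelsWidth = dm.dmPelsHeight;
                        dm.dmPelsHeight = t;
                    }
                    dm.dmDisplayOrientation = 0;
                    ChangeDisplaySettings(ref dm, 0); // apply immediately
                }
            }
        }

    Compiled to an exe, this could be run from logon/logoff scripts via Group Policy on the Win2003 domain; disabling the rotation hotkeys themselves would still have to happen in the Intel driver.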

  • Why are graphics cards upside-down?

    - by gbjbaanb
    This is something that has always bugged me: when I install a card into a desktop (i.e. mini-tower) case, the fan is always facing down. Surely, building the card so that the components and fan are on top would help a lot with cooling, allowing those whiny fans to spin a little slower. I know some card manufacturers have tried to mitigate this by adding heat pipes and big heatsinks on the back of the card, but they still put the components on the same side as everyone else! So, does anyone know why they're all upside-down?

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device under Remote Desktop's default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format on a standalone Windows machine:

    R> library(ORE)
    R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
    R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options for displaying ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting

    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are set by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The available settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop Color Depth:

    1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    2. Under Connections, right-click on RDP-Tcp and select Properties.
    3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    4. Click Apply, then OK, log out of the remote session, and reconnect.

    After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel, and raster graphics will display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device

    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.

    R> options(device = "CairoWin")   # use Cairo device for plotting during the session
    R> library(Cairo)                 # load the Cairo, grid and png libraries
    R> library(grid)
    R> library(png)
    R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30)))   # create embedded R plot
    R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                         # extract image
    R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)              # generate raster graph
    R> dev.off()                                                              # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

    R> help(Devices)

  • How do I install graphics drivers for the Radeon HD 6380G on an HP Pavilion g6?

    - by Ryan
    I installed Ubuntu from Wubi using a live CD, but when I booted I just got a black screen, so I followed the instructions from this forum post: http://ubuntuforums.org/showpost.php?p=10089820&postcount=8 That successfully completed the Ubuntu installation, but now when I boot Ubuntu I just get a command prompt, and when I type in startx it comes up with an error. I am told I need to install graphics drivers; how do I do this? Thanks in advance, Ryan

  • Graphics library used by Windows Vista Freecell and Solitaire

    - by David Grayson
    Does anyone know what graphics library is used to create the graphics in the Solitaire and FreeCell games included with Windows Vista (e.g. XNA, GDI, WPF)? A good answer would include the name of the library and evidence. I looked at solitaire.exe with Dependency Walker and it shows many calls to gdi32.dll and gdiplus.dll, but also a call to Direct3DCreate9 in d3d9.dll.

  • Navigation graphics overlaid over video

    - by Hrishikesh Choudhari
    Imagine I have a video playing. Can I have some sort of motion graphics played over that video, as if the moving graphics were on an upper layer and the video on a lower layer? I am comfortable in C++ and Python, so a solution that uses either of these would be highly appreciated. Thank you in advance, Rishi

  • Ubuntu: Graphics freeze

    - by Phil
    We have recently updated a Java application which runs on an Ubuntu PC, and are now experiencing a graphics problem that we didn't encounter before. The system runs constantly, and at random intervals - maybe twice a month, but sometimes within a few days - the system's graphics freeze, and the GNOME panels freeze with them. Here is an extract from the syslog:

        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970021] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung
        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970177] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 937626 at 937625)

  • C# Graphics without a Windows Form

    - by teishu
    Hi, could someone provide an example of drawing graphics without a Windows Form? I have an app that has neither a console window nor a Windows Form, but I need to draw some basic graphics (lines, rectangles, etc.). Hope that makes sense. Thanks in advance, J
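
    One common approach is to draw to an off-screen bitmap with GDI+, which needs no window at all. A minimal sketch (the sizes and file name are illustrative):

        using System.Drawing;
        using System.Drawing.Imaging;

        class OffscreenDrawing
        {
            static void Main()
            {
                using (var bmp = new Bitmap(200, 100))
                {
                    using (var g = Graphics.FromImage(bmp)) // Graphics bound to the bitmap, not a window
                    {
                        g.Clear(Color.White);
                        g.DrawLine(Pens.Black, 0, 0, 199, 99);
                        g.DrawRectangle(Pens.Red, 10, 10, 80, 40);
                    }
                    bmp.Save("output.png", ImageFormat.Png); // persist the result
                }
            }
        }

    This only needs a reference to System.Drawing; no message loop or form is involved.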

  • Graphics cards so I can have 4 monitors

    - by oshirowanen
    I currently have a single old graphics card to which I have connected 2 monitors, giving me a big desktop of 2560x1024. If I get 2 of the following graphics card: http://www.ebuyer.com/238428-gigabyte-gts-450-1gb-gddr5-dual-dvi-mini-hdmi-out-pci-e-graphics-gv-n450-1gi will I be able to connect two monitors per graphics card, giving me a total resolution of 5120x1024? I guess what I'm asking is: will I simply be able to stick both graphics cards in, plug the monitors in, and have it all just work out of the box? I currently have 4 DVI monitors, each with a native resolution of 1280x1024.

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
    Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated one? Obviously, when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But let's say you're doing something graphics-medium, but not graphics-intensive/heavy - is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons? Off the top of my head, I can think of a scenario where the discrete GPU supports hardware decoding of a particular video codec - I'd imagine there is a "price point" where using the GPU saves more energy than decoding it fully in software would. But most GPUs, integrated or discrete, pretty much decode just plain-Jane H.264. Maybe there is something more complicated, though: perhaps if you're doing something like desktop/windowing animations or a Flash animation on a website (not an embedded Flash video), the discrete GPU will use enough less power to make up for switching to it? I guess this question can be summed up as: can you say beyond doubt that, if you don't care about performance on a laptop with two GPUs, you should always use the integrated GPU for maximum battery life?
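
    The "price point" intuition can be made concrete with a back-of-the-envelope energy comparison. Every wattage in this C# sketch is an invented placeholder for illustration, not a measurement of any real hardware:

        using System;

        class DecodeEnergy
        {
            static void Main()
            {
                double hours = 2.0;            // length of the video
                double cpuSoftwareW = 14.0;    // extra CPU power for software decode (assumed)
                double dgpuDecodeW = 5.0;      // dGPU power while hardware-decoding (assumed)
                double dgpuPoweredW = 4.0;     // cost of merely keeping the dGPU powered (assumed)

                double softwarePathWh = cpuSoftwareW * hours;                // 28 Wh
                double dgpuPathWh = (dgpuDecodeW + dgpuPoweredW) * hours;    // 18 Wh

                Console.WriteLine("Software decode: {0} Wh", softwarePathWh);
                Console.WriteLine("Discrete GPU:    {0} Wh", dgpuPathWh);
            }
        }

    Whether such a crossover exists in practice depends entirely on the real numbers, which is exactly what the question is asking.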

  • Cast Graphics to Image in C#

    - by WebDevHobo
    I have a pictureBox on a Windows Form, and I do the following to load a PNG file into it:

        Bitmap bm = (Bitmap)Image.FromFile("Image.PNG", true);
        Bitmap tmp;

        public Form1()
        {
            InitializeComponent();
            this.tmp = new Bitmap(bm.Width, bm.Height);
        }

        private void pictureBox1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImage(this.bm, new Rectangle(0, 0, tmp.Width, tmp.Height),
                0, 0, tmp.Width, tmp.Height, GraphicsUnit.Pixel);
        }

    However, I need to draw things on the image and then have the result displayed again. Drawing rectangles can only be done via the Graphics class, so I'd need to draw the rectangles on the image, make it an instance of the Image class again, and save the result to this.bm. I can add a button that executes this.pictureBox1.Refresh();, forcing the pictureBox to be painted again, but I can't cast Graphics to Image. Because of that, I can't save the edits to the this.bm bitmap. That's my problem, and I see no way out.
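
    For what it's worth, the usual GDI+ route around this is not to cast at all: a Graphics object can be created over the existing bitmap with Graphics.FromImage, so the drawing lands directly in this.bm. A minimal sketch of that idea (buttonDraw_Click and the rectangle coordinates are illustrative):

        // e.g. in a button's Click handler on the same form
        private void buttonDraw_Click(object sender, EventArgs e)
        {
            using (Graphics g = Graphics.FromImage(this.bm))
            {
                g.DrawRectangle(Pens.Red, 10, 10, 100, 50); // the edit is stored in bm itself
            }
            this.pictureBox1.Refresh(); // the Paint handler then redraws the modified bitmap
        }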

  • Copy Small Bitmaps on to Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); // small images, sizes vary around 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        using (Graphics largeGraphics = Graphics.FromImage(large))
        {
            for (var i = 0; i < largeNumber; i++)
            {
                // this is the bottleneck
                largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
            }

            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return done;
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary; the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but not seen significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
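
    Since the question allows unsafe C#, one direction worth benchmarking is a hand-rolled premultiplied-alpha blend over LockBits. This is an untested sketch of the idea (compile with /unsafe), not a drop-in replacement; it assumes both bitmaps really are Format32bppPArgb:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        static class PremultipliedBlend
        {
            // Blends 'src' onto 'dst' at (x, y) using the premultiplied "over" operator.
            public static unsafe void Blend(Bitmap dst, Bitmap src, int x, int y)
            {
                // Clip the source rectangle against the destination bounds.
                int sx = Math.Max(0, -x), sy = Math.Max(0, -y);
                int dx = Math.Max(0, x), dy = Math.Max(0, y);
                int w = Math.Min(src.Width - sx, dst.Width - dx);
                int h = Math.Min(src.Height - sy, dst.Height - dy);
                if (w <= 0 || h <= 0) return;

                BitmapData sd = src.LockBits(new Rectangle(sx, sy, w, h),
                    ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
                BitmapData dd = dst.LockBits(new Rectangle(dx, dy, w, h),
                    ImageLockMode.ReadWrite, PixelFormat.Format32bppPArgb);
                try
                {
                    for (int row = 0; row < h; row++)
                    {
                        byte* s = (byte*)sd.Scan0 + row * sd.Stride;
                        byte* d = (byte*)dd.Scan0 + row * dd.Stride;
                        for (int col = 0; col < w; col++, s += 4, d += 4)
                        {
                            // out = src + dst * (1 - srcAlpha), per channel (B, G, R, A)
                            int inv = 255 - s[3];
                            d[0] = (byte)(s[0] + d[0] * inv / 255);
                            d[1] = (byte)(s[1] + d[1] * inv / 255);
                            d[2] = (byte)(s[2] + d[2] * inv / 255);
                            d[3] = (byte)(s[3] + d[3] * inv / 255);
                        }
                    }
                }
                finally
                {
                    src.UnlockBits(sd);
                    dst.UnlockBits(dd);
                }
            }
        }

    Locking the large bitmap once for the whole loop, rather than per call, is where most of any win would come from; only measurement can say whether this actually beats GDI+ here.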

  • Computer won't start after installing new video card

    - by Vercas
    So, 1 year and 340 days ago I bought a desktop computer. Since then, it has served me well. But lately I wanted an upgrade, so I bought a new video card. I read up on compatibility, and it is okay.

    So I opened the case and cleaned up that... dust elemental living inside of it. I unscrewed the plastic thingie on the outside so I could unscrew the old video card. Because of the stupid arrangement of the ports, I had to unscrew the motherboard to unplug it. So I unscrewed it, removed the old card, put in the new one, moved the motherboard back, screwed it back in, screwed the video card onto the holder... thingie, and screwed the plastic thingie back in. Everything went smoothly; nothing had to be forced in or out. I connected the external power supply, closed the computer case, put the tower back in its place, and put all the cables back in.

    When I pressed the power button, the LED turned... some color I can't distinguish. It stayed that way for a second, and then it went off. I tried a bunch of things, including permuting the external power supply arrangement (1 connection, 2 connections, and no connections), with no success.

    Here are some of the specifications:
    Motherboard manufacturer: ASRock
    Processor: AMD Athlon II X2 3.0 GHz
    RAM: 2 x 2GB (had only 1 initially; bought the second module a bit later)
    OLD video card: AMD Radeon HD 5450
    NEW video card: Gigabyte NVIDIA GeForce GTX 650 GPU, 1GB GDDR5 128-bit PCI-E, dual-link DVI-D x2 / HDMI / D-Sub
    Power supply: 450W, and all the requirements I managed to find on the internet are met (+12V 18A or something)

    More specific information is stored... on that computer. If required, I can open the case again and read the stickers to find more specific information. I can also provide photos if necessary. Any ideas? Suggestions? Something? :|

  • How to rotate a monitor attached to a Sony Z-series notebook?

    - by user67175
    Is it possible that Sony is selling a top-of-the-line, hugely expensive computer that does not have the basic ability to rotate an attached monitor? Is it possible that the Z-series simply can't do this? The Windows control panel is missing the normal option for "rotation", as is the Nvidia control panel for "orientation", and no additional rotation software works. Sony sales says they do not know the answer to this. Sony technical support says that the problem lies with Nvidia; Nvidia technical support says the problem lies with Sony. Any advice for a fix, short of returning the computer, would be greatly appreciated. I am also wondering whether this problem is common to computers running Windows 7.

  • Computer will freeze / lock up after doing relatively stressful things

    - by GrowingCode247
    I'll first start off by saying that the issue GENERALLY doesn't occur unless I'm doing something remotely stressful for my computer. It used to occur whenever it felt like it, but it has not happened completely at random for a while now (thankfully).

    My computer's specs:
    CPU: AMD Phenom II X4 960T
    GPU: GeForce GTX 760
    Memory: 16 GB RAM
    Resolution used: 1680x1050, 59Hz (a strange number for a refresh rate?); this is the highest the monitor supports
    Nvidia driver version: 331.65
    OS: Microsoft Windows 7 Ultimate (64-bit)

    Sometimes I can go 2-3 games (about an hour, depending) and sometimes maybe one game (20-30 minutes), and then my computer will run sluggishly and leave me unable to do much of anything. I can sometimes interact with programs at a very basic level (maximizing, minimizing), and I usually cannot close them in any way, not even through Task Manager. The highest temperature my GPU reaches is 76C, with the average being around 73C. While the temperature is around 73C, my GPU's RAM usage is anywhere between 1250-1300 MB (out of 2GB). My CPU's temperature never goes over 60C, thankfully. The PSU should be fine. It's very mildly dusty, but I feel as though that would not be causing this problem... I will clean it out as soon as everything else has been ruled out. Honestly, I have no clue how to test the PSU for problems; the same goes for my motherboard. I cannot really think of what else could be causing these freezes.

    Event Viewer details:
    EventID: 1 - VDS Basic Provider (I've no clue what this is)
    EventID: 3 - Kernel-EventTracing (again, lost)
    EventID: 8003 - bowser (this seems fishy)
    and the one critical error that I know others have been dealing with, from browsing other responses on the web:
    EventID: 41 - Kernel-Power

    Any help solving this problem would be GREATLY appreciated.

  • Nvidia graphics resolution problem

    - by Deepak Adhikari
    I am currently using Ubuntu 12.04 on an Acer Aspire TimelineX 3830TG with a 2GB NVIDIA GeForce GT 540M graphics card. To enable my graphics card I followed these steps:
    1.) I activated nvidia_current and nvidia_current_updates from Additional Drivers
    2.) sudo nvidia-xconfig
    3.) then reboot
    Following these steps I got the following errors:
    1.) my resolution is 640x480 (there is no option for 1366x768 in Displays; previously, before the nvidia-xconfig command was entered, there was 1366x768)
    2.) when I open nvidia-settings it shows me the following error: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart the X server."
    Problems that need to be solved:
    1.) change the resolution to 1366x768
    2.) also, how do I check whether my NVIDIA graphics are working or not?
    Please, someone help me solve these issues... I seriously need my graphics card. I want my NVIDIA graphics card to work as smoothly as my Intel graphics. I am not willing to use Bumblebee.
    with regards,
    an Ubuntu user

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } }
PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); }
and finally from the Game class I set up in LoadContent with: effect = Content.Load<Effect>(@"Effects\PPModel"); models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List<CModel>(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) };
where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = <BasicTexture>; addressU = wrap; addressV = wrap; minfilter = anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = <LightTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } }
I have no idea what's wrong... Googling around, I found that this tutorial may have some bugs, but I don't know whether the fault is in the light model (the sphere), in one of the shaders, or in the PrelightingRenderer class. Any help is much appreciated; thank you for reading!

  • How can I configure the Aiptek T-6000U graphics tablet?

    - by mejpark
    I followed AiptekTablet instructions on the Ubuntu Wiki to configure 11.04 for use with my graphics tablet. I installed the xserver-xorg-input-aiptek package and created two files with the options detailed on the Wiki page above:

    $ cat /lib/udev/rules.d/69-xserver-xorg-input-aiptek.rules
    ACTION!="add|change", GOTO="xorg_aiptek_end"
    KERNEL!="event[0-9]*", GOTO="xorg_aiptek_end"
    ATTRS{idVendor}=="08ca", ENV{x11_driver}="aiptek", SYMLINK+="input/aiptektablet"
    LABEL="xorg_aiptek_end"

    $ cat /usr/share/X11/xorg.conf.d/50-aiptek.conf
    Section "InputClass"
        Identifier "pen"
        MatchProduct "Aiptek|AIPTEK|aiptek"
        MatchDevicePath "/dev/input/event*"
        Driver "aiptek"
        Option "USB" "on"
        Option "Type" "stylus"
        Option "Mode" "absolute"
        Option "zMin" "0"
        Option "zMax" "511"
    EndSection

    The 50-aiptek.conf file was originally called 10-aiptek.conf, as in the Wiki, but an Aiptek tablet installation help thread on the Ubuntu Forums suggested changing 10 to 50. Any ideas? Thank you.

  • How do I install the Intel 82845 graphics driver -- videos are really slow?

    - by Mahesh Bhat
    I installed Lubuntu on my machine, and it seems the Intel 82845 graphics driver wasn't installed: videos play frame by frame. Many say the Ubuntu kernel has built-in support for it, but apparently it does not. The website www.intellinuxgraphics.org has drivers for many kinds of Linux distributions, but I find it hard to work out how to install them on my Lubuntu. Can anyone elaborate on how that can be installed? Output of the command dmesg: http://paste.ubuntu.com/1058720/ Output of the command lsmod: http://paste.ubuntu.com/1058724/

  • How to install drivers for the Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller?

    - by SanraS
    I have an Inspiron 1525 and am running Ubuntu 11.10. I noticed that when scrolling I can see a lag, and when playing back HD content the video will freeze for a few seconds and then resume; neither of those happened on Windows Vista when I had it installed. I do prefer Ubuntu and would like to fix this rather than go back to Windows. The chip is an "Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller", as reported by lspci. I don't know much about installing drivers, which is why I'm asking for help. I would prefer commands I can enter in a terminal rather than directions to go here and then there and look for this or that - I think I'll get lost in the middle, but a terminal I can follow. Thanks for your help.

  • Computer restarts without warning; code bcc116

    - by Robert C.
    Processor: Intel i5 4430, 4 cores, 4x3GHz
    Motherboard: MSI H87-G41
    Graphics card: Nvidia GTX 760
    Power supply: EPS-750 CM
    RAM: 8GB

    I bought a newly assembled gaming PC which worked fine for a few days. Then it started rebooting without warning. After it restarts, Windows 7 gives me a bcc116 error code. Apparently it's something to do with my video card, either overheating or wrong drivers. I've installed the latest driver from Nvidia for my graphics card. Since the card is brand new, it can't be dust; I'm running the PC with its lid open to see if the problem persists. I'm also running Prime95 now to see if it tells me anything else. Using Core Temp, I see that my CPU reaches up to 95°C with Prime95's Blend stress test. Aaaand it just peaked at 100°C. Of course it doesn't reach these temperatures at all while idle or gaming. I'm going to let Prime95 run overnight to see what happens. Until then, does anyone know what I should do next?
