Search Results

Search found 6029 results on 242 pages for 'discrete graphics'.


  • Need a solution to stop students from rotating the screen on laptops with Intel graphics

    - by testguy
    We have some students who have figured out how to rotate the screen using either the hotkey combination or the right-click context menu. It's easy to fix, but it's time consuming: no matter how many times I tell people how to undo it, someone always comes back to me. I need two things. First, is there a way to disable screen rotation? Second, I need a script to reset the screen resolution to normal on logon and logoff. The solution needs to be deployable from a Windows Server 2003 server to Windows XP clients; I have far too many computers to go through by hand unchecking "Enable Hot Keys" in the Intel control panel.
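    One low-touch way to handle the logon/logoff reset is a tiny helper compiled once and launched from a Group Policy logon/logoff script. The sketch below only uses the documented Win32 behavior of ChangeDisplaySettings with a null DEVMODE, which restores the display mode stored in the registry; it only helps if the rotation or resolution change was applied dynamically rather than saved, and it does not disable the Intel hotkeys themselves (that is a driver setting). Treat it as an assumption-laden starting point, not a tested deployment.

        // ResetDisplay.cs -- sketch: restore the registry-stored display mode.
        // Per the Win32 documentation, passing NULL for lpDevMode and 0 for dwFlags
        // returns the display to its default (registry) mode after a dynamic change.
        using System;
        using System.Runtime.InteropServices;

        class ResetDisplay
        {
            [DllImport("user32.dll")]
            static extern int ChangeDisplaySettings(IntPtr lpDevMode, int dwFlags);

            static void Main()
            {
                int result = ChangeDisplaySettings(IntPtr.Zero, 0);
                Console.WriteLine("ChangeDisplaySettings returned " + result);
            }
        }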

    Read the article

  • Why are graphics cards upside-down?

    - by gbjbaanb
    This is something that has always bugged me: when I install a card into a desktop (i.e. mini-tower) case, the fan always faces down. Surely building the card so the components and fan are on top would help a lot with cooling, letting those whiny fans spin a little slower. I know some card manufacturers have tried to mitigate this by adding heat pipes and big heatsinks on the back of the card, but they still put the parts on the same side as everyone else! So, does anyone know why they're all upside-down?

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection: under the default settings, raster images do not display in the windows device. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format on a standalone Windows machine:

        R> library(ORE)
        R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
        R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's color depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting

    In a Remote Desktop session, all environment variables, including the display variables determining color depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the color depth when connecting over a slow connection. The available settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:

    1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    2. Under Connections, right-click on RDP-Tcp and select Properties.
    3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    4. Click Apply, then OK, log out of the remote session and reconnect. After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel.

    Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of an empty plot.

    Option #2: Direct plot output to an alternate device

    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop color depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives: it draws a raster image within the current plot's grid.
        R> options(device = "CairoWin")   # use the Cairo device for plotting during the session
        R> library(Cairo)                 # load the Cairo, grid and png libraries
        R> library(grid)
        R> library(png)
        R> res <- ore.doEval(function() image(volcano, col = terrain.colors(30)))   # create embedded R plot
        R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                           # extract image
        R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)                # generate raster graph
        R> dev.off()                                                                # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

        R> help(Devices)

    Read the article

  • How do I install graphics drivers for the Radeon HD 6380G on an HP Pavilion g6?

    - by Ryan
    I installed Ubuntu from Wubi using a live CD, but when I booted I just got a black screen, so I followed the instructions from this forum post: http://ubuntuforums.org/showpost.php?p=10089820&postcount=8. That successfully completed the Ubuntu installation, but now when I boot Ubuntu I just get a command prompt, and when I type startx it comes up with an error. I am told I need to install graphics drivers; how do I do this? Thanks in advance, Ryan

    Read the article

  • Graphics library used by Windows Vista Freecell and Solitaire

    - by David Grayson
    Does anyone know what graphics library is used to create the graphics in the Solitaire and Freecell games included with Windows Vista (e.g. XNA, GDI, WPF)? A good answer would include the name of the library and evidence. I looked at solitaire.exe with dependency walker and it shows many calls to gdi32.dll and gdiplus.dll, but also a call to Direct3DCreate9 in d3d9.dll.
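    As a complementary check to Dependency Walker's static import view, the running process can be asked which graphics-related DLLs it has actually loaded, since Direct3D modules are often delay-loaded and only show up at run time. The sketch below is only a diagnostic idea; the process name "solitaire" and the DLL name filters are assumptions to adjust, and the check has to run under an account allowed to inspect the target process.

        // Sketch: list graphics-related modules loaded by the running game process.
        using System;
        using System.Diagnostics;

        class LoadedGraphicsModules
        {
            static void Main()
            {
                foreach (Process p in Process.GetProcessesByName("solitaire"))
                {
                    foreach (ProcessModule m in p.Modules)
                    {
                        string name = m.ModuleName.ToLowerInvariant();
                        if (name.Contains("d3d") || name.Contains("gdi") || name.Contains("d2d"))
                        {
                            Console.WriteLine(name);   // e.g. gdiplus.dll, d3d9.dll
                        }
                    }
                }
            }
        }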

    Read the article

  • Navigation graphics overlaid over video

    - by Hrishikesh Choudhari
    Hey, imagine I have a video playing. Can I have some sort of motion graphics played 'over' that video, as if the moving graphics were on an upper layer and the video on a lower layer? I am comfortable with C++ and Python, so a solution that uses either of these would be highly appreciated. Thank you in advance, Rishi.

    Read the article

  • Ubuntu: Graphics freeze

    - by Phil
    We have recently updated a Java application which runs on an Ubuntu PC, and are now experiencing a graphics problem that we didn't encounter before. The system runs constantly, and at random intervals, maybe twice a month but sometimes within a few days, the system's graphics freeze and the GNOME panels stop responding. Here is an extract from the syslog:

        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970021] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung
        Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970177] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 937626 at 937625)

    Read the article

  • C# Graphics without Windows form

    - by teishu
    Hi, could someone provide an example of drawing graphics without a Windows Form? I have an app that doesn't have a console window or a Windows Form, but I need to draw some basic graphics (lines, rectangles, etc.). Hope that makes sense. Thanks in advance, J
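    One reading of the question is that the drawing only needs to happen off-screen, for example into an image file, in which case no window is required at all; GDI+ can render into a Bitmap directly. The sketch below assumes that interpretation (and a reference to System.Drawing); drawing straight onto the desktop without any window is a different problem.

        // Sketch: render lines and rectangles into an off-screen Bitmap and save it.
        using System.Drawing;
        using System.Drawing.Imaging;

        class OffscreenDrawing
        {
            static void Main()
            {
                using (Bitmap bmp = new Bitmap(200, 100))
                {
                    using (Graphics g = Graphics.FromImage(bmp))
                    {
                        g.Clear(Color.White);
                        g.DrawLine(Pens.Black, 10, 10, 190, 90);     // a basic line
                        g.DrawRectangle(Pens.Red, 20, 20, 60, 40);   // a basic rectangle
                    }
                    bmp.Save("output.png", ImageFormat.Png);         // no form or console needed
                }
            }
        }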

    Read the article

  • Graphics cards so I can have 4 monitors

    - by oshirowanen
    I currently have a single old graphics card to which I have connected 2 monitors, giving me a big desktop of 2560x1024. If I get 2 of the following graphics card: http://www.ebuyer.com/238428-gigabyte-gts-450-1gb-gddr5-dual-dvi-mini-hdmi-out-pci-e-graphics-gv-n450-1gi will I be able to connect two monitors per graphics card, giving me a total resolution of 5120x1024? I guess what I'm asking is: can I simply put both graphics cards in, plug the monitors in, and have it all just work out of the box? I currently have 4 DVI monitors with a native resolution of 1280x1024 each.

    Read the article

  • Cast Graphics to Image in C#

    - by WebDevHobo
    I have a PictureBox on a Windows Form, and I do the following to load a PNG file into it:

        Bitmap bm = (Bitmap)Image.FromFile("Image.PNG", true);
        Bitmap tmp;

        public Form1()
        {
            InitializeComponent();
            this.tmp = new Bitmap(bm.Width, bm.Height);
        }

        private void pictureBox1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImage(this.bm, new Rectangle(0, 0, tmp.Width, tmp.Height),
                                 0, 0, tmp.Width, tmp.Height, GraphicsUnit.Pixel);
        }

    However, I need to draw things on the image and then have the result displayed again. Drawing rectangles can only be done via the Graphics class, so I'd need to draw the rectangles on the image, make it an instance of the Image class again, and save that to this.bm. I can add a button that executes this.pictureBox1.Refresh();, forcing the PictureBox to be painted again, but I can't cast Graphics to Image, so I can't save the edits to the this.bm bitmap. That's my problem, and I see no way out.
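    A possible way out, sketched below under the assumption that this.bm is meant to accumulate the edits: rather than casting Graphics to Image (which is not possible), obtain a Graphics bound to the bitmap with Graphics.FromImage, draw onto it, and then invalidate the PictureBox so the Paint handler shows the updated bitmap. The helper name and the pen color are just placeholders.

        // Sketch: draw a rectangle directly into the existing bitmap, then repaint.
        private void DrawRectangleOnBitmap(Rectangle rect)
        {
            using (Graphics g = Graphics.FromImage(this.bm))   // a Graphics that writes into bm
            {
                g.DrawRectangle(Pens.Red, rect);               // the edit is now stored in bm
            }
            this.pictureBox1.Invalidate();                     // triggers pictureBox1_Paint
        }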

    Read the article

  • Copy Small Bitmaps onto Large Bitmap with Transparency Blend: What is faster than Graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps();   // small image sizes vary, approx. 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        {
            using (Graphics largeGraphics = Graphics.FromImage(large))
            {
                for (var i = 0; i < bitmaps.Length; i++)
                {
                    // this is the bottleneck
                    largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
                }
            }

            // save while 'large' is still in scope
            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return (done);
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but not seen a significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
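    Before dropping to BitBlt or unsafe code, one thing sometimes worth measuring is the Graphics state and the DrawImage overload itself: the (Image, int, int) overload can rescale by the bitmaps' DPI, while an overload with explicit source and destination rectangles in pixels does not. The sketch below (it needs using System.Drawing.Drawing2D;) is only a hedged suggestion to benchmark, not a guaranteed win, and it keeps SourceOver compositing so the alpha blending stays the same.

        // Hedged sketch: cheaper Graphics settings plus a pixel-exact DrawImage overload.
        largeGraphics.CompositingMode = CompositingMode.SourceOver;          // keep alpha blending
        largeGraphics.CompositingQuality = CompositingQuality.HighSpeed;
        largeGraphics.InterpolationMode = InterpolationMode.NearestNeighbor;
        largeGraphics.SmoothingMode = SmoothingMode.None;
        largeGraphics.PixelOffsetMode = PixelOffsetMode.None;

        Bitmap small = bitmaps[i];
        largeGraphics.DrawImage(small,
            new Rectangle(locations[i].x, locations[i].y, small.Width, small.Height),
            0, 0, small.Width, small.Height, GraphicsUnit.Pixel);            // avoids DPI-based scaling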

    Read the article

  • nvidia graphics resolution problem

    - by Deepak Adhikari
    I am currently using Ubuntu 12.04 on an Acer Aspire TimelineX 3830TG with a 2 GB Nvidia GeForce GT 540M graphics card. To enable my graphics card I followed these steps:

    1. Activated nvidia_current and nvidia_current_updates from Additional Drivers.
    2. Ran sudo nvidia-xconfig.
    3. Rebooted.

    After these steps I got the following errors:

    1. My resolution is 640x480 (there is no 1366x768 option in Display; previously 1366x768 was available, before the nvidia-xconfig command was entered).
    2. When I open nvidia-settings it shows the following error: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart the X server."

    Problems to be solved: change the resolution to 1366x768, and also how do I check whether my Nvidia graphics are working or not? Please help me solve these issues; I am seriously in need of my graphics card. I want my Nvidia card to work as smoothly as my Intel graphics, and I am not willing to use Bumblebee. With regards, an Ubuntu user

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • How can I configure the Aiptek T-6000U graphics tablet?

    - by mejpark
    I followed the AiptekTablet instructions on the Ubuntu Wiki to configure 11.04 for use with my graphics tablet. I installed the xserver-xorg-input-aiptek package and created two files with the options detailed on the Wiki page:

        $ cat /lib/udev/rules.d/69-xserver-xorg-input-aiptek.rules
        ACTION!="add|change", GOTO="xorg_aiptek_end"
        KERNEL!="event[0-9]*", GOTO="xorg_aiptek_end"
        ATTRS{idVendor}=="08ca", ENV{x11_driver}="aiptek", SYMLINK+="input/aiptektablet"
        LABEL="xorg_aiptek_end"

        $ cat /usr/share/X11/xorg.conf.d/50-aiptek.conf
        Section "InputClass"
            Identifier "pen"
            MatchProduct "Aiptek|AIPTEK|aiptek"
            MatchDevicePath "/dev/input/event*"
            Driver "aiptek"
            Option "USB" "on"
            Option "Type" "stylus"
            Option "Mode" "absolute"
            Option "zMin" "0"
            Option "zMax" "511"
        EndSection

    The 50-aiptek.conf file was originally called 10-aiptek.conf, as in the Wiki, but an Aiptek tablet installation help thread on the Ubuntu Forums suggested changing 10 to 50. Any ideas? Thank you.

    Read the article

  • How do I install the Intel 82845 graphics driver -- videos are really slow?

    - by Mahesh Bhat
    I installed Lubuntu on my machine, and it seems the Intel 82845 graphics driver wasn't installed: videos play frame by frame. Many say the Ubuntu kernel has built-in support for it, but that doesn't seem to be the case here. There is a website, www.intellinuxgraphics.org, that has drivers for many kinds of Linux distributions, but I find it difficult to work out how to install them on my Lubuntu. Can anyone elaborate on how that can be done?

    Output of the command dmesg: http://paste.ubuntu.com/1058720/
    Output of the command lsmod: http://paste.ubuntu.com/1058724/

    Read the article

  • How do I install drivers for the Intel Mobile GM965/GL960 integrated graphics controller?

    - by SanraS
    I have a "inspiron 1525"and am running 11.10 version of Ubuntu, I noticed that when scrolling I can see a lag and also when playing back HD content it will freeze the video for a few seconds and then resume, none of those happened on winVista when I had it installed, now I do prefer Ubuntu and would like to fix this rather than go back to windows. the chip is an "intel corporation mobile gm965 gl960 integrated graphics controller" as stated by "lspci" I don't know much about dealing with installing drivers that's why I'm asking for help and would like to be commands that I can put on terminal, rather than go here and then there and then look for this or that. I think I'll get lost in the middle but terminal I can follow. thanks for your help.

    Read the article

  • Graphics/Bitmap Limits?

    - by Dean
    I'm having some weird problems with Graphics and Bitmap. I have a Graphics object displayed on a PictureBox, and I'm capturing the MouseMove and MouseClick events, which give the X and Y position of the mouse on the image. But if the Y position goes above 32775 it goes negative, which breaks everything, and if the image is bigger than 65535 pixels it stops displaying at all. Any ideas how these problems can be fixed? Thanks. Example code: http://pastebin.com/YEX0XD1q. Just click "Make 10,000 Bigger" about 4 times, then scroll down; on the right it shows the mouse X and Y position, and as you move down through the image and hover over the red area, if you go down far enough Y goes negative.
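    For what it's worth, a Y wrap at roughly 32767 looks like the signed 16-bit packing of mouse coordinates in the underlying mouse message. A workaround sometimes suggested, sketched below as an assumption rather than a verified fix for this exact code, is to ignore the event's packed coordinates and read the cursor position directly, then map it into the control; the control and label names here are placeholders.

        // Hedged sketch: compute full 32-bit client coordinates instead of using e.X / e.Y.
        private void pictureBox1_MouseMove(object sender, MouseEventArgs e)
        {
            Point p = pictureBox1.PointToClient(Control.MousePosition);   // not limited to 16 bits
            labelX.Text = p.X.ToString();
            labelY.Text = p.Y.ToString();
        }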

    Read the article

  • Is there any graphics library at a higher level than OpenGL?

    - by Turtle
    Hello, I am looking for a graphics library for 3D reconstruction research, to build my own viewer on top of it. OpenGL seems too low level; I have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. I think something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped its development.

    Read the article

  • Ubuntu 14.04 doesn't detect my discrete GPU

    - by user258887
    I recently purchased a laptop with an Nvidia GeForce 860M and have installed Ubuntu 14.04. On my old laptop I had 12.04, which automatically filled Additional Drivers with Nvidia drivers, but on this computer the only thing in Additional Drivers is Qualcomm. So I manually installed Nvidia, but X Server Settings doesn't seem to detect any GPU. lspci | grep VGA reports only my integrated Intel GPU, but lspci -v reports many things, including the Nvidia GPU:

        01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
            Subsystem: ASUSTeK Computer Inc. Device 157d
            Flags: fast devsel, IRQ 16
            Memory at ec000000 (32-bit, non-prefetchable) [size=16M]
            Memory at c0000000 (64-bit, prefetchable) [size=256M]
            Memory at d0000000 (64-bit, prefetchable) [size=32M]
            I/O ports at e000 [size=128]
            Expansion ROM at ed000000 [disabled] [size=512K]
            Capabilities: access denied

    I don't know what any of that means, and I'm not sure if it's supposed to say 'access denied'. I need my GPU for CUDA and OpenGL programming. What else can I do to figure out why this isn't working?

    Read the article

  • Is Graphics.DrawImage asynchronous?

    - by Roy
    Hi all, I was just wondering: is Graphics.DrawImage() asynchronous? I'm struggling with a thread-safety issue and can't figure out where the problem is. I use the following code in the GUI thread:

        protected override void OnPaint(PaintEventArgs e)
        {
            lock (_bitmapSyncRoot)
            {
                e.Graphics.DrawImage(_bitmap, _xPos, _yPos);
            }
        }

    And I have the following code in a separate thread:

        private void RedrawBitmapThread()
        {
            Bitmap newBitmap = new Bitmap(_width, _height);
            // Draw bitmap //

            Bitmap oldBitmap = null;
            lock (_bitmapSyncRoot)
            {
                oldBitmap = _bitmap;
                _bitmap = newBitmap;
            }

            if (oldBitmap != null)
            {
                oldBitmap.Dispose();
            }

            Invoke(Invalidate);
        }

    Could that explain an AccessViolationException? The code is running on a Windows Mobile 6.1 device with Compact Framework 3.5.
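    One thing that can be ruled out cheaply, sketched below purely as an assumption about where the race might be rather than a diagnosis: dispose the old bitmap on the UI thread instead of on the worker, so disposal can never overlap an OnPaint that is still using it. An EventHandler is used for the Invoke call since that delegate type is accepted by Control.Invoke on the Compact Framework.

        // Hedged sketch: swap under the lock as before, but defer disposal of the
        // old bitmap to the UI thread so it cannot race with an in-flight OnPaint.
        private void RedrawBitmapThread()
        {
            Bitmap newBitmap = new Bitmap(_width, _height);
            // ... draw into newBitmap ...

            Bitmap oldBitmap;
            lock (_bitmapSyncRoot)
            {
                oldBitmap = _bitmap;
                _bitmap = newBitmap;
            }

            Invoke(new EventHandler(delegate(object s, EventArgs args)
            {
                Invalidate();                // repaint with the new bitmap
                if (oldBitmap != null)
                {
                    oldBitmap.Dispose();     // runs on the UI thread, serialized with OnPaint
                }
            }));
        }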

    Read the article

  • 2d java Graphics

    - by LukeG
    I am new to Java 2D graphics and I have a problem handling the mouse-click event. Can you tell me why nothing happens after the mouse status is updated to clicked? What I want to do is change the image in the array at [0][2] to another image, but nothing happens. Thanks for your help in advance.

        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.event.*;
        import java.awt.*;
        import javax.swing.ImageIcon;
        import javax.swing.*;

        public class Board extends JPanel implements MouseListener {

            private static boolean[] keyboardState = new boolean[525];
            private static boolean[] mouseState = new boolean[3];
            private static Image[][] images;
            Image house;
            int w = 0;
            int h = 0;
            int xPos;
            int yPos;
            ImageIcon ii = new ImageIcon(this.getClass().getResource("house.gif"));
            ImageIcon iii = new ImageIcon(this.getClass().getResource("house1.gif"));

            public Board() {
                house = ii.getImage();
                h = house.getHeight(null);
                w = house.getWidth(null);
                images = new Image[10][10];
                for (int i = 0; i < 10; i++) {
                    for (int j = 0; j < 10; j++) {
                        images[i][j] = house;
                    }
                }
                addMouseListener(this); // register the panel as its own listener; without this no mouse events arrive
            }

            public void paint(Graphics g) {
                Graphics2D g2d = (Graphics2D) g;
                for (int i = 0; i < 10; i++) {
                    for (int j = 0; j < 10; j++) {
                        g2d.drawImage(images[i][j], w * i, h * j, null);
                    }
                }
                //g2d.drawImage(house, 15, 15, null);
            }

            public void checkMouse() {
                if (mouseState[0]) {
                    images[0][2] = iii.getImage();
                    repaint();
                    super.repaint();
                }
            }

            @Override
            public void mousePressed(MouseEvent e) {
                mouseKeyStatus(e, true);
                checkMouse();
            }

            @Override
            public void mouseReleased(MouseEvent e) {
                mouseKeyStatus(e, false);
                repaint();
            }

            // Remaining MouseListener methods required by the interface.
            @Override
            public void mouseClicked(MouseEvent e) { }

            @Override
            public void mouseEntered(MouseEvent e) { }

            @Override
            public void mouseExited(MouseEvent e) { }

            public static boolean mouseButtonState(int button) {
                return mouseState[button - 1];
            }

            private void mouseKeyStatus(MouseEvent e, boolean status) {
                if (e.getButton() == MouseEvent.BUTTON1)
                    mouseState[0] = status;
                else if (e.getButton() == MouseEvent.BUTTON2)
                    mouseState[1] = status;
                else if (e.getButton() == MouseEvent.BUTTON3)
                    mouseState[2] = status;
            }
        }

    Read the article

  • Simple graphics import and manipulation

    - by gnuboi
    What do I need to do something like this?

    1. The user builds a diagram and tags certain boxes and circles with text.
    2. The user imports the diagram into my program.
    3. My program processes the tagged boxes and circles and fills them with different colors.
    4. My program exports the color-filled diagram.

    So the questions are: What graphics format supports taggable boxes and circles? What existing application should I recommend users use to produce diagrams in that format? Would this graphics format be easily readable in code, for things like reading tags and coloring in the associated boxes and circles? Free/open-source libraries and apps are desired, as this would be for academic use. Thank you!

    Read the article

  • Question about separating game core engine from game graphics engine...

    - by Conrad Clark
    Suppose I have a SquareObject class, which implements IDrawable, an interface that contains the method void Draw(). I want to separate the drawing logic itself from the game core engine. My main idea is to create a static class which is responsible for dispatching actions to the graphics engine:

        public static class DrawDispatcher<T>
        {
            private static Action<T> DrawAction = new Action<T>((ObjectToDraw) => { });

            public static void SetDrawAction(Action<T> action)
            {
                DrawAction = action;
            }

            // A plain static method: extension methods cannot live in a generic class,
            // so the extension wrapper lives in the Extensions class below.
            public static void Dispatch(T Obj)
            {
                DrawAction(Obj);
            }
        }

        public static class Extensions
        {
            public static void DispatchDraw<T>(this object Obj)
            {
                DrawDispatcher<T>.Dispatch((T)Obj);
            }
        }

    Then, on the core side:

        public class SquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

    And on the graphics side:

        public static class SquareRender
        {
            // stuff here

            public static void Initialize()
            {
                DrawDispatcher<SquareObject>.SetDrawAction((Square) => { /* my square rendering logic */ });
            }
        }

    Does this "pattern" follow best practices? As a plus, I could easily change the render scheme of each object by changing the DispatchDraw type parameter, as in:

        public class SuperSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

        public class RedSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<RedSquareObject>();
            }
            #endregion
        }

    RedSquareObject would have its own render method, but SuperSquareObject would render as a normal SquareObject. I'm just asking because I do not want to reinvent the wheel, and there may be a design pattern similar to (and better than) this that I'm not aware of. Thanks in advance!

    Read the article
