Search Results

Search found 29396 results on 1176 pages for 'multiple graphics devices'.


  • Multiple domains for different products?

    - by alexandertr
    I have a website with software applications. Is it good for SEO to choose one keyword-rich domain name for each of our software products, or should we stick to a single domain? From a user's perspective I think a keyword-rich domain would be easier to remember, as the user will instantly know what the product is for. But I have read articles saying that the latest trend in SEO is to stick to one domain for all of your products and invest in that single domain. Is that true? What do you advise? Should I register a separate domain for each of our products, or should I use a single domain? Should I do a 301 redirect with .htaccess to a single domain? And what about the sitemaps? Should I register all sites in Google Webmaster Tools and submit a separate sitemap for each one of them? Should my main site's sitemap include all pages, or should the separate domains have their own sitemaps?

    Read the article

  • Setting up multiple monitors KDE 12.04

    - by Brandon
    I have one 1920x1080 display that I am using as the primary display, with a 1600x900 display off to my side. I have tried to set up the smaller display to be positioned to the right of the larger display, but I can't; the only option that works is to use it as a clone. When I connect the smaller monitor to another DVI port on my AMD Radeon HD 6950, it doesn't work at all. I can provide more information if needed. Thank you!

    Read the article

  • How do I install graphics drivers for the Radeon HD 6380G on an HP Pavilion g6?

    - by Ryan
    I installed Ubuntu from Wubi using a live CD, but when I booted I just got a black screen, so I followed the instructions from this forum post (http://ubuntuforums.org/showpost.php?p=10089820&postcount=8), which successfully completed the Ubuntu installation. But now when I boot Ubuntu I just get a command prompt, and when I type in startx it comes up with an error. I am told I need to install graphics drivers; how do I do this? Thanks in advance, Ryan

    Read the article

  • Graphics library used by Windows Vista Freecell and Solitaire

    - by David Grayson
    Does anyone know what graphics library is used to create the graphics in the Solitaire and FreeCell games included with Windows Vista (e.g. XNA, GDI, WPF)? A good answer would include the name of the library and evidence. I looked at solitaire.exe with Dependency Walker and it shows many calls to gdi32.dll and gdiplus.dll, but also a call to Direct3DCreate9 in d3d9.dll.

    Read the article

  • Navigation graphics overlaid over video

    - by Hrishikesh Choudhari
    Hey, imagine I have a video playing. Can I have some sort of motion graphics played over that video, so that the moving graphics sit on a layer above the video, which would be the lower layer? I am comfortable with C++ and Python, so a solution that uses either of these two would be highly appreciated. Thank you in advance, Rishi

    Read the article

  • Ubuntu: Graphics freeze

    - by Phil
    We have recently updated a Java application which runs on an Ubuntu PC, and are now experiencing a graphics problem that we didn't encounter before. The system runs constantly, and at random intervals (maybe twice a month, but sometimes within a few days) the system's graphics freeze and the GNOME panels are frozen. Here is an extract from the syslog: Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970021] [drm:i915_hangcheck_elapsed] ERROR Hangcheck timer elapsed... GPU hung Jun 28 05:41:53 swimtag-NM10 kernel: [34802.970177] [drm:i915_do_wait_request] ERROR i915_do_wait_request returns -5 (awaiting 937626 at 937625)

    Read the article

  • C# Graphics without Windows form

    - by teishu
    Hi, could someone provide an example of drawing graphics without a Windows Form? I have an app that doesn't have a console window or a Windows Form, but I need to draw some basic graphics (lines, rectangles, etc.). Hope that makes sense. Thanks in advance, J
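
    A minimal sketch of one way to do this (not from the original question, and assuming a reference to System.Drawing): render to an off-screen Bitmap with Graphics.FromImage and save the result to a file, with no form or console involved.

        using System.Drawing;
        using System.Drawing.Imaging;

        class OffscreenDrawing
        {
            static void Main()
            {
                // Draw onto an in-memory bitmap instead of a window surface.
                using (var bitmap = new Bitmap(200, 100))
                using (var g = Graphics.FromImage(bitmap))
                {
                    g.Clear(Color.White);
                    g.DrawLine(Pens.Black, 10, 10, 190, 90);    // a diagonal line
                    g.DrawRectangle(Pens.Red, 20, 20, 60, 40);  // an outlined rectangle
                    bitmap.Save("output.png", ImageFormat.Png); // persist the result
                }
            }
        }

    The same Bitmap could instead be handed to whatever surface the app eventually displays; the point is that a Graphics can be obtained from any Image, not only from a form's paint event.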

    Read the article

  • Game testing on Android - emulator or real devices?

    - by n00bfuscator
    I am working at a localization agency and we have been approached by a client about testing their games on iOS as well as Android. Testing on iOS seems fairly easy, as we can just buy a couple of devices and we should be covered. For Android it seems to be completely different. From what I found, the emulator can cover all API levels, screen sizes and such, but I hear it's buggy and nothing can replace testing on real devices. With the vast number of Android devices out there and the rate at which new devices are released, it seems impossible to keep up. How can I test games (localization and functional) on Android covering all compatible devices?

    Read the article

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
    Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated one? Obviously when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But let's say you're doing something graphics-medium but not graphics-intensive: is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons? Off the top of my head, I can think of a scenario where the discrete GPU supports hardware decoding of a particular video codec; I'd imagine there is a "price point" where using that GPU saves more energy than decoding fully in software would. But I think most GPUs, integrated or discrete, pretty much decode just plain H.264. Maybe there is something more complicated, perhaps desktop/windowing animations or a Flash animation on a website (not an embedded Flash video), where the discrete GPU uses enough less power to make up for switching to it? I guess this question can be summed up as: on a laptop with two GPUs, if you don't care about performance, can you say beyond doubt that you should always use the integrated GPU for maximum battery life?

    Read the article

  • ATI FirePro will not detect a second DVI-D monitor

    - by John
    OK, so weird issue here. I was previously running 6 screens off of 3 of the older ATI FirePro graphics cards, but they had a problem with the heat sink getting too hot and warping the PCB, resulting in total failure of the cards. To replace my three dead cards I purchased a new ATI FirePro with the newer heat sink design. I'm only using one at the moment, to make sure they've fixed the problem before I waste more money on 2 more cards, but this is where things start to get weird. The FirePros only have one port on them; they connect to two monitors via a splitter cable going from the one port to two DVI connectors for the screens. When I plug two identical monitors in via their DVI inputs, no matter what I do Windows and Catalyst will only detect one screen. However, if I use the VGA input on one of the screens with a VGA-to-DVI adapter to plug it into the card, it works fine. This confuses me greatly. I'm currently using the ATI FirePro 2270 graphics card with identical Dell U2311H screens. I can post the rest of the system spec as well if needed, but I wouldn't have thought it would make much difference, as it had no problem handling 6 screens before the graphics cards failed. Naturally both Catalyst and the ATI drivers are the most current version. ATI tech support has been absolutely zero help; they seemed to get stumped as soon as I verified that both screens were plugged in and connected properly. Anyone have any ideas?

    Read the article

  • One monitor spilling over into the other monitor: how to do a 100% reset of the GNOME graphics configuration

    - by Paul Nathan
    I had to kill a VMware process and afterwards my monitor configuration is buggy. I have 2 monitors in a side-by-side configuration; the right-hand monitor is the secondary monitor. On its right-hand side there are about 50 pixels showing from the left side of the left-hand monitor (i.e., as if the display were wrapped around). Further, my mouse clicks register about 50 pixels sideways from where they should; it's as if those 50 pixels between monitors got gobbled. What have I done? I've reset the screen configuration in multiple ways, using xrandr, the multiple monitors app, etc. This persists in different side-by-side configurations, and also persists with another user. It does not occur with XFCE. Resetting the window manager with the Compiz reset WM app does not fix this. I've concluded the burn-to-the-ground approach is likely the best, and would like to do a 100% reset of my graphics settings. It's an Intel integrated chipset. Removing ~/.config/monitors.xml did not work. Also, interestingly, the mouse can mouse over the 50 errant pixels on the right-hand side of the right-hand monitor. I hypothesize that it's a compositing problem occurring at the layer where the background, selection, and clicks are caught. Also, inverting the right-hand monitor removes the issue, but renders the screen unusable. Even more data points: this happens in KDE as well. Sometimes logging into GNOME and running xrandr --output DVI1 --auto resets it, but the issue immediately reappears when I press alt-tab. With the Compiz Application Switch plugin turned on, the workspace is 'pushed back' a bit, and the slice on the RHS follows it as well. I'm wondering if it's a flaw in the Compiz workspace compositing configuration; I suspect the error is in the compositing configuration. I installed 11.10.

    Read the article

  • Cast Graphics to Image in C#

    - by WebDevHobo
    I have a PictureBox on a Windows Form. I do the following to load a PNG file into it: Bitmap bm = (Bitmap)Image.FromFile("Image.PNG", true); Bitmap tmp; public Form1() { InitializeComponent(); this.tmp = new Bitmap(bm.Width, bm.Height); } private void pictureBox1_Paint(object sender, PaintEventArgs e) { e.Graphics.DrawImage(this.bm, new Rectangle(0, 0, tmp.Width, tmp.Height), 0, 0, tmp.Width, tmp.Height, GraphicsUnit.Pixel); } However, I need to draw things on the image and then have the result displayed again. Drawing rectangles can only be done via the Graphics class. I'd need to draw the needed rectangles on the image, make it an instance of the Image class again, and save that to this.bm. I can add a button that executes this.pictureBox1.Refresh();, forcing the PictureBox to be painted again, but I can't cast Graphics to Image. Because of that, I can't save the edits to the this.bm bitmap. That's my problem, and I see no way out.
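
    A common way around this (a sketch, not from the original post; the button handler name is hypothetical) is to skip the cast entirely: Graphics.FromImage(this.bm) returns a Graphics whose drawing operations are written straight into the bitmap's pixels, so refreshing the PictureBox afterwards shows the edits.

        // Hypothetical button handler; this.bm and pictureBox1 are the fields from the question.
        private void drawButton_Click(object sender, EventArgs e)
        {
            using (Graphics g = Graphics.FromImage(this.bm))
            {
                g.DrawRectangle(Pens.Red, 10, 10, 50, 30); // the edit lands inside this.bm
            }
            this.pictureBox1.Refresh(); // pictureBox1_Paint redraws this.bm, edits included
        }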

    Read the article

  • Copy Small Bitmaps onto Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function: graphics.DrawImage(smallBitmap, x, y); Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage: XY[] locations = GetLocs(); Bitmap[] bitmaps = GetBmps(); //small images sizes vary approx 30px x 30px using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb)) using (Graphics largeGraphics = Graphics.FromImage(large)) { for(var i=0; i < largeNumber; i++) { //this is the bottleneck largeGraphics.DrawImage(bitmaps[i], locations[i].x , locations[i].y); } } var done = new MemoryStream(); large.Save(done, ImageFormat.Png); done.Position = 0; return (done); The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but not seen significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
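
    One commonly suggested tweak (a sketch, not from the original post; it changes GDI+ rendering-quality settings only, not the alpha-blend math) is to relax the quality settings on the destination Graphics before the draw loop, since the defaults favour quality over speed.

        using System.Drawing;
        using System.Drawing.Drawing2D;

        static void ConfigureForSpeed(Graphics largeGraphics)
        {
            largeGraphics.CompositingMode = CompositingMode.SourceOver;      // keep the alpha blend
            largeGraphics.CompositingQuality = CompositingQuality.HighSpeed; // cheaper compositing
            largeGraphics.InterpolationMode = InterpolationMode.NearestNeighbor;
            largeGraphics.SmoothingMode = SmoothingMode.None;
            largeGraphics.PixelOffsetMode = PixelOffsetMode.None;
        }

    Whether this is enough depends on the bitmap sizes and counts; beyond that, the usual next step is working on the pixel data directly via LockBits, which trades simplicity for speed.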

    Read the article

  • Network and Storage Devices Throughput Chart

    - by zroiy
    With all of the different storage and network devices that surround our day-to-day life, understanding these devices' data transfer speeds can be somewhat confusing. Think about trying to identify the weakest link in a chain that starts with an external USB hard drive (or a flash drive) connected to an 802.11g Wi-Fi router: can you quickly say where the bottleneck in that chain is, the router or the storage device? Well, the following chart should give you an idea of the maximum throughput speeds of different devices, protocols and interfaces. Though these numbers can fluctuate (mostly for the worse, but sometimes for the better) due to factors such as OS overhead (or caching and optimization), multiple users or processes and so on, the chart can still provide basic information on the theoretical throughput different devices and protocols can reach. Enjoy. Link to the full size chart. References: http://en.wikipedia.org/wiki/Sata#SATA_revision_1.0_.28SATA_1.5_Gbit.2Fs.29 http://en.wikipedia.org/wiki/Usb http://en.wikipedia.org/wiki/Usb_3 http://en.wikipedia.org/wiki/802.11 http://mashable.com/2011/09/21/fastest-download-speeds-infographic/ http://en.wikipedia.org/wiki/Thunderbolt_(interface) http://www.computerworld.com/s/article/9220434/Thunderbolt_vs._SuperSpeed_USB_3.0 Icons: http://openiconlibrary.sourceforge.net/gallery2/?./Icons/devices/drive-harddisk-3.png

    Read the article

  • nvidia graphics resolution problem

    - by Deepak Adhikari
    I am currently using Ubuntu 12.04. I have an Acer Aspire TimelineX 3830TG with a 2GB NVIDIA GeForce GT 540M graphics card. To enable my graphics card I followed these steps: 1.) I activated nvidia_current and nvidia_current_updates from Additional Drivers; 2.) sudo nvidia-xconfig; 3.) then reboot. Following these steps I got the following errors: 1.) my resolution is 640x480 (there is no 1366x768 option in Displays; 1366x768 was available before the nvidia-xconfig command was entered); 2.) when I open nvidia-settings it shows me the following error: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart the X server." Problems that need to be solved: 1.) change the resolution to 1366x768; 2.) also, how do I check whether my NVIDIA graphics is working or not? Please, someone help me solve these issues; I am seriously in need of my graphics card. I want my NVIDIA graphics card to work as smoothly as my Intel graphics. I am not willing to use Bumblebee. With regards, ubuntu user

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • How do I install the Intel 82845 graphics driver -- videos are really slow?

    - by Mahesh Bhat
    I installed Lubuntu on my machine and it seems the Intel 82845 graphics driver wasn't installed; videos play frame by frame. Many say the Ubuntu kernel has built-in support for it, but it seems it does not. The website www.intellinuxgraphics.org has drivers for many kinds of Linux distributions, but I find it difficult to work out how to install them on my Lubuntu. Can anyone elaborate on how that can be installed? Output of the command dmesg: http://paste.ubuntu.com/1058720/ Output of the command lsmod: http://paste.ubuntu.com/1058724/

    Read the article

  • How to install drivers for the Intel Corporation Mobile GM965/GL960 integrated graphics controller?

    - by SanraS
    I have a "inspiron 1525"and am running 11.10 version of Ubuntu, I noticed that when scrolling I can see a lag and also when playing back HD content it will freeze the video for a few seconds and then resume, none of those happened on winVista when I had it installed, now I do prefer Ubuntu and would like to fix this rather than go back to windows. the chip is an "intel corporation mobile gm965 gl960 integrated graphics controller" as stated by "lspci" I don't know much about dealing with installing drivers that's why I'm asking for help and would like to be commands that I can put on terminal, rather than go here and then there and then look for this or that. I think I'll get lost in the middle but terminal I can follow. thanks for your help.

    Read the article

  • 50 Billion Served: Java Embedded on Devices

    - by Tori Wieldt
    It doesn't matter if it is 50 billion or 24 billion; suffice it to say that there will be MANY connected devices in the year 2020. With just 24 billion devices, they will outnumber humans six to one! So as a developer, you don't want to ignore this opportunity. What if you could use your Java skills and deploy an app to a fraction of these devices (don't be greedy, how about just, say, 118,000 of them)? Fareed Suliman, Java ME Product Manager, had lots of good news for Java developers in his presentation Modernizing the Explosion of Advanced Microcontrollers with Embedded Java at ARM TechCon in Santa Clara, CA last week. "A radical architecture shift is underway in this space, from proprietary to standards-based," he explained. He pointed out several advantages to using Embedded Java for devices: Java is a proven and open standard. Java provides connectivity, encryption, location, and web services APIs. You don't have to focus on and keep reinventing the plumbing below the JVM. Abstracting the software from the hardware allows you to repeat your app across many devices. Abstracting the software from the hardware also allows parallel development, so you can get your app done more quickly. You already know Java (or you can hire lots of Java talent). Java is a full ecosystem, with Java Embedded plugins for IDEs like Eclipse and NetBeans. And Java ME allows for in-field software upgrades. Suliman mentioned two ways developers can start using Java Embedded today. Java Embedded Suite 7.0: a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, the Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to embedded devices such as gateways and concentrators. Oracle Java ME Embedded 3.2: designed and optimized to meet the unique requirements of small embedded, low-power devices such as microcontrollers and other resource-constrained hardware without screens or user interfaces. Think tiny. Really tiny. And think big. Read more about Java Embedded at the Oracle Technology Network, and read The Java Source blog post Java Embedded Releases from September.

    Read the article

  • LINQ Query using Multiple From and Multiple Collections

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace ConsoleApplication2
    {
        class Program
        {
            static void Main(string[] args)
            {
                var emps = GetEmployees();
                var deps = GetDepartments();

                var results = from e in emps
                              from d in deps
                              where e.EmpNo >= 1 && d.DeptNo <= 30
                              select new { Emp = e, Dept = d };

                foreach (var item in results)
                {
                    Console.WriteLine("{0},{1},{2},{3}", item.Dept.DeptNo, item.Dept.DName, item.Emp.EmpNo, item.Emp.EmpName);
                }
            }

            private static List<Emp> GetEmployees()
            {
                return new List<Emp>() {
                    new Emp() { EmpNo = 1, EmpName = "Smith", DeptNo = 10 },
                    new Emp() { EmpNo = 2, EmpName = "Narayan", DeptNo = 20 },
                    new Emp() { EmpNo = 3, EmpName = "Rishi", DeptNo = 30 },
                    new Emp() { EmpNo = 4, EmpName = "Guru", DeptNo = 10 },
                    new Emp() { EmpNo = 5, EmpName = "Priya", DeptNo = 20 },
                    new Emp() { EmpNo = 6, EmpName = "Riya", DeptNo = 10 }
                };
            }

            private static List<Department> GetDepartments()
            {
                return new List<Department>() {
                    new Department() { DeptNo = 10, DName = "Accounts" },
                    new Department() { DeptNo = 20, DName = "Finance" },
                    new Department() { DeptNo = 30, DName = "Travel" }
                };
            }
        }

        class Emp
        {
            public int EmpNo { get; set; }
            public string EmpName { get; set; }
            public int DeptNo { get; set; }
        }

        class Department
        {
            public int DeptNo { get; set; }
            public String DName { get; set; }
        }
    }
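
    For comparison, a short variant (not part of the original post): the double from above produces every employee/department pair that passes the where filter, whereas an explicit join on DeptNo keeps only the matching pairs.

        // A hedged variant: correlate the two collections on DeptNo instead of
        // taking the filtered cross product. This snippet assumes it is placed
        // inside Main, after emps and deps have been created.
        var joined = from e in emps
                     join d in deps on e.DeptNo equals d.DeptNo
                     select new { e.EmpName, d.DName };

        foreach (var item in joined)
        {
            Console.WriteLine("{0} works in {1}", item.EmpName, item.DName);
        }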

    Read the article

  • Graphics/Bitmap Limits?

    - by Dean
    I'm having some weird problems with Graphics and Bitmap. I have a Graphics object that is displayed on a PictureBox, and I'm capturing the MouseMove and MouseClick events, which give the X and Y position of the mouse on the image. But if the Y position goes above about 32775 it goes negative, which means everything breaks, and if the image is bigger than 65535 pixels it stops being displayed at all. Any ideas how these problems can be fixed? Thanks. Example code: http://pastebin.com/YEX0XD1q Just click "Make 10,000 Bigger" about 4 times, then scroll down; on the right it shows the mouse X and Y position, and as you move down through the image and hover over the red area, if you go far enough down the Y value goes negative.
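
    A sketch of one way around both limits (not from the original thread; panel1 and largeImage are hypothetical names): instead of a control tall enough to hit 16-bit coordinate and size limits, keep a normal-sized Panel, let AutoScrollMinSize supply the scrollbars, and translate between panel and image coordinates yourself.

        // Inside the form's constructor; panel1 is an ordinary Panel on the form,
        // largeImage is the oversized Bitmap to display.
        panel1.AutoScroll = true;
        panel1.AutoScrollMinSize = largeImage.Size;      // scrollbars sized to the image
        panel1.Scroll += (s, e) => panel1.Invalidate();  // repaint after scrolling

        panel1.Paint += (s, e) =>
        {
            // Shift drawing by the (negative) scroll offset, then draw at the origin.
            e.Graphics.TranslateTransform(panel1.AutoScrollPosition.X, panel1.AutoScrollPosition.Y);
            e.Graphics.DrawImage(largeImage, Point.Empty);
        };

        panel1.MouseMove += (s, e) =>
        {
            // Mouse coordinates stay within the small visible area, so they never wrap;
            // image coordinate = panel coordinate minus the scroll position.
            int imageX = e.X - panel1.AutoScrollPosition.X;
            int imageY = e.Y - panel1.AutoScrollPosition.Y;
        };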

    Read the article

  • Is there any graphics library at a higher level than OpenGL?

    - by Turtle
    Hello, I am looking for a graphics library for 3D reconstruction research, to develop my own viewer on top of an existing library. OpenGL seems too low-level and I have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit); however, it seems so abstract that I would need to master many concepts before I start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. I think something like O3D would be better, but O3D is for JavaScript and it seems that Google has already stopped its development.

    Read the article

  • Dell Latitude E6420 dual-boot Ubuntu + Windows 7 Optimus graphics problems

    - by Ryan
    I have a Dell Latitude E6420 laptop with Ubuntu 12.04 alongside Windows 7 (dual-boot), docked in a docking station with 2 DVI outputs. It took me a week of tinkering to get the dual external monitors to work in Ubuntu, and I had to disable the "Optimus" feature in the BIOS. But now neither external monitor is detected in Windows, and the resolution is also very low. Do you know how I can successfully dual-boot Windows 7 and Ubuntu on this machine using my 2 external DVI monitors? I have an open question here too, trying to resolve this same issue: http://askubuntu.com/questions/146933/dock-with-dual-external-dvi-monitors-with-intel-nvidia-optimus

    Read the article
