Search Results

Search found 4224 results on 169 pages for 'dual gpu'.

Page 84/169 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • Internet Explorer 9 At MIX10

    Check out this great Internet Explorer 9 (IE9) video interview from this year's MIX10 conference. John Hrvatin is a Lead Microsoft PM on the IE9 project. John is a smart guy who patiently answered all my questions about IE9. Thanks John! I highly recommend watching this video to see why IE9 sounds so exciting. In the video, John demos IE9 and openly discusses IE9 features and performance, HTML5 support, and the new IE9 JavaScript engine (JIT, multicore, GPU-powered).

    Read the article

  • Installed nvidia driver, activated it, and now Unity is gone. No bars, menus, nothing

    - by Noel
    I installed the nvidia driver (the ubuntu-x-swat ones), updated them, got the updates for them, and installed bumblebee. I restarted every time I did those steps, so no, I don't simply need to 'restart X'. I tried to run things using bumblebee, but bumblebee said it couldn't access the GPU driver. So I ran nvidia-settings, which said the drivers weren't in use, so I ran "sudo nvidia-xconfig", then restarted. Now my login screen looks different than it did before: it asks me whether I want to load "GNOME, GNOME - no effects, Cairo Dock - GNOME, System Default, or Ubuntu" when I log in, but worst of all, I no longer have any kind of GNOME/Unity GUI. There are no title bars above any windows, no close/minimize/maximize buttons. The Unity bar is gone and will not show up when I call it, and the top status bar is also no longer there.

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against when creating shadows. I want to be able to copy a float array to the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
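
    One commonly suggested approach (a minimal sketch, not from the original thread): store the floats in a single-channel floating-point texture (GL_R32F) instead of a depth texture, so the values are neither clamped nor interpreted as depth. This assumes a desktop OpenGL 3.0+ context (or ARB_texture_rg); the function name uploadFloatArray and the shadowMap sampler below are illustrative, not from the post.

      // Sketch: upload a 512x512 float array as a single-channel float texture.
      // Assumes an OpenGL 3.0+ context with GLEW (or similar) already initialized;
      // "smap" is the caller's array of float values.
      #include <GL/glew.h>

      GLuint uploadFloatArray(const float* smap, int w, int h)
      {
          GLuint tex;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_2D, tex);

          // GL_R32F keeps full float precision; values are not clamped to [0,1].
          glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RED, GL_FLOAT, smap);

          // No filtering or mipmaps: we want the raw values back in the shader.
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
          return tex;
      }

      // In a GLSL 1.30+ fragment shader the values come back unfiltered, e.g.:
      //   uniform sampler2D shadowMap;
      //   float d = texelFetch(shadowMap, ivec2(gl_FragCoord.xy), 0).r;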

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person (ego-perspective) game, I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but see only/mostly the inside of the box on screen, or vice versa, look from within the box to the outside). The "casual" way of performing such a check would involve frustum culling, but that approach would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on the CPU, not the GPU side): camera world position, camera direction, camera FOV, and two box corner world coordinates (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far plane, or say the 6 planes defining the frustum) or any per-pixel information (uv, view-space position, depth or the like). What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
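
    One crude way to approximate this without a frustum definition, sketched below under assumptions not stated in the post: build a view-projection matrix from the camera position, direction and FOV (here with GLM, and an aspect ratio and near/far planes chosen by the caller), project the eight box corners, and take the clipped screen-space bounding rectangle of the projected corners as the covered fraction. This ignores the perspective shape of the box faces and handles corners behind the camera only coarsely; all names are illustrative.

      // Rough screen-coverage estimate for an axis-aligned box: a sketch, not any
      // engine's API. Assumes GLM; fovYRadians/aspect come from the caller, and
      // camDir is assumed not to be parallel to the world up axis.
      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>
      #include <algorithm>

      float boxScreenCoverage(const glm::vec3& camPos, const glm::vec3& camDir,
                              float fovYRadians, float aspect,
                              const glm::vec3& boxMin, const glm::vec3& boxMax)
      {
          glm::mat4 view = glm::lookAt(camPos, camPos + camDir, glm::vec3(0, 1, 0));
          glm::mat4 proj = glm::perspective(fovYRadians, aspect, 0.1f, 1000.0f);
          glm::mat4 vp = proj * view;

          float minX = 1e9f, minY = 1e9f, maxX = -1e9f, maxY = -1e9f;
          bool anyInFront = false;

          for (int i = 0; i < 8; ++i) {
              glm::vec3 corner((i & 1) ? boxMax.x : boxMin.x,
                               (i & 2) ? boxMax.y : boxMin.y,
                               (i & 4) ? boxMax.z : boxMin.z);
              glm::vec4 clip = vp * glm::vec4(corner, 1.0f);
              if (clip.w <= 0.0f)           // corner behind the camera: skip it
                  continue;                 // (proper clipping would split the edge)
              anyInFront = true;
              glm::vec2 ndc(clip.x / clip.w, clip.y / clip.w);
              minX = std::min(minX, ndc.x);  maxX = std::max(maxX, ndc.x);
              minY = std::min(minY, ndc.y);  maxY = std::max(maxY, ndc.y);
          }
          if (!anyInFront) return 0.0f;

          // Clip the bounding rectangle to the viewport ([-1,1] in NDC) and express
          // its area as a fraction of the whole screen (area 4 in NDC units).
          float w = std::max(0.0f, std::min(maxX, 1.0f) - std::max(minX, -1.0f));
          float h = std::max(0.0f, std::min(maxY, 1.0f) - std::max(minY, -1.0f));
          return (w * h) / 4.0f;
      }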

    Read the article

  • Multiple volumetric lights

    - by notabene
    I recently read the GPU Gems 3 article Volumetric Light Scattering as a Post-Process. I like the idea of adding a volumetric light property to the real-time renderer I'm working on. The question is: will it work for multiple lights? Our renderer uses one render pass per light and additive blending to sum incoming light. I'm mostly convinced that it should work nicely. Do you agree? Maybe there could be a problem where light rays cross each other.

    Read the article

  • Video capture Performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) that read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies proportionally with the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library that simply grabs frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything. Cheese: low light ~12% CPU, bright light ~63% CPU. Camorama: low light ~7% CPU, bright light ~30% CPU. OpenCV C++ library (display in a single highgui window): low light ~13% CPU, bright light ~40% CPU (same test on Windows 7: 4-9%). Mplayer: no problem, 1-2% regardless of light levels. Note: if all I wanted to do was capture a feed from my webcam, I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.
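
    For reference, a minimal frame-grab loop along the lines of the test app described in the post (a sketch using OpenCV's C++ API, not the poster's actual code):

      // Minimal OpenCV capture test: grab frames from the default webcam and show
      // them in a single highgui window.
      #include <opencv2/opencv.hpp>

      int main()
      {
          cv::VideoCapture cap(0);          // default webcam
          if (!cap.isOpened())
              return 1;

          cv::Mat frame;
          for (;;) {
              cap >> frame;                 // blocking grab + decode
              if (frame.empty())
                  break;
              cv::imshow("webcam", frame);
              if (cv::waitKey(1) == 27)     // Esc to quit
                  break;
          }
          return 0;
      }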

    Read the article

  • Timing Calculations for Opengl ES 2.0 draw calls

    - by Arun AC
    I am drawing a cube in OpenGL ES 2.0 on Linux. I am calculating the time taken for each frame using the function below:

      #define NANO 1000000000
      #define NANO_TO_MICRO(x) ((x)/1000)

      uint64_t getTick()
      {
          struct timespec stCT;
          clock_gettime(CLOCK_MONOTONIC, &stCT);
          uint64_t iCurrTimeNano = ((uint64_t)NANO * stCT.tv_sec + stCT.tv_nsec); // in nanoseconds
          uint64_t iCurrTimeMicro = NANO_TO_MICRO(iCurrTimeNano);                 // in microseconds
          return iCurrTimeMicro;
      }

    I am running my code for 100 frames with a simple x-axis rotation. I am getting around 200 to 220 microseconds per frame. Does that mean I am getting around 1/220 microseconds = 4545 FPS? Is my GPU really that fast? I strongly doubt this result. What went wrong in the code? Regards, Arun AC
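
    One likely explanation (not confirmed in the thread): GL commands are queued asynchronously, so without forcing the GPU to finish, getTick() measures only command submission, not rendering. A minimal sketch of bracketing a frame with glFinish() so the measurement includes GPU completion, reusing the poster's getTick(); drawCube() is a stand-in for the actual draw calls, and for on-screen rendering the eglSwapBuffers call would normally pace the frame as well.

      // Sketch: measure one frame including GPU completion, not just submission.
      #include <GLES2/gl2.h>
      #include <stdint.h>

      extern uint64_t getTick();   // the poster's helper above
      extern void drawCube();      // placeholder for the actual draw calls

      uint64_t timeOneFrame()
      {
          uint64_t t0 = getTick();
          drawCube();                    // glDrawArrays / glDrawElements, etc.
          glFinish();                    // block until the GPU has finished the work
          uint64_t t1 = getTick();
          return t1 - t0;                // microseconds, now including GPU time
      }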

    Read the article

  • Unity Desktop Displays strange lines

    - by Alex Holsgrove
    Didn't quite know what title to give this problem, but hopefully the screenshot will explain more. I am running a Samsung R60+ laptop on Ubuntu 13.10 with a Radeon X1250 GPU. After I log in and the Unity desktop shows, I can see these strange lines at the top of the screen. I presumed it was perhaps a driver issue and found this article to see if I could resolve it: https://help.ubuntu.com/community/RadeonDriver. I cannot get on with Unity at all (where have all of the menus gone!), so perhaps reverting back to GNOME may be a solution in my case? I'd welcome any ideas, please.

    Read the article

  • Problems after installing Ubuntu 11.10

    - by Andrew Orr
    I'm having trouble with Ubuntu 11.10. It has to do with nomodeset. After I boot into Ubuntu, it goes to a purple screen for about 10 seconds and then goes blank; after that nothing happens. I've read other people's questions about this and I know it has to do with enabling nomodeset. This worked for me when I was using the LiveCD, but now Ubuntu is permanently installed as a dual-boot system. Going into recovery mode doesn't work, and pressing "e" in the boot loader and writing nomodeset after quiet splash doesn't work either. Holding Shift any time it's booting doesn't work. I don't know what to do anymore. I have an HP Pavilion dv6 laptop with an AMD A6-3400M CPU, and my GPU is an AMD Radeon HD 6520G. I've never worked with Linux before, so taking me through this step by step would be great. Thanks!

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process, and put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would: change the effect on every object, calculating a unique color to identify it and providing it to the shader; draw all the objects to a render target in memory; and get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instincts say that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?

    Read the article

  • Disable ATI Radeon graphics card and use intel graphics (switcheroo unavailable)

    - by user92356
    So I have an HP Envy with ATI Radeon 5450 + Intel switchable graphics. I think (though I'm not sure) that the Radeon is running right now on Ubuntu 12.04, because my laptop is making too much noise when I'm doing something non-GPU-intensive like word processing or web browsing. So what I want to do is disable the ATI Radeon and use the Intel instead. I looked around and it seems all the solutions use switcheroo, but I don't have it on my computer! I think this happened because I tried installing the proprietary driver (fglrx). Any and all help is 200% appreciated, thank you.

    Read the article

  • can't see myself in Skype video call

    - by seb
    I'm running 12.04 and I've installed Skype via the software centre. As with 11.10, everything works fine with 12.04 as well. There is only one thing that does not work: I can't see myself in Skype video calls. The video call works fine; I can see the other side and the other side can see me. The built-in microphone works. If I click on 'show myself' during the video call, nothing happens. I know that it works on Ubuntu in general, as I had it working a while back on a different machine (Xubuntu 11.04). Could that be related to the GPU? I'm now on an Intel/NVIDIA one. Any ideas where I can hunt for some options or tweaking?

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify to output this data anywhere else but to COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and need to write to only the specified ones afterwards. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: each particle is a point sprite in a single mesh and I can't use the scene graph's ability to sort transparent nodes (the system node should be properly sorted, though); particle position is computed on the shader from initial velocity, acceleration and time, and in order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid; and sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?
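
    One common workaround, sketched here as an assumption rather than the thread's answer: use a commutative blend mode. Pure additive blending produces the same result regardless of draw order, so the particles need no sorting; the trade-off is that particles can only brighten what is behind them (fine for fire or sparks, not for smoke-like "over" blending).

      // Sketch (GLES 2.0 C API): additive blending for unsorted particles.
      #include <GLES2/gl2.h>

      void beginUnsortedParticles()
      {
          glEnable(GL_BLEND);
          glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // src * alpha added onto the framebuffer
          glDepthMask(GL_FALSE);              // test against scene depth, but don't
                                              // write it, so particles don't clip
                                              // each other
      }

      void endUnsortedParticles()
      {
          glDepthMask(GL_TRUE);
          glDisable(GL_BLEND);
      }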

    Read the article

  • CUDA instructions ask to stop GDM but it doesn't exist

    - by Gabs
    I am trying to install and run some CUDA examples on Ubuntu 12.04. First of all, I downloaded all the .run files from http://developer.nvidia.com/cuda-downloads, then followed the instructions at http://developer.nvidia.com/nvidia-gpu-computing-, until I got hung up on the first step: Exit the GUI if you are in a GUI environment by pressing Ctrl-Alt-Backspace. Some distributions require you to press this sequence twice in a row; others have disabled it altogether in favor of a command such as sudo /etc/init.d/gdm stop. Still others require changing the system runlevel using a command such as /sbin/init 3 to exit the GUI. When I type the command sudo /etc/init.d/gdm stop, it returns: gdm command not found. Can anybody help me exit my GUI so that I can continue? Thank you in advance.

    Read the article

  • Brightness not working; HP Pavilion Dv6; ATI Radeon HD6770M

    - by Yogesh Dhamija
    I am new to Ubuntu, but so far I am loving it. I have been unable to change my brightness ever since I installed Ubuntu, but I figured that installing the latest ATI driver for my graphics card would fix it. I did, but I still can't change the brightness. The slider goes up and down, but the brightness stays the same (on full). I have switchable graphics: an ATI Radeon HD 6770M and an Intel integrated GPU. Since I am new to Linux, I am not familiar with the terminal, so you will have to spell everything out for me, including if you need more information and how to get it. Thanks.

    Read the article

  • Ubuntu 14.04LTS - runtime video card configuration through Radeon driver

    - by RJVB
    How does one configure Radeon video cards (power profile, vsync, etc.) when using the open-source Radeon driver? When I try the widely documented solution (against overheating) that worked for me under LMDE (confirmed with kernels up to 3.12.6), I get the following error:

      $ sudo cat /sys/class/drm/card0/device/power_profile
      default
      $ sudo sh -c "echo mid > /sys/class/drm/card0/device/power_profile"
      sh: echo: I/O error
      Exit 1

    And when I try the suggestions from Arch's ATI wiki, my modifications are simply ignored:

      $ sudo cat /sys/class/drm/card0/device/power_dpm_force_performance_level
      auto
      $ sudo sh -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
      $ sudo cat /sys/class/drm/card0/device/power_dpm_force_performance_level
      auto

    Is this something Ubuntu-specific, or something introduced with the 3.13 version of the Radeon driver? I'm encountering this on two laptops, one with a Radeon HD 6290 (integrated GPU), the other with a discrete RV710 card. The RV710 needs a specific power setting to prevent overheating under LMDE; fortunately, it doesn't seem to overheat with the Ubuntu default setting.

    Read the article

  • ACER Aspire V5-171 compatibility

    - by JamerTheProgrammer
    I'm thinking about buying a V5-171 with an i3 in it. I'm worried about Secure Boot, though; I've heard some people can't turn it off and then it won't work... I'm not shy about opening it up and replacing the hard drive with one that has Ubuntu preinstalled. I'm also worried about the Wi-Fi working; I have heard it's been dropping out for people quite a bit, along with the trackpad not working. I don't mind replacing the Wi-Fi stick inside (if that's even possible?). Is the GPU (HD4000, I think) supported in Ubuntu with full video acceleration? Thanks!

    Read the article

  • Getting vga_switcheroo with ATI Mobility Radeon 5650 HD to work

    - by stevejb
    Hello! I have a new HP dv7 laptop with an ATI Mobility Radeon HD 5650 graphics card, and also Intel graphics (switchable). I have done the following and want to understand what is going on with my graphics driver: resized Windows 7 and did a fresh install of 10.10; booted into 10.10 and things seemed to be working okay; enabled the ATI graphics, which was clearly running on the ATI rather than the Intel GPU (the desktop cube worked); rebooted and got an error that modprobe could not load modules.dep, and also something about i915 symbols; rebooted into recovery mode and modified xorg.conf to remove the mention of fglrx; rebooted, and the errors still show, but then X starts, though clearly on the Intel graphics. I would ideally like to be able to switch between the ATI and Intel graphics, a la vga_switcheroo. My first problem seems to be that the folder /sys/kernel/debug/vgaswitcheroo does not exist, hinting at some kind of kernel issue. What can I do to get this available? Thanks!

    Read the article

  • Black screen after upgrading from 13.04 to 13.10

    - by Harri
    Just upgraded from 13.04 to 13.10 and all I got was a black screen. The hardware I'm running is an Asus Zenbook UX31A (Intel GPU). I do hear the login-screen drums play, so the system does boot to the login screen. When I try to boot using kernel 3.11.0-12 in recovery mode, it tells me "initctl: event failed". If I then press Ctrl+Alt+F2, log in and run startx, it dies with "Fatal server error: no screens found". Here are some logs from /var/log/Xorg.0.log: http://pastebin.com/ZQasUKJx. Kernel 3.8.0-31 works OK, as did things before the upgrade.

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup. Scene 1: example object: a dirt road (nothing else); geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh. Scene 2: example object: a dirt road (nothing else); geometry: a simple mesh in the form of a road, but in this case maps and textures simulate the cracks, bumps, etc. So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this? Go heavy on the textures? Or have a blend of both?

    Read the article

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that has never happened to me before. I have OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and the old one are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, pushing all OpenGL default states, and comparing with the ones I initially get using the new card.)
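
    A hedged variant of the "push all default states" idea: rather than comparing driver defaults after the fact, explicitly set every piece of GL state the renderer depends on at startup, so the result no longer depends on what a particular driver happens to initialize. The excerpt below is illustrative only; the particular states and values that matter are specific to the renderer, not a known fix for this case.

      // Sketch: pin down the GL state the renderer relies on instead of trusting
      // driver defaults. Values here are examples, not recommendations.
      #include <GL/glew.h>

      void setExplicitRenderState()
      {
          glDisable(GL_BLEND);
          glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
          glBlendEquation(GL_FUNC_ADD);

          glEnable(GL_DEPTH_TEST);
          glDepthFunc(GL_LESS);
          glDepthMask(GL_TRUE);

          glDisable(GL_CULL_FACE);
          glCullFace(GL_BACK);
          glFrontFace(GL_CCW);

          glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
          glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
      }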

    Read the article

  • My ASUS U32U with fresh Xubuntu install shows a black screen 50-80% of the startups

    - by Jona Ekenberg
    I have recently installed Ubuntu 12.10 with the Xubuntu package on my ASUS U32U notebook (Radeon HD 6320 GPU). The issue I have is that, more often than not, after the GRUB selection screen I get a black screen, and three times in total white lines (kind of) have flashed very quickly (with maybe 5 seconds between each flash). I'm not even able to get to the login screen (nor the Xubuntu loading screen). At first I thought it was simply me having installed something dumb or having messed up some settings, but even after reformatting the partition and installing Ubuntu again, the problem remains. Before I formatted, xfce4's window manager wouldn't start either, but it does now (when I am able to see anything). I can access the virtual consoles (Ctrl+Alt+F1) but can't see anything; I've still managed to shut down the computer by using one (sudo shutdown -h now).

    Read the article

  • Frequent GUI pauses in Ubuntu 13.04 / Unity / Intel HD4000

    - by Simon
    I'm experiencing very frequent (and regular) GUI pauses on my system. Every 30 seconds (pretty much exactly) the GUI will freeze for maybe 0.25 to 0.5 seconds: the mouse stops moving, keys stop echoing and a stopwatch timer briefly pauses. I'm using the Intel graphics driver available from https://download.01.org/gfx/ubuntu/13.04/main. I've looked in a few places and tried a few things for a solution: I've checked cron and anacron for scheduled processes; I've disabled background processes (e.g. mysql, postgres, apache), not that these were doing anything anyway; I've checked the following posts and tried the suggestions there: "Unity GUI pauses/freezes for less than a few seconds" and "How to go about troubleshooting frequent system pauses"; I've watched the system using top and System Monitor and there are no spikes (or even blips) of CPU usage when the pauses occur; there are no obvious error messages in dmesg or syslog; and there is loads of free RAM (8GB+) and no swap usage. If it helps, it's a ZooStorm i5 laptop with an HD4000 GPU, 16GB RAM and an SSD. Any help or suggestions would be very gratefully received.

    Read the article
