Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 20/39 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • NVIDIA GeForce GT 555M driver install problem

    - by mir
    I recently installed Ubuntu 12.04 on my ASUS N55SF, which runs an NVIDIA GeForce GT 555M alongside integrated graphics via Optimus, but after installation I found no sign of the card in Ubuntu. I went to the NVIDIA website and installed the driver from there (ver 259.59, I guess); after a reboot I was stuck at 640x480 resolution and my card was still undetected. I reinstalled Ubuntu, but this time installed Bumblebee, yet System Settings still shows the graphics as 'unknown'. I don't know whether the driver has been installed or not, but from the 'unknown' I figure it's not there. When I run sudo nvidia-settings I get this message: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server." However, I can open the settings page with sudo nvidia-settings -c :8. I just want to know why Ubuntu still hasn't detected the GPU. Is it because Bumblebee disables it? And I forgot one very important thing... I AM A NOOB :(

    Read the article

  • Bumblebee - Poor performance with games

    - by user106880
    I have an Alienware M14x (GeForce 650M with Optimus) and got Bumblebee working correctly (trying to get ready for Steam for Linux :)). I'm running Ubuntu 12.10 with Unity. I get great framerates with glxgears and Sauerbraten (a game from the repositories), but when I run games like Bastion and Splice I get a pretty terrible framerate, in fact worse than on my integrated Intel GPU. I have nvidia-current installed from the ubuntu-x-swat/x-updates PPA, so it's driver version 310.xx. I should also add that I haven't been able to test other drivers because they seem to break GLX (X can't load the glx module). I've been troubleshooting this for days now and I'm very nearly out of ideas. Any hints on why only some games run so poorly with Bumblebee?

    Read the article

  • chromium-browser --proxy-server debugging

    - by user3678068
    Many places online point out that configuring the Chromium proxy from the command line can be achieved with chromium-browser --proxy-server=[username]:[password]@[host]:[port], but I got the same result on every request. Here's the command-line output right after executing that command (it does not appear to be relevant, and there is no new output when I try to visit a page): libGL error: failed to authenticate magic 30 libGL error: failed to load driver: vboxvideo ATTENTION: default value of option force_s3tc_enable overridden by environment. [29551:29551:0606/160459:ERROR:sandbox_linux.cc(268)] InitializeSandbox() called with multiple threads in process gpu-process I have double-checked that the proxy credentials work with the FoxyProxy Chrome plugin. What else can I try to figure this out? [Edit] Going to chrome://net-internals/#proxy and reading "Effective proxy settings": if I run chromium-browser with no flags, I get Use DIRECT connections. Source: GSETTINGS. If chromium-browser --proxy-server=[host]:[port], I get a message box requesting a login, and under "Effective proxy settings": Proxy server: [host]:[port]. If chromium-browser --proxy-server=[user]:[pass]@[host]:[port], "Effective proxy settings" shows: Use DIRECT connections

    Read the article

  • "Unable to create OpenGL 3.3 context (flags 0, profile 1)"

    - by Tsvetan
    Trying to run any of McKesson's well-known tutorials on a friend's laptop results in the aforementioned exception. I read that in order to run applications which use OpenGL 3.3 you need at least an ATI HD-series or Nvidia 8xxx-series GPU. He has an ATI HD-class graphics processor, which (maybe) rules out that issue. I also read that this error may result from old drivers. He updated his drivers, but that didn't solve the problem. The tutorials are built as described in the book's introduction and glsdk is installed. If more information is needed, say so and I will provide it. What are the other reasons for this kind of exception? And how can I fix them?
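    A quick way to narrow this down is to check what GL version the installed driver actually exposes before requesting a 3.3 context. Below is a minimal, hedged sketch (assuming freeglut is available; the window name is illustrative) that creates a default context and prints the version and renderer strings. If GL_VERSION reports something below 3.3, the driver/GPU combination simply cannot satisfy the tutorials' context request.

        #include <GL/freeglut.h>
        #include <cstdio>

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
            glutCreateWindow("gl-version-probe");   // default (compatibility) context

            // What the driver really exposes; compare against the 3.3 the tutorials request.
            std::printf("GL_VERSION  : %s\n", (const char*)glGetString(GL_VERSION));
            std::printf("GL_RENDERER : %s\n", (const char*)glGetString(GL_RENDERER));
            return 0;
        }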

    Read the article

  • Using a DisplayLink USB video adapter on Ubuntu 12.10

    - by Jason R
    The line of USB video adapters made by DisplayLink has a somewhat sordid history under Linux. In past Ubuntu releases, the process of getting them to work has been somewhat difficult, inspiring a number of past questions on this site: example 1 example 2 example 3 However, there are some indications that version 3.5 of the Linux kernel (which is used by 12.10) contains better support for these adapters, which should make them easier to use. I currently have a single-GPU machine (it is an Nvidia adapter) with dual monitor outputs. I would like to add the DisplayLink adapter to drive a third external monitor. How can I set this up on Ubuntu 12.10?

    Read the article

  • gstreamer vaapi problem

    - by squallbayu
    I installed gstreamer-vaapi from this PPA: ppa:guido-iodice/video sudo apt-get install gstreamer0.10-vaapi libgstvaapi-x11-0 libgstvaapi0 but if I run Totem movie player (via terminal) it shows this error: (totem:3383): GLib-GObject-WARNING **: g_object_set_valist: object class 'TotemScrsaver' has no property named 'reason' (totem:3383): GLib-GObject-WARNING **: value "10752000" of type 'guint' is invalid or out of range for property 'connection-speed' of type 'guint' libva: libva version 0.31.0 Xlib: extension "XFree86-DRI" missing on display ":0.0". libva: va_getDriverName() returns 0 libva: Trying to open /usr/lib/dri/nvidia_drv_video.so libva error: /usr/lib/dri/nvidia_drv_video.so init failed libva: va_openDriver() returns -1 Segmentation fault It seems I'm getting the wrong nvidia_drv_video.so. What should I do? If I uninstall it, everything works fine, but I want to use this VA-API backend for GPU video decoding when I run GStreamer-based apps. PS: I use Ubuntu Lucid 64-bit on an MSI CR 400 notebook: Intel Core 2 Duo, Nvidia 8200M

    Read the article

  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before and don't accurately represent the rotated/animated model.) So my question is, how can I calculate the bounding box using the armature matrices and rotation/translation matrices for each model? Keep in mind the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
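    One common CPU-side answer is to keep a small set of bind-pose boxes (for example one per bone, built from the vertices that bone influences) and, each frame, transform their corners with the same bone matrices the shader uses, then take the min/max of the results. The sketch below assumes a GLM-style math library; the BoneBox contents, the handling of blend weights and the exact matrix layout are simplifications, so treat it as an approximation rather than a drop-in solution.

        #include <glm/glm.hpp>
        #include <limits>
        #include <vector>

        struct Aabb { glm::vec3 min, max; };

        // bindPoseBoneBoxes: one bind-pose box per bone; boneMatrices: the per-frame
        // animation matrices that are already uploaded to the shader.
        Aabb animatedBounds(const std::vector<Aabb>& bindPoseBoneBoxes,
                            const std::vector<glm::mat4>& boneMatrices,
                            const glm::mat4& modelMatrix)
        {
            glm::vec3 lo( std::numeric_limits<float>::max());
            glm::vec3 hi(-std::numeric_limits<float>::max());

            for (std::size_t b = 0; b < bindPoseBoneBoxes.size(); ++b) {
                const Aabb& box = bindPoseBoneBoxes[b];
                const glm::mat4 m = modelMatrix * boneMatrices[b];

                for (int i = 0; i < 8; ++i) {           // 8 corners of this bone's box
                    glm::vec3 c((i & 1) ? box.max.x : box.min.x,
                                (i & 2) ? box.max.y : box.min.y,
                                (i & 4) ? box.max.z : box.min.z);
                    glm::vec3 p = glm::vec3(m * glm::vec4(c, 1.0f));
                    lo = glm::min(lo, p);
                    hi = glm::max(hi, p);
                }
            }
            return { lo, hi };
        }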

    Read the article

  • Bumblebee : Module bbswitch could not be loaded

    - by StepTNT
    I've upgraded to 12.04 and had to switch from Ironhide to the latest version of Bumblebee. Now, when I try to run bumblebeed, I get this error: FATAL: Module bbswitch not found. [ERROR]Module bbswitch could not be loaded (timeout?) [WARN]No switching method available. The dedicated card will always be on. I don't really need to use the secondary VGA on Kubuntu, so I would like to find a way to shut the discrete GPU down for good and avoid wasting battery. I can't disable it from the BIOS because I use it on Windows. My card is an NVIDIA 540M.

    Read the article

  • How do I make Powertop changes permanent?

    - by arno
    I'm on a Compaq 615 and its fan is loud as hell. There's not much you can do about that, but I'm trying to keep the CPU/GPU as cool as possible. This is what Powertop has to say: If I change all of them to "good", the changes don't survive a reboot. Also, upon exiting Powertop I get this: Loaded 8 prior measurements Cannot load from file /var/cache/powertop/saved_parameters.powertop Leaving PowerTOP I added the line to the "grub" file as suggested here. Upon closing gedit I get this: (gedit:2728): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Datei »/root/.local/share/recently-used.xbel.9CIMAW« konnte nicht angelegt werden: Datei oder Verzeichnis nicht gefunden The part in German says: could not be created: file or directory not found.

    Read the article

  • Internet Explorer 9 At MIX10

    Check out this great Internet Explorer 9 (IE9) video interview from this year's MIX10 conference. John Hrvatin is a Lead Microsoft PM on the IE9 project. John's a smart guy who patiently answered all my questions about IE9. Thanks John! I highly recommend watching this video to see why IE9 sounds so exciting. In the video, John demos IE9 and openly discusses: IE9 features and performance, HTML5 support, a live IE9 demo, and the new IE9 JavaScript engine (JIT, multicore, GPU-powered).

    Read the article

  • Installed nvidia driver, activated it, and now Unity is gone. No bars, menus, nothing

    - by Noel
    I installed the NVIDIA driver (the ubuntu-x-swat one), updated it, got the updates for it, and installed Bumblebee. I restarted every time I did those steps, so no, I don't simply need to 'restart X'. I tried to run things using Bumblebee, but Bumblebee reported that it "can't access GPU driver". So I ran nvidia-settings, and it said the drivers weren't in use, so I ran "sudo nvidia-xconfig" and then restarted. Now my login screen looks different than it did before: it asks me if I want to load "GNOME, GNOME - no effects, Cairo Dock - GNOME, System Default, or Ubuntu" when I log in, but WORST OF ALL: I no longer have any kind of GNOME/Unity GUI. There are no title bars above any windows, no close/minimize/maximize buttons. The Unity bar is gone and will not show up when I call it, and the top status bar is also no longer there.

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to use to check depth against to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
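    For arbitrary float data (rather than normalized depth), a single-channel float texture is the usual route. A hedged sketch, assuming a GL 3.x context (GL_R32F; older hardware used extensions such as GL_LUMINANCE32F_ARB instead) and reusing the 512x512 smap array from the question:

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        // Values are stored as 32-bit floats, not clamped to [0,1] the way
        // fixed-point GL_DEPTH_COMPONENT formats are.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, smap);

    In the fragment shader the value then comes back unmodified via texture(depthMap, uv).r.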

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person/ego-perspective game I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but on screen sees only/mostly the inside of the box, or vice versa, looks from within the box to the outside). The "casual" way of performing such a check would involve frustum culling, but such an approach would be hard to achieve with my given set of engine parameters, which I'd like to avoid if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on the CPU, not GPU side): camera world position, camera direction, camera FOV, and two box corner world coordinates (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far planes, or say 6 planes defining the frustum) or any specific pixel information (UV, view-space position, depth or the like). What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
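    With only the listed inputs, one cheap approximation is to rebuild a view-projection matrix from the camera position, direction and FOV, project the eight box corners, and measure the clipped 2D bounding rectangle in normalized device coordinates. This is a hedged sketch (GLM-style math; the up vector, aspect ratio and near/far planes are assumptions), and it over-estimates coverage because it uses the corners' rectangle rather than the exact silhouette:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <limits>

        float screenCoverage(glm::vec3 camPos, glm::vec3 camDir, float fovYRadians,
                             float aspect, glm::vec3 boxMin, glm::vec3 boxMax)
        {
            // Assumed: world up is +Y, near/far planes are placeholders.
            glm::mat4 view = glm::lookAt(camPos, camPos + glm::normalize(camDir),
                                         glm::vec3(0.0f, 1.0f, 0.0f));
            glm::mat4 vp = glm::perspective(fovYRadians, aspect, 0.1f, 1000.0f) * view;

            glm::vec2 lo( std::numeric_limits<float>::max());
            glm::vec2 hi(-std::numeric_limits<float>::max());
            for (int i = 0; i < 8; ++i) {
                glm::vec4 corner((i & 1) ? boxMax.x : boxMin.x,
                                 (i & 2) ? boxMax.y : boxMin.y,
                                 (i & 4) ? boxMax.z : boxMin.z, 1.0f);
                glm::vec4 clip = vp * corner;
                if (clip.w <= 0.0f) return 1.0f;   // corner behind the camera: treat as full coverage
                glm::vec2 ndc = glm::vec2(clip) / clip.w;
                lo = glm::min(lo, ndc);
                hi = glm::max(hi, ndc);
            }
            // Clip the corner rectangle to the [-1,1] NDC square; its area relative to 4 is the coverage.
            lo = glm::clamp(lo, glm::vec2(-1.0f), glm::vec2(1.0f));
            hi = glm::clamp(hi, glm::vec2(-1.0f), glm::vec2(1.0f));
            glm::vec2 ext = glm::max(hi - lo, glm::vec2(0.0f));
            return (ext.x * ext.y) / 4.0f;
        }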

    Read the article

  • Multiple volumetric lights

    - by notabene
    I recently read the GPU Gems 3 article Volumetric Light Scattering as a Post-Process. I like the idea of adding a volumetric light property to the real-time renderer I'm working on. The question is: will it work for multiple lights? Our renderer uses one render pass per light and uses additive blending to sum incoming light. I'm mostly convinced that it should work fine. Do you agree? Maybe there could be a problem where light rays cross each other.

    Read the article

  • Video capture Performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) which read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies proportionally with the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library to simply grab frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything. Cheese: low light ~12% CPU, bright light ~63% CPU. Camorama: low light ~7% CPU, bright light ~30% CPU. OpenCV C++ library (display in a single highgui window): low light ~13% CPU, bright light ~40% CPU (same test on Windows 7: 4-9%). Mplayer: no problem, 1-2% regardless of light levels. Note: if all I wanted to do was capture a feed from my webcam I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.
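    For reference, a minimal sketch of the kind of OpenCV grab-and-display test described above (C++ API, device 0, Esc to quit); running this while watching top is a quick way to confirm the load comes from capture and conversion rather than from rendering:

        #include <opencv2/opencv.hpp>

        int main()
        {
            cv::VideoCapture cap(0);          // first webcam
            if (!cap.isOpened())
                return 1;

            cv::Mat frame;
            for (;;) {
                cap >> frame;                 // grab and decode one frame
                if (frame.empty())
                    break;
                cv::imshow("webcam", frame);  // single highgui window
                if (cv::waitKey(1) == 27)     // Esc
                    break;
            }
            return 0;
        }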

    Read the article

  • Timing Calculations for Opengl ES 2.0 draw calls

    - by Arun AC
    I am drawing a cube in OpenGL ES 2.0 on Linux and calculating the time taken for each frame using the function below: #define NANO 1000000000 #define NANO_TO_MICRO(x) ((x)/1000) uint64_t getTick() { struct timespec stCT; clock_gettime(CLOCK_MONOTONIC, &stCT); uint64_t iCurrTimeNano = (1000000000 * stCT.tv_sec + stCT.tv_nsec); // in nanoseconds uint64_t iCurrTimeMicro = NANO_TO_MICRO(iCurrTimeNano); // in microseconds return iCurrTimeMicro; } I am running my code for 100 frames with a simple x-axis rotation and getting around 200 to 220 microseconds per frame. Does that mean I am getting around 4545 FPS (1 / 220 microseconds)? Is my GPU really that fast? I strongly doubt this result. What went wrong in the code? Regards, Arun AC
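    One likely explanation is that glDraw* calls only queue commands, so timestamps taken around them measure command submission rather than GPU execution, which would account for an implausible ~4500 FPS. A hedged sketch of a more honest measurement (drawCube() stands in for the question's draw calls; the commented EGL line is the alternative of timing across the buffer swap):

        uint64_t t0 = getTick();
        drawCube();                 // the glDrawArrays/glDrawElements calls being measured
        glFinish();                 // block until the GPU has actually finished the frame
        // eglSwapBuffers(display, surface);   // alternative: time across the buffer swap
        uint64_t t1 = getTick();
        // (t1 - t0) now includes the GPU's execution time, in microseconds.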

    Read the article

  • Problems after installing Ubuntu 11.10

    - by Andrew Orr
    I'm having trouble with Ubuntu 11.10. It has to do with nomodeset. After I boot into Ubuntu, it goes to a purple screen for about 10 seconds and then goes blank. After that nothing happens. I've read other people's questions about this and I know it has to do with enabling nomodeset. This worked for me when I was using the LiveCD mode, but now Ubuntu is permanently installed as a dual-boot system. Going into recovery mode doesn't work, pressing "e" in the boot loader and writing nomodeset after quiet splash doesn't work either. Holding shift any time it's booting doesn't work. I don't know what to do anymore. I have an HP Pavilion dv6 laptop with an AMD A6-3400M CPU, and my GPU is an AMD Radeon HD 6520G. I've never worked with Linux before so taking me through this step-by-step would be great. Thanks!

    Read the article

  • Unity Desktop Displays strange lines

    - by Alex Holsgrove
    I didn't quite know what title to give this problem, but hopefully the screenshot will explain more. I am running a Samsung R60+ laptop on Ubuntu 13.10 with a Radeon X1250 GPU. After I log in and the Unity desktop shows, I can see these strange lines at the top of the screen. I presumed it was perhaps a driver issue and found this article to see if I could resolve it: https://help.ubuntu.com/community/RadeonDriver I cannot get on with Unity at all (where have all the menus gone!), so perhaps reverting back to GNOME may be a solution in my case? I'd welcome any ideas, please.

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping the effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process? And put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would: change the effect on every object, calculating a unique color to identify it and providing it to the shader; draw all the objects to a render target in memory; get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
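    For context on the picking pass itself, the per-object colour is usually just the object's index packed into RGB, written by the picking shader and decoded again from the pixel read back from the render target. A hedged sketch (names are illustrative, and the packing assumes an 8-bit-per-channel target); the ID bookkeeping is negligible next to the extra render pass, as the question already suspects:

        #include <cstdint>

        struct PickColor { float r, g, b; };          // 0..1, passed to the picking shader

        PickColor encodePickId(std::uint32_t id)      // id < 2^24
        {
            return { ((id >> 16) & 0xFF) / 255.0f,
                     ((id >>  8) & 0xFF) / 255.0f,
                     ( id        & 0xFF) / 255.0f };
        }

        std::uint32_t decodePickId(std::uint8_t r, std::uint8_t g, std::uint8_t b)
        {
            return (std::uint32_t(r) << 16) | (std::uint32_t(g) << 8) | b;
        }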

    Read the article

  • Disable ATI Radeon graphics card and use intel graphics (switcheroo unavailable)

    - by user92356
    So I have an HP Envy with ATI Radeon 5450 + Intel switchable graphics. I think (though I'm not sure) that the Radeon is running right now on Ubuntu 12.04, because my laptop is making too much noise when I'm doing something non-GPU-intensive like word processing or web browsing. So what I want to do is disable the ATI Radeon and use the Intel instead. I looked around and it seems all the solutions use switcheroo, but I don't have it on my computer! I think this happened because I tried installing the proprietary driver (fglrx). Any and all help is 200% appreciated, thank you

    Read the article

  • can't see myself in Skype video call

    - by seb
    I'm running 12.04 and I've installed Skype via the Software Centre. As with 11.10, everything works fine with 12.04. There is only one thing that does not work: I can't see myself in Skype video calls. The video call itself works fine; I can see the other side and the other side can see me. The built-in microphone works. If I click on 'show myself' during the video call, nothing happens. I know that it works on Ubuntu in general, as I had it working a while back on a different machine (Xubuntu 11.04). Could that be related to the GPU? I'm now on an Intel/NVIDIA one. Any ideas where I can hunt for some options or tweaks?

    Read the article

  • CUDA instructions ask to stop GDM but it doesn't exist

    - by Gabs
    I am trying to install and run some CUDA examples in Ubuntu 12.04. First of all, I downloaded all the .run files from http://developer.nvidia.com/cuda-downloads, then followed the instructions at http://developer.nvidia.com/nvidia-gpu-computing-, until I got hung up on the first step: Exit the GUI if you are in a GUI environment by pressing Ctrl-Alt-Backspace. Some distributions require you to press this sequence twice in a row; others have disabled it altogether in favor of a command such as sudo /etc/init.d/gdm stop. Still others require changing the system runlevel using a command such as /sbin/init 3 to exit the GUI. When I type the command sudo /etc/init.d/gdm stop, it returns: gdm command not found Can anybody help me exit my GUI in order to continue? Thank you in advance.

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, and I can't use the scene graph's ability to sort transparent nodes. The system node should be properly sorted, though. Particle positions are computed on the shader from initial velocity, acceleration and time; in order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?
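    One widely used compromise, rather than true order-independent transparency: if the particles can live with additive (glow-style) blending, the blend operation itself is commutative and no sort is needed at all. A hedged GLES 2.0-style sketch of that state setup; for genuinely semi-transparent, non-glow particles this changes the look, so it is a trade-off rather than a fix:

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // additive: draw order no longer matters
        glDepthMask(GL_FALSE);               // still depth-test, but do not write depth
        // ... draw the particle mesh ...
        glDepthMask(GL_TRUE);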

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data should go anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and need to write to only the specified ones afterwards. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • ACER Aspire V5-171 compatibility

    - by JamerTheProgrammer
    I'm thinking about buying a V5-171 with an i3 in it. I'm worried about Secure Boot, though; I heard some people can't turn it off and it won't work... I'm not shy about opening it up and replacing the hard drive with one with Ubuntu preinstalled. I'm also worried about the Wi-Fi working; I have heard it's been dropping out for people quite a bit, along with the trackpad not working. I don't mind replacing the Wi-Fi card inside (if that's even possible?). Is the GPU (HD 4000, I think) supported in Ubuntu with full video acceleration? Thanks!

    Read the article
