Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 4/39 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Recommendations for Open Source Parallel programming IDE

    - by Andrew Bolster
    What are the best IDEs, IDE plugins, tools, etc. for programming with CUDA / MPI etc.? I've been working in these frameworks for a short while, but I feel like the IDE could be doing more heavy lifting in terms of scaling and job-processing interactions. (I usually use Eclipse or NetBeans, usually in C/C++ with occasional Java; it's a vague question, but I can't think of a more specific way to put it.)

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system. http://create.msdn.com/en-US/education/catalog/sample/particle_3d I have a function such that when I press Space, all the particles have their positions and velocities set to 0. for (int i = 0; i < particles.GetLength(0); i++) { particles[i].Position = Vector3.Zero; particles[i].Velocity = Vector3.Zero; } However, when I press Space, the particles are still moving. If I go to FireParticleSystem.cs I can set settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0). As I understand it, the problem is that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly, since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually, because I will later be implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.
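
    A minimal sketch of what moving the simulation onto the CPU can look like - the struct and method names below are illustrative assumptions, not the sample's own API. The idea is to integrate velocity and position yourself each frame (which is where a wind force would slot in) and then copy the results into a DynamicVertexBuffer before drawing:

        using Microsoft.Xna.Framework;   // Vector3

        // Illustrative CPU-side particle; the XNA sample keeps this data in GPU vertices instead.
        struct CpuParticle
        {
            public Vector3 Position;
            public Vector3 Velocity;
        }

        // Run once per frame, then upload the positions to a DynamicVertexBuffer for drawing.
        static void UpdateParticles(CpuParticle[] particles, Vector3 gravity, Vector3 wind, float dt)
        {
            for (int i = 0; i < particles.Length; i++)
            {
                particles[i].Velocity += (gravity + wind) * dt;   // extra forces are now trivial to add
                particles[i].Position += particles[i].Velocity * dt;
            }
        }

    With the CPU owning the data every frame, resetting positions to zero in the Space-bar handler takes effect immediately, at the cost of re-uploading the vertex data each frame.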

    Read the article

  • How to disable discrete GPU using NVIDIA drivers?

    - by penzoiders
    I have a Dell Studio XPS 13 (aka 1340). As of 12.04 most things run smoothly out of the box, but I have some power-drain and warmth issues (if they are not to be called terrible heat issues). The system came with an NVIDIA GeForce 9500M (which has Hybrid SLI), and it shows up in "lspci" as these 2 cards: 02:00.0 VGA compatible controller: NVIDIA Corporation G98 [GeForce 9200M GS] (rev a1) 03:00.0 VGA compatible controller: NVIDIA Corporation C79 [GeForce 9400M G] (rev b1) I had to install nvidia-current over the nouveau driver because nouveau freezes the system after suspend. Installing nvidia-current and running nvidia-xconfig fixes the resume process after suspend. However, with both nvidia-current and nouveau the system drains a lot of battery and heats up a lot. I suppose this is because the discrete GPU is always on. I don't really need 3D graphics on this system, beyond the minimum needed to run Unity and Compiz for window management. So my question is: how do I disable, using nvidia-current, the discrete GPU (9200M) and use only the integrated one (9400M)? Notes: In the BIOS I have no option to disable the discrete GPU. This, I think, is not applicable because of the suspend-freeze issue (with nouveau): https://help.ubuntu.com/community/HybridGraphics I've found this, but I don't know which --sli option I should choose to fit my needs: http://manpages.ubuntu.com/manpages/hardy/man1/nvidia-xconfig.1.html My system has neither Optimus nor CUDA, but can anyone tell me whether Bumblebee would work for me?

    Read the article

  • Android SDK: performance improvements for the emulator with GPU support

    Android SDK: performance improvements for the emulator with GPU support. Google has just published an update to the Android development kit (SDK). At the heart of this new version are performance improvements and new features for the Android emulator, addressing the slowness of an environment that no longer kept pace with the newer versions of the OS. The tool that lets developers test their Android applications on a desktop computer now supports the GPU for Android 4.X. This new feature will make it possible to take advantage of hardware acceleration for a more realistic simulation of...

    Read the article

  • Information about rendering, batches, the graphics card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if the theory could be related to that. So what I actually want to know is: what happens when and where (CPU/GPU) when you do specific draw actions? What is a batch? What influence do effects, projections etc. have? Is data persisted on the graphics card or is it transferred over every step? When there's talk about bandwidth, are you talking about a graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens - that's the GPU's business - I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)
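
    For a concrete feel of what a "batch" means in XNA terms, here is a small, illustrative sketch (the sprites collection and its fields are hypothetical, not from any particular sample): roughly speaking, each Draw*Primitives call the device ends up issuing is one batch, and SpriteBatch keeps the batch count down by sorting and grouping sprites that share a texture before submitting them.

        // Inside Draw(), with a SpriteBatch created in LoadContent(); 'sprites' is a hypothetical list.
        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        foreach (var sprite in sprites)
            spriteBatch.Draw(sprite.Texture, sprite.Position, Color.White);
        spriteBatch.End();   // sorting, render-state changes and vertex uploads happen here on the CPU;
                             // the GPU then receives roughly one draw call (batch) per texture group

    The CPU-side cost you pay per batch (state changes, constant/effect updates, vertex submission) is the overhead that precedes the actual GPU drawing, which is why fewer, larger batches generally help.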

    Read the article

  • .NET access to the GPU for compute purposes

    - by Daniel Moth
    In the distant past I talked about GPGPU and Microsoft's then approach of DirectCompute. Since then, of course, we now have C++ AMP coming out with Visual Studio 11, so there is a mainstream, easier way for developers to access the GPU for compute purposes, using C++. The question occasionally arises of how a .NET developer can access the GPU for compute purposes from their C# (or VB) code. The answer is by interoping from the managed code to a native DLL and, in the native DLL, using C++ AMP. As a long-term .NET developer myself, I can tell you this is straightforward. Sure, there could have been a managed wrapper for C++ AMP, but honestly that is the reason we have interop - it doesn't make much sense to invest resources to solve a problem that is already solved (most developer customers would prefer investments in other areas of Visual Studio!). Besides, interoping from C# to C++ is much easier than interoping to some of the other, older approaches to GPGPU programming ;-) To help you get started with the interop approach, Igor Ostrovsky has previously shared the "Hello World" version of interoping from C# to C++ AMP in his blog post: How to use C++ AMP from C# …we were then asked specifically about how to interop from C# to C++ AMP in a Metro style application on Windows 8, so Igor delivered again with this post: How to use C++ AMP from C# using WinRT Have fun! Comments about this post by Daniel Moth welcome at the original blog.
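
    The shape of the interop is roughly the following sketch. The DLL name, export name and signature here are made up for illustration (Igor's posts linked above show a real, complete version); the pattern is simply P/Invoke from C# into a native DLL whose exported function runs the C++ AMP kernel internally:

        using System;
        using System.Runtime.InteropServices;

        class Program
        {
            // Hypothetical export from a native DLL built with C++ AMP; internally it would
            // copy the array to the GPU, run a parallel_for_each, and copy the results back.
            [DllImport("AmpWrapper.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern void square_array(float[] data, int length);

            static void Main()
            {
                var data = new float[] { 1f, 2f, 3f, 4f, 5f };
                square_array(data, data.Length);          // the GPU work happens inside the native DLL
                Console.WriteLine(string.Join(", ", data));
            }
        }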

    Read the article

  • Disable discrete AMD GPU

    - by Smajl
    My notebook has two graphics cards and it suffers from severe overheating after installing Ubuntu (no problem with Windows 7 on the same machine). I figured out that the problem may be the graphics card and I would like to disable the discrete one. I followed some tutorials on this topic (for example http://planetoss.com/articles/how-to-disable-the-discrete-amd-graphics-card-in-linux/). But the problem is that after executing the commands, nothing really happens and both GPUs are still running. Here is what I have done: smajl@smajl-mini:~$ sudo chown smajl /sys/kernel/debug/vgaswitcheroo/switch smajl@smajl-mini:~$ echo IGD > /sys/kernel/debug/vgaswitcheroo/switch smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch 0:IGD:+:DynPwr:0000:01:05.0 1:DIS-Audio: :Pwr:0000:02:00.1 2:DIS: :DynPwr:0000:02:00.0 smajl@smajl-mini:~$ echo OFF > /sys/kernel/debug/vgaswitcheroo/switch smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch 0:IGD:+:DynPwr:0000:01:05.0 1:DIS-Audio: :Pwr:0000:02:00.1 2:DIS: :DynPwr:0000:02:00.0 What am I missing here? Also, more on the overheating topic: 1) installed TLP, 2) updated the system, 3) set the power mode to "power save" ...and nothing helps. I tried the same thing with Linux Mint without success. Is there anything else to try if I manage to disable the second GPU and the problem persists? Otherwise I would have to go back to Windows in order not to melt my laptop... :-/

    Read the article

  • GPU-based procedural terrain borders?

    - by OnePie
    I'm working on a game that should preferably feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grasslands, forest etc...) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather the game map for a strategy game. Each 'area' will not take up that much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently, the 'tiles' are large enough to be visible (due to memory constraints mainly; we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm that is fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders and was fairly slow on the GPU to boot (ca. 12ms for me, that is without any lighting or shading and with very simple terrain texture shaders). So are there any practical known methods to solve this problem?

    Read the article

  • What are the GPU requirements for XNA 4.0?

    - by Nate Koppenhaver
    I tried to build a sample application using XNA, but I got an error saying that Pixel Shader 1.1 was required, so I got a used Radeon X300 GPU that supports pixel shaders. I tried to build it again, but I got another error saying "Your current graphics card does not support the XNA HiDef profile", and it would not build. Since that card seems not to be compatible, I guess I need to buy another one. What features should I look for to make sure it's compatible with XNA?
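
    For context while shopping: XNA 4.0 has exactly two profiles, Reach (roughly Shader Model 2.0-class hardware, which is where the X300 sits) and HiDef (roughly Direct3D 10-class hardware, which is what that sample demands), so a card advertised as DirectX 10-capable or newer should cover HiDef. You can also probe and fall back at runtime; a minimal sketch, assuming the standard Game template's GraphicsDeviceManager field named graphics:

        // In the Game subclass constructor, after creating the GraphicsDeviceManager.
        var adapter = GraphicsAdapter.DefaultAdapter;
        graphics.GraphicsProfile = adapter.IsProfileSupported(GraphicsProfile.HiDef)
            ? GraphicsProfile.HiDef
            : GraphicsProfile.Reach;   // HiDef-only samples may still need their shaders simplified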

    Read the article

  • Yellow Dog Enterprise Linux for GPU computing

    The H Open: "The Japanese Fixstars Corporation, which specialises in software for the Cell processors, has announced the release of Yellow Dog Enterprise Linux (YDEL) 6.2 for CUDA, the first enterprise Linux OS optimised for GPU computing."

    Read the article

  • The Chrome 10 beta is available with a new JavaScript engine and GPU acceleration

    The Chrome 10 beta is available, with a new JavaScript engine and GPU acceleration. Google has just made the Chrome 10 beta available to users. In this new version, Google further improves JavaScript execution speed with the introduction of a new version of its V8 JavaScript virtual machine, CrankShaft. CrankShaft brings a 66% increase in JavaScript execution on the V8 benchmark compared to the final release of Chrome 9. ...

    Read the article

  • vgaswitcheroo - switch GPU based on load?

    - by Primož Kralj
    Is it possible to make vgaswitcheroo switch between GPUs based on load? For example, when the computer is idle it would turn off the discrete GPU and use only the integrated one; when enough load accumulates, it would automatically turn on the discrete graphics and disable the integrated one. My second question is whether there is any GUI applet or similar that I could use to manually switch between GPUs, instead of manually echoing into the switch file. Edit: I found the answer to the last question at https://help.ubuntu.com/community/HybridGraphics

    Read the article

  • Generating and rendering non-point-like particles on the GPU

    - by TravisG
    Specifically I'm talking about particles as seen (for example) in the UE4 dev video here. They're not just points and seem to have a nice shape to them that follows their movement. Is it possible to create these kinds of particles (efficiently) completely on the GPU (perhaps through something like motion?). Or is the only (or most efficient) way to just create a small particle texture and render small quads for each particle?

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, even though nearly all other posters get good results. I'll start with hardware: Metabox W350ST notebook Intel Core i7 4700 16GB RAM GTX 765M (with Optimus) 128GB SSD 1TB SSHD My initial error output when trying to optirun a game is: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [133.973920] [ERROR]Aborting because fallback start is disabled. If anything else is needed to troubleshoot this just let me know. Adding bumblebee.conf: # Configuration file for Bumblebee. Values should **not** be put between quotes ## Server options. Any change made in this section will need a server restart # to take effect. [bumblebeed] # The secondary Xorg server DISPLAY number VirtualDisplay=:8 # Should the unused Xorg server be kept running? Set this to true if waiting # for X to be ready is too long and don't need power management at all. KeepUnusedXServer=false # The name of the Bumbleblee server group name (GID name) ServerGroup=bumblebee # Card power state at exit. Set to false if the card shoud be ON when Bumblebee # server exits. TurnCardOffAtExit=false # The default behavior of '-f' option on optirun. If set to "true", '-f' will # be ignored. NoEcoModeOverride=false # The Driver used by Bumblebee server. If this value is not set (or empty), # auto-detection is performed. The available drivers are nvidia and nouveau # (See also the driver-specific sections below) Driver=nvidia # Directory with a dummy config file to pass as a -configdir to secondary X XorgConfDir=/etc/bumblebee/xorg.conf.d ## Client options. Will take effect on the next optirun executed. [optirun] # Acceleration/ rendering bridge, possible values are auto, virtualgl and # primus. Bridge=auto # The method used for VirtualGL to transport frames between X servers. # Possible values are proxy, jpeg, rgb, xv and yuv. VGLTransport=proxy # List of paths which are searched for the primus libGL.so.1 when using # the primus bridge PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus # Should the program run under optirun even if Bumblebee server or nvidia card # is not available? AllowFallbackToIGC=false # Driver-specific settings are grouped under [driver-NAME]. The sections are # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto- # detection resolves to NAME). 
# PMMethod: method to use for saving power by disabling the nvidia card, valid # values are: auto - automatically detect which PM method to use # bbswitch - new in BB 3, recommended if available # switcheroo - vga_switcheroo method, use at your own risk # none - disable PM completely # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods ## Section with nvidia driver specific options, only parsed if Driver=nvidia [driver-nvidia] # Module name to load, defaults to Driver if empty or unset KernelDriver=nvidia PMMethod=auto # colon-separated path to the nvidia libraries LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current # comma-separated path of the directory containing nvidia_drv.so and the # default Xorg modules path XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules XorgConfFile=/etc/bumblebee/xorg.conf.nvidia ## Section with nouveau driver specific options, only parsed if Driver=nouveau [driver-nouveau] KernelDriver=nouveau PMMethod=auto XorgConfFile=/etc/bumblebee/xorg.conf.nouveau DRIVER VERSION - Output of jockey-text -l: nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • NVIDIA error "fallen off the bus"

    - by yurividal
    I have been having a serious issue with my LG notebook and its NVIDIA GeForce 310M GPU. It usually (99% of the time) happens when I leave the computer idle for a while, but it has also happened sometimes while I was using the PC. Suddenly (usually when the computer is idle) the screen goes black, and the PC freezes completely on the black screen (not even ping responses). The only solution is to hard-reset the machine. When analyzing the syslog, I see the following error: Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510073] NVRM: GPU at 0000:01:00.0 has fallen off the bus. Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510087] NVRM: GPU at 0000:01:00.0 has fallen off the bus. Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510157] delay: estimated 354, actual 1 Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510173] delay: estimated 353, actual 0 I have already tried different versions of the NVIDIA drivers, and also tried removing each of my 2 DDR3 memory modules. The problem does not seem to be hardware, because when I boot into Windows 7 it works normally, for days. I am desperate with this problem, because it makes my Ubuntu practically unusable. Thanks in advance, Yuri

    Read the article

  • Drawing a textured triangle with CPU instead of GPU

    - by Jenko
    I understand the benefits of GPU rendering and such, but for a certain limited application I need to render textured triangles purely using the CPU. I've built a 3D engine capable of object handling, transforms, projection, culling and the like... now all I need is a little code snippet that draws a single textured triangle onto a bitmap... any language accepted! Inputs: texture bitmap, triangle U/V/W coords, triangle X/Y screen coords. Output: the textured triangle drawn at the given screen coords. I've currently been using a platform function to draw triangles to the screen, but I'm looking to handle it myself to speed up the process.
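
    Since any language is accepted, here is a minimal affine (not perspective-correct) rasterizer sketch in C#: it walks the triangle's bounding box, uses barycentric weights to test coverage and interpolate U/V, and does a nearest-neighbour texture fetch. The int[] ARGB buffers and parameter layout are assumptions made for the example, not the poster's engine:

        using System;

        static class SoftwareRaster
        {
            // screen: width*height ARGB pixels; tex: texW*texH ARGB pixels; UVs in [0,1].
            public static void DrawTexturedTriangle(
                int[] screen, int width, int height, int[] tex, int texW, int texH,
                float x0, float y0, float u0, float v0,
                float x1, float y1, float u1, float v1,
                float x2, float y2, float u2, float v2)
            {
                // Signed (doubled) triangle area; zero means a degenerate triangle.
                float area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
                if (area == 0f) return;

                // Bounding box clipped to the screen.
                int minX = Math.Max(0, (int)Math.Floor(Math.Min(x0, Math.Min(x1, x2))));
                int maxX = Math.Min(width - 1, (int)Math.Ceiling(Math.Max(x0, Math.Max(x1, x2))));
                int minY = Math.Max(0, (int)Math.Floor(Math.Min(y0, Math.Min(y1, y2))));
                int maxY = Math.Min(height - 1, (int)Math.Ceiling(Math.Max(y0, Math.Max(y1, y2))));

                for (int y = minY; y <= maxY; y++)
                for (int x = minX; x <= maxX; x++)
                {
                    // Barycentric weights of the pixel; all non-negative means it is inside.
                    float w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area;
                    float w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area;
                    float w2 = 1f - w0 - w1;
                    if (w0 < 0f || w1 < 0f || w2 < 0f) continue;

                    // Interpolate texture coordinates and sample (nearest neighbour, clamped).
                    float u = w0 * u0 + w1 * u1 + w2 * u2;
                    float v = w0 * v0 + w1 * v1 + w2 * v2;
                    int tx = Math.Min(texW - 1, Math.Max(0, (int)(u * (texW - 1))));
                    int ty = Math.Min(texH - 1, Math.Max(0, (int)(v * (texH - 1))));
                    screen[y * width + x] = tex[ty * texW + tx];
                }
            }
        }

    For perspective-correct mapping you would interpolate u/w, v/w and 1/w (the third "W" coordinate mentioned in the inputs) and divide per pixel; the affine version above is the simplest starting point.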

    Read the article

  • Ubuntu can't be installed as it says GPU lock

    - by piyush bhardwaj
    I have Win 7 running on my system and I just want to move to Ubuntu 12.10 desktop version, 36 bit, but when I try to install, the CD gets loaded and the start screen comes up, but suddenly text lines appear and the screen goes black. The text is: [189.463934] [drm] [nouveau] 0000:02:00.0: GPU LOCK - switching to software fbcon [194.248014] [drm] [nouveau] 0000:02:00.0: failed to idle channel 1 [199.504009] [drm] [nouveau] 0000:02:00.0: failed to idle channel 3 and it goes on like this. I am new to the Linux system so I don't know much about it, so I'd appreciate any help solving my problem. My laptop is from HCL and it has 3 GB of RAM and a 350 GB hard disk with an NVIDIA graphics card. It has a 2 GHz Pentium Dual-Core and was manufactured in 2009. If anybody can help I would be grateful and would be able to move to Ubuntu.

    Read the article

  • /usr/share/apport/apport-gpu-error-intel.py Error

    - by gopherballs
    I was on Ubuntu 12.10 and ran the system updates, and after the reinstall the system became unstable, repeatedly prompting me with a "System Issue" message and "/usr/share/apport/apport-gpu-error-intel.py" as the error reported by Ubuntu. After I tried restarting, the issue still appeared and the system ultimately froze (requiring a hard reset). I tried a clean install of 12.10 and the system works fine until the newest updates are applied... it then returns to the same messages. I then tried a clean install of 12.04 LTS, and just like 12.10 it works until the latest updates are applied. On 12.04, I tried to use the previous kernel, going from 3.5.0-26 to 3.5.0-23, and the issue is still there. Currently, I'm sitting on a clean install of 12.04 LTS without doing the updates, and the system is working. Does anyone have an idea how to fix this? Thanks.

    Read the article

  • GPU On-the-fly encoding of video through Logitech HD Webcam C510

    - by Ashfame
    Originally asked here, but I edited that to move this question into a separate one on the suggestion of a fellow member. I read that I should have a Core 2 Duo 2.2GHz for 720p, but I have a 2.0GHz one. Would it be possible for me to record first and then encode after recording, if my processor really starts giving issues when doing on-the-fly encoding? I also have an ATI HD 4850 512MB card; can it help with encoding on the fly, or is there a chance that my graphics card alone can handle it and those specs were just for a system without a graphics card? I believe so. Also, I have no worries about dealing with the console if I have to do some of the things above in a terminal. Other possibly significant details: I have a dual-screen setup, 29" (1360x768) & 22" (1680x1050), which might be drawing a fair amount of power from the GPU, and I have 2GB of DDR2 800MHz RAM.

    Read the article

  • Asus 1215n GPU driver/s don't give me a "full" OS experience

    - by AFD
    I'm used to not having manufacturer-specific drivers on my laptop when running a Linux OS, and that has always been fine - there have been adequate FOSS drivers for my needs and it hasn't ruined my OS experience. When I bought an Asus 1215N, one of the upsides of the hardware seemed to be the switchable GPU that could give lots of performance or lots more battery life and would switch on the fly... with Windows, of course. It seems that the Nvidia drivers are crap and people advise against installing them. I have some sort of workaround for vga_switcheroo (?) and the on-the-fly nature of the GPUs has turned into a manual one :( The worst bit though (aside from shorter battery life) is the web experience with HTML5. If I visit Mozilla's Web O'Wonder site I'm told I don't have WebGL working due to driver issues. This really blows - is it possible that proprietary drivers can now ruin my web experience too?!

    Read the article
