Search Results

Search found 955 results on 39 pages for 'gpu acceleration'.

Page 9 of 39

  • PSU requirement question for my PC setup.

    - by user69474
    I understand that a PSU can be far more powerful than a build requires, but in my case I'm not so sure. Sometimes when I play games, my computer crashes and restarts itself about 10 minutes into the game. Once I received a message saying something like the power supply was overheating. I have a 500W PSU driving: 1x internal DVD writer, 1x 250GB SATA HD, 1x Nvidia 8500 GT, 2GB RAM. As I'm planning to add a second 250GB SATA HD, and given the crashes I've already experienced, I wonder if I need a bigger PSU as well. Should I upgrade to 650W, or is that excessive?
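
    (A rough back-of-envelope check, using typical vendor figures rather than measurements: a GeForce 8500 GT draws roughly 40 W, a mainstream CPU of that era 65-95 W, each SATA drive about 10 W, a DVD writer about 25 W, and motherboard, RAM and fans perhaps another 50-75 W, for a total around 200-250 W. A healthy 500 W unit therefore has ample headroom even with a second drive, so crashes under load more likely point to an overheating or failing unit than to insufficient wattage.)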

    Read the article

  • New graphics card (GTX 760) slowing down entire PC

    - by Cayetano Gonçalves
    My new graphics card is making my PC totally unusable. It boots up really slowly, and when the Windows screen comes on, the mouse lags far behind. Nothing opens at a normal speed. However, when I put in my 5-year-old graphics card, it all works fine. I'm currently using a Foxconn Renaissance LGA 1366 Intel X58 ATX motherboard, an Intel Core i7-950 Bloomfield 3.06GHz LGA 1366 130W processor, and an EVGA SuperNOVA 850G2 80PLUS Gold Certified ATX12V/EPS12V 850W power supply. I know it can't be the power supply, because I bought it just today to try to fix the problem. I've also installed the newest BIOS version available for my motherboard. I've also seen extreme variations in CPU usage while the new graphics card is in; with the old graphics card installed, it is much calmer. Any thoughts?

    Read the article

  • NVIDIA CUDA SDK Examples Compilation Unsupported Architecture 'compute_20'

    - by Andrew Bolster
    On compiling the CUDA SDK, I'm getting: nvcc fatal : Unsupported gpu architecture 'compute_20'. My toolkit is 2.3 on a shared system (i.e., I can't really upgrade), the driver version is also 2.3, and it's running on 4 Tesla C1060s. If it helps, the failure is triggered in radixsort. A few people online appear to have had this problem, but I haven't found anywhere that actually gives a solution.
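
    A hedged note, since none of the online threads seem to record a fix: the Tesla C1060 is compute capability 1.3, and a 2.3-era nvcc predates Fermi, so it cannot know compute_20; the flag is presumably injected by a makefile from a newer SDK. Grepping the SDK makefiles for compute_20 and retargeting the actual hardware should get past it, e.g. (the file name is illustrative):

      # build for the C1060's compute capability 1.3 instead of Fermi
      nvcc -arch=sm_13 -c radixsort.cu -o radixsort.o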

    Read the article

  • SLI Artifact issues

    - by sloughk
    When I enable SLI I keep getting artifacts in random games, and sometimes in Windows Aero. It doesn't seem to be overheating, as it happens randomly, and sometimes the whole computer just crashes (no blue screen; the display just goes black). My setup: GeForce 8800 GTS 320MB ( http://www.bfgtech.com/bfgr88512gtoce.aspx ) x 2, GIGABYTE EX58-UD4, Windows 7 RC, 800W PSU. I've got the latest NVIDIA drivers and the F5 BIOS update (the SLI update). When I disable SLI and use the cards separately, they work just fine, even in games. Any idea why this might happen?

    Read the article

  • 912 stream processors available in OpenCL

    - by tugrul büyükisik
    I am thinking of assembling this system: an AMD CPU (A8-3870 APU, which has a Radeon HD 6550D inside: 400 stream processors, xxx GFLOPS) for nearly $110; an AMD graphics card, the HD 7750 (512 stream processors, 819 GFLOPS peak performance) for nearly $170; appropriate RAM (1600MHz bus); and a mainboard. What GFLOPS level can I sustain using OpenCL and similar programs? Can I use all 912 stream processors at the same time? I am not trying to ask a versus question; I need to know what would be better for scientific computing (75% of the time) and gaming (25% of the time) because I have a low budget. By "scientific calculations" I mean fluid dynamics / solid-state physics simulations; by games I mean those that need OpenCL and PhysX.
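
    As a side note on whether both GPUs can be driven at once: OpenCL exposes the APU's HD 6550D and the discrete HD 7750 as separate devices (possibly on separate platforms), each needing its own command queue, and a single kernel launch will not span both; you partition the work yourself. A minimal sketch, assuming the AMD APP runtime is installed (error handling mostly omitted):

      /* Enumerate every GPU OpenCL device visible on the system. */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platforms[4];
          cl_uint n_platforms = 0;
          clGetPlatformIDs(4, platforms, &n_platforms);

          for (cl_uint p = 0; p < n_platforms; ++p) {
              cl_device_id devices[8];
              cl_uint n_devices = 0;
              if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                                 8, devices, &n_devices) != CL_SUCCESS)
                  continue;
              for (cl_uint d = 0; d < n_devices; ++d) {
                  char name[256];
                  clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                  sizeof(name), name, NULL);
                  printf("GPU device: %s\n", name); /* one queue per device */
              }
          }
          return 0;
      }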

    Read the article

  • Silverlight hardware-accelerated playback is greyed-out - How do I enable it?

    - by Not So Sharp
    I am trying to play Netflix videos (which only play via Silverlight), but they are choppy because Silverlight's hardware-accelerated playback is disabled. (Video playback in WMP11 and VLC is flawless, so I know beyond doubt that my built-in video card is perfectly capable of hardware-accelerated playback.) I have the latest and greatest Silverlight version: 5.1.10411.0. I tried to "un-grey-it-out" via the registry's GPUVideoDecodeEnabled and UpdateMode values, but that didn't help. Is there any way to "un-grey-it-out"?
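
    For reference, the registry override usually passed around for this is a DWORD under the Silverlight key; a hedged sketch of a .reg file (the exact key path is an assumption on my part, and on 64-bit Windows it may live under Wow6432Node instead):

      Windows Registry Editor Version 5.00

      ; assumed location of the Silverlight playback override
      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Silverlight]
      "GPUVideoDecodeEnabled"=dword:00000001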

    Read the article

  • How does one go about fixing a pixelation issue?

    - by Tyler
    I've been having a lot of trouble with pixelation and also with in-game display driver crashes. At first I thought the problem might be my graphics card (Sapphire 5750), but I have RMA'd it three times and I'm still having trouble. So now I think it might be my PSU: it's 650W, but it's made by Azza, so it was cheap and not from a particularly reputable company. Or maybe it's the CPU (AMD Phenom Black Edition 3.4GHz); it runs hot (60C) but passes Everest's test. Someone told me it could be the memory (8GB DDR3), but I don't really see how RAM has anything to do with my problem. So what are your suggestions? Here is a video I made a while ago of what is happening: http://www.metacafe.com/watch/4392213/ati_radeon_5750/ That was just a stress test; most games tend to simply freeze and crash, or lock my system up.

    Read the article

  • eVGA GTX 550 Ti 1GB Super OC or eVGA GTX460 SE 1GB SuperClocked OC [closed]

    - by Tusk
    I'm stuck between the 'eVGA GTX 550 Ti 1GB Super OC' and the 'eVGA GTX460 SE 1GB SuperClocked OC'. I know the 460 is better in performance, but does the 550 Ti have any newer features that make it better? I'll mostly use it for HD movies and HD gaming (Skyrim, L.A. Noire). I'd appreciate opinions about which card is better, and if you have one of these, please share some useful information. Thanks.

    Read the article

  • Monitor sometimes displays weird colors and lines when opening a website that contains a video

    - by WEEYE
    This happens about 1 in 50 times when I open a website that contains a video. I don't know if it happens with Flash video, but I am sure it has happened when I've opened an HTML5 video. When it happens I have about 2-3 seconds to press Alt + F4 (to close my browser). If I do it in time, the screen goes back to normal. If I wait too long, maybe 3+ seconds, the screen won't recover even when I press Alt + F4. However, when I'm stuck on the corrupted screen I can still do all sorts of things on the computer; the whole machine does not freeze, because if I press Win + L, for example, I hear the Windows logout sound. I have an HD7870 and Windows 7. http://tinypic.com/r/3519tf9/5

    Read the article

  • OpenACC: NVIDIA's parallel programming standard; accelerating hybrid CPU/GPU applications with directives

    OpenACC: the new parallel development standard from NVIDIA, for accelerating hybrid CPU/GPU applications more easily with directives. Together with Cray and PGI, and with the support of CAPS, NVIDIA has developed a new open standard for parallel programming. OpenACC is designed to let programmers easily exploit the transformative power of heterogeneous hybrid CPU/GPU (graphics processor) computing systems. It is aimed at programmers working in data analysis, artificial intelligence, and physics, among other scientific and technical fields.
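
    For a concrete picture of the directive style described here, a minimal sketch in C (the saxpy kernel and clause spellings follow the OpenACC 1.0 draft as published; building it assumes a PGI or Cray compiler with OpenACC support):

      /* The pragma asks the compiler to offload the loop to the
         accelerator and to manage the data movement itself. */
      void saxpy(int n, float a, const float *x, float *y)
      {
          #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
          for (int i = 0; i < n; ++i)
              y[i] = a * x[i] + y[i];
      }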

    Read the article

  • Linux 3.10 improves caching for SSDs and delivers better CPU and GPU performance; the stable version is available

    Linux 3.10 improves caching for SSDs and delivers better CPU and GPU performance; the stable version is available. As is customary, Linus Torvalds has announced the publication of the stable version of the Linux 3.10 kernel. This new release, arriving practically two months after its predecessor, stands out mainly for better handling of SSDs, Radeon support, and improvements for the CPU and GPU. In development for more than a year, the SSD caching technology "block layer cache" (Bcache) has been merged into Linux 3.10. This feature can be used to configure one disk as a cache for another disk…
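
    As a rough idea of how Bcache is driven from user space, a hedged sketch using bcache-tools (device names are illustrative, and the UUID placeholder must be filled in with the cache set's actual UUID):

      # format the slow backing disk and the SSD cache set
      make-bcache -B /dev/sdb
      make-bcache -C /dev/sdc
      # register both devices with the kernel
      echo /dev/sdb > /sys/fs/bcache/register
      echo /dev/sdc > /sys/fs/bcache/register
      # attach the cache set (by UUID) to the backing device
      echo <cset-uuid> > /sys/block/bcache0/bcache/attach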

    Read the article

  • What is the "don't force full GPU scaling" configuration in the newest Nvidia driver settings?

    - by wild_oscar
    I have a Zotac Ion board in a computer running Linux, and I had driver version 295.xx. I was trying to play a 1080p video in XBMC and, because playback was a bit sluggish, part of my (unsuccessful) attempt at making it run smoothly was to upgrade the Nvidia driver. The problem is that the computer is connected by HDMI to a Pioneer A/V system, which in turn is connected to the TV (also through HDMI). When I created this setup I didn't get any image on my TV until I unchecked "force full GPU scaling" under "Flat Panel Scaling". After upgrading the Nvidia driver I no longer get an image. Upon investigating, I saw in the release changelog: "Removed Flat Panel Scaling configurability in nvidia-settings. Any desired scaling can be configured through the new 'ViewPortIn' and 'ViewPortOut' MetaMode attributes." I know very little about this, however, so I'm lost. What would the correct configuration be with these new ViewPortIn/Out options to achieve the same result as unchecking "force full GPU scaling"?
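
    For reference, a hedged, untested sketch of the new syntax (the connector name DFP-0 and the resolution are illustrative): making ViewPortIn and ViewPortOut equal to the mode means the GPU performs no scaling, which should correspond to the old state with "force full GPU scaling" unchecked. In the Screen section of xorg.conf:

      Option "metamodes" "DFP-0: 1920x1080 { ViewPortIn=1920x1080, ViewPortOut=1920x1080+0+0 }"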

    Read the article

  • Which is faster: creating a detailed mesh before execution or tessellating?

    - by Nick Udell
    For simplicity of the problem, let's consider spheres. Say I have a sphere, and before execution I know its radius, its position, and its triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles). Generally, would it be faster to generate this sphere mesh beforehand and stream all 50k triangles to the graphics card, or to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU? Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as: access to global memory is slow (therefore shared memory should be used), and the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. The trick in both implementations seems to be the same: arrange the calculation in a grid (which it already is, in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai). It takes two bodies and an acceleration vector. The body vector has four components: its position and its mass. When reading the paper, the calculation is easy to understand. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and that the number of objects attached is different in each thread. How could I implement that without the threads' execution paths diverging? I'm also looking for literature that explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
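
    The usual answer is to flatten the per-object lists on the host into one contiguous index array plus an offsets array (a CSR-style layout), so each thread walks offsets[i]..offsets[i+1] instead of chasing pointers. A hedged sketch: bodyBodyInteraction is the device function from the paper, the other names are illustrative:

      /* Each thread accumulates the interactions of object i with its
         attached objects, read through the flattened CSR arrays. */
      __global__ void interactLists(const float4 *bodies, const int *neighbours,
                                    const int *offsets, float3 *accel, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float3 ai = make_float3(0.0f, 0.0f, 0.0f);
          for (int k = offsets[i]; k < offsets[i + 1]; ++k)
              ai = bodyBodyInteraction(bodies[i], bodies[neighbours[k]], ai);
          accel[i] = ai;
      }

    Threads in a warp then differ only in loop trip count; sorting objects by list length before the launch keeps warps roughly uniform and limits the divergence.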

    Read the article

  • How to synchronize CUDA threads when they are in the same loop and we need to synchronize them to ex…

    - by Vickey
    Hi all, I have written some code and now I want to implement it on a CUDA GPU, but I'm new to synchronization, so please help me with this; it's a little urgent for me. The code is below. I want LOOP 1 to be executed by all threads (hence I want that portion to take advantage of CUDA), while the remaining portion (everything other than LOOP 1) is to be executed by only a single thread.

      do {
          point_set = master_Q[(*num_mas) - 1].q;
          List* temp = point_set;
          List* pa = point_set;
          if (master_Q[num_mas[0] - 1].max)
              max_level = (int) (ceilf(il2 * log(master_Q[num_mas[0] - 1].max)));
          *num_mas = (*num_mas) - 1;
          while (point_set) {
              List* insert_ele = temp;
              while (temp) {
                  insert_ele = temp;
                  if ((insert_ele->dist[insert_ele->dist_index-1] <= pow(2, max_level-1)) || (top_level == max_level)) {
                      if (point_set == temp) {
                          point_set = temp->next;
                          pa = temp->next;
                      } else {
                          pa->next = temp->next;
                      }
                      temp = NULL;
                      List* new_point_set = point_set;
                      float maximum_dist = 0;
                      if (parent->p_index != insert_ele->point_index) {
                          List* tmp = new_point_set;
                          float *b = &(data[(insert_ele->point_index)*point_len]);
                          /* LOOP 1 */
                          while (tmp) {
                              float *c = &(data[(tmp->point_index)*point_len]);
                              float sum = 0.;
                              for (int j = 0; j < point_len; j += 2) {
                                  float d1 = b[j] - c[j];
                                  float d2 = b[j+1] - c[j+1];
                                  d1 *= d1;
                                  d2 *= d2;
                                  sum = sum + d1 + d2;
                              }
                              tmp->dist[tmp->dist_index] = sqrt(sum);
                              if (maximum_dist < tmp->dist[tmp->dist_index])
                                  maximum_dist = tmp->dist[tmp->dist_index];
                              tmp->dist_index = tmp->dist_index + 1;
                              tmp = tmp->next;
                          }
                          max_distance = maximum_dist;
                      }
                      while (new_point_set || insert_ele) {
                          List *far, *par, *tmp, *tmp_new;
                          far = NULL;
                          tmp = new_point_set;
                          tmp_new = NULL;
                          float level_dist = pow(2, max_level-1);
                          float maxdist = 0, maxp = 0;
                          while (tmp) {
                              if (tmp->dist[(tmp->dist_index)-1] > level_dist) {
                                  if (maxdist < tmp->dist[tmp->dist_index-1])
                                      maxdist = tmp->dist[tmp->dist_index-1];
                                  if (tmp == new_point_set) {
                                      new_point_set = tmp->next;
                                      par = tmp->next;
                                  } else {
                                      par->next = tmp->next;
                                  }
                                  if (far == NULL) {
                                      far = tmp;
                                      tmp_new = far;
                                  } else {
                                      tmp_new->next = tmp;
                                      tmp_new = tmp;
                                  }
                                  if (parent->p_index != insert_ele->point_index)
                                      tmp->dist_index = tmp->dist_index - 1;
                                  tmp = tmp->next;
                                  tmp_new->next = NULL;
                              } else {
                                  par = tmp;
                                  if (maxp < tmp->dist[(tmp->dist_index)-1])
                                      maxp = tmp->dist[(tmp->dist_index)-1];
                                  tmp = tmp->next;
                              }
                          }
                          if (0 == maxp) {
                              tmp = new_point_set;
                              aloc_mem[*tree_index].p_index = insert_ele->point_index;
                              aloc_mem[*tree_index].no_child = 0;
                              aloc_mem[*tree_index].level = max_level--;
                              parent->children_index[parent->no_child++] = *tree_index;
                              parent = &(aloc_mem[*tree_index]);
                              tree_index[0] = tree_index[0] + 1;
                              while (tmp) {
                                  aloc_mem[*tree_index].p_index = tmp->point_index;
                                  aloc_mem[(*tree_index)].no_child = 0;
                                  aloc_mem[(*tree_index)].level = master_Q[(*cur_count_Q)-1].level;
                                  parent->children_index[parent->no_child] = *tree_index;
                                  parent->no_child = parent->no_child + 1;
                                  (*tree_index)++;
                                  tmp = tmp->next;
                              }
                              cur_count_Q[0] = cur_count_Q[0] - 1;
                              new_point_set = NULL;
                          }
                          master_Q[*num_mas].q = far;
                          master_Q[*num_mas].parent = parent;
                          master_Q[*num_mas].valid = true;
                          master_Q[*num_mas].max = maxdist;
                          master_Q[*num_mas].level = max_level;
                          num_mas[0] = num_mas[0] + 1;
                          if (0 != maxp) {
                              aloc_mem[*tree_index].p_index = insert_ele->point_index;
                              aloc_mem[*tree_index].no_child = 0;
                              aloc_mem[*tree_index].level = max_level;
                              parent->children_index[parent->no_child++] = *tree_index;
                              parent = &(aloc_mem[*tree_index]);
                              tree_index[0] = tree_index[0] + 1;
                              if (maxp) {
                                  int new_level = ((int) (ceilf(il2 * log(maxp)))) + 1;
                                  if (new_level < (max_level-1))
                                      max_level = new_level;
                                  else
                                      max_level--;
                              } else
                                  max_level--;
                          }
                          if (0 == maxp)
                              insert_ele = NULL;
                      }
                  } else {
                      if (NULL == temp->next) {
                          master_Q[*num_mas].q = point_set;
                          master_Q[*num_mas].parent = parent;
                          master_Q[*num_mas].valid = true;
                          master_Q[*num_mas].level = max_level;
                          num_mas[0] = num_mas[0] + 1;
                      }
                      pa = temp;
                      temp = temp->next;
                  }
              }
              if ((*num_mas) > 1) {
                  List *temp2 = master_Q[(*num_mas)-1].q;
                  while (temp2) {
                      List* temp3 = master_Q[(*num_mas)-2].q;
                      master_Q[(*num_mas)-2].q = temp2;
                      if ((master_Q[(*num_mas)-1].parent)->p_index != (master_Q[(*num_mas)-2].parent)->p_index) {
                          temp2->dist_index = temp2->dist_index - 1;
                      }
                      temp2 = temp2->next;
                      master_Q[(*num_mas)-2].q->next = temp3;
                  }
                  num_mas[0] = num_mas[0] - 1;
              }
              point_set = master_Q[(*num_mas)-1].q;
              temp = point_set;
              pa = point_set;
              parent = master_Q[(*num_mas)-1].parent;
              max_level = master_Q[(*num_mas)-1].level;
              if (master_Q[(*num_mas)-1].max)
                  if (max_level > ((int) (ceilf(il2 * log(master_Q[(*num_mas)-1].max)))) + 1)
                      max_level = ((int) (ceilf(il2 * log(master_Q[(*num_mas)-1].max)))) + 1;
              num_mas[0] = num_mas[0] - 1;
          }
      } while (*num_mas > 0);
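
    A hedged suggestion: __syncthreads() only synchronizes the threads of one block, and on this hardware generation there is no cheap grid-wide barrier, so the usual pattern is to pull LOOP 1 out into its own kernel over a flattened array (one thread per list node) and keep the serial bookkeeping on the host, or in a single thread. A sketch with illustrative names, assuming the lists are flattened so that point i's coordinates start at data[i * point_len]:

      /* Parallel portion (LOOP 1): one thread per point computes the
         distance to the reference point b. */
      __global__ void loop1Distances(const float *data, const float *b,
                                     float *dist, int n, int point_len)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float sum = 0.0f;
          for (int j = 0; j < point_len; ++j) {
              float d = b[j] - data[i * point_len + j];
              sum += d * d;
          }
          dist[i] = sqrtf(sum);
      }

      /* Within a single kernel, the "one thread does the serial part"
         idiom only works per block:
             if (threadIdx.x == 0) { ... serial portion ... }
             __syncthreads();  // every thread of this block waits here
         Across blocks, end the kernel and launch the next one instead. */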

    Read the article

  • Which virtualization solutions are available for non-x86 platforms?

    - by asmaier
    Are there any virtualization solutions like Xen, KVM, or VMware ESX available for non-x86 platforms? In particular, I'm interested in whether solutions are available, or whether Xen or KVM can be made to run, on these platforms: POWER5/6/7, PowerPC, Itanium 64, NEC SX-8/9, Cray X2, BlueGene. What are your experiences? Is it possible to virtualize GPUs like Nvidia's Tesla/Fermi?

    Read the article

  • CUDA: injecting my own PTX function?

    - by shoosh
    I would like to use a feature in PTX 1.3 which is not yet implemented in the C interface. Is there a way to write my own function in PTX and inject it into an existing binary? The feature I'm looking for is getting the value of %smid.
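
    In case it helps, later nvcc releases accept inline PTX via asm(), which avoids patching the binary at all; a hedged sketch (note the %% escaping required for special registers):

      __device__ unsigned int get_smid(void)
      {
          unsigned int id;
          /* read the multiprocessor id special register */
          asm("mov.u32 %0, %%smid;" : "=r"(id));
          return id;
      }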

    Read the article

  • Game of Life in F# with Accelerator

    - by jpalmer
    I'm trying to write Life in F# using Accelerator v2, but for some odd reason my output isn't square, despite all my arrays being square. It appears that everything but a rectangular area in the top left of the matrix is being set to false. I've got no idea how this could be happening, as all my operations should treat the entire array equally. Any ideas?

      open Microsoft.ParallelArrays
      open System.Windows.Forms
      open System.Drawing

      type IPA = IntParallelArray
      type BPA = BoolParallelArray
      type PAops = ParallelArrays

      let RNG = new System.Random()
      let size = 1024
      let arrinit i = Array2D.init size size (fun x y -> i)
      let target = new DX9Target()
      let threearr = new IPA(arrinit 3)
      let twoarr = new IPA(arrinit 2)
      let onearr = new IPA(arrinit 1)
      let zeroarr = new IPA(arrinit 0)
      let shifts = [|-1;-1|]::[|-1;0|]::[|-1;1|]::[|0;-1|]::[|0;1|]::[|1;-1|]::[|1;0|]::[|1;1|]::[]

      let progress (arr:BPA) =
          let sums = shifts // adds up whether a neighbor is on or not
                     |> List.fold (fun (state:IPA) t -> PAops.Add(PAops.Cond(PAops.Rotate(arr,t),onearr,zeroarr),state)) zeroarr
          PAops.Or(PAops.CompareEqual(sums,threearr),PAops.And(PAops.CompareEqual(sums,twoarr),arr)) // rule for life

      let initrandom () = Array2D.init size size (fun x y -> if RNG.NextDouble() > 0.5 then true else false)

      type meform () as self =
          inherit Form()
          let mutable array = new BoolParallelArray(initrandom())
          let timer = new System.Timers.Timer(1.0) // redrawing timer
          do base.DoubleBuffered <- true
          do base.Size <- Size(size,size)
          do timer.Elapsed.Add(fun _ -> self.Invalidate())
          do timer.Start()
          let draw (t:Graphics) =
              array <- array |> progress
              let bmap = new System.Drawing.Bitmap(size,size)
              target.ToArray2D array |> Array2D.iteri (fun x y t -> if not t then bmap.SetPixel(x,y,Color.Black))
              t.DrawImageUnscaled(bmap,0,0)
          do self.Paint.Add(fun t -> draw t.Graphics)

      do Application.Run(new meform())

    Read the article

  • Error while compiling Hello world program for CUDA

    - by footy
    I am using Ubuntu 12.10 and have successfully installed CUDA 5.0 and its sample kits too. I have also run sudo apt-get install nvidia-cuda-toolkit. Below is my hello world program for CUDA:

      #include <stdio.h>  /* Core input/output operations */
      #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */
      #include <math.h>   /* Common mathematical functions */
      #include <time.h>   /* Converting between various date/time formats */
      #include <cuda.h>   /* CUDA related stuff */

      __global__ void kernel(void) { }

      /* MAIN PROGRAM BEGINS */
      int main(void)
      {
          /* Dg = 1; Db = 1; Ns = 0; S = 0 */
          kernel<<<1,1>>>();

          /* PRINT 'HELLO, WORLD!' TO THE SCREEN */
          printf("\n Hello, World!\n\n");

          /* INDICATE THE TERMINATION OF THE PROGRAM */
          return 0;
      }
      /* MAIN PROGRAM ENDS */

    The following error occurs when I compile it with nvcc -g hello_world_cuda.cu -o hello_world_cuda.x:

      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `main':
      /home/adarshakb/Documents/hello_world_cuda.cu:16: undefined reference to `cudaConfigureCall'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__cudaUnregisterBinaryUtil':
      /usr/include/crt/host_runtime.h:172: undefined reference to `__cudaUnregisterFatBinary'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__sti____cudaRegisterAll_51_tmpxft_000033f1_00000000_4_hello_world_cuda_cpp1_ii_b81a68a1':
      /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFatBinary'
      /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFunction'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `cudaError cudaLaunch<char>(char*)':
      /usr/lib/nvidia-cuda-toolkit/include/cuda_runtime.h:958: undefined reference to `cudaLaunch'
      collect2: ld returned 1 exit status

    I am also making sure that I use gcc and g++ version 4.4 (as with 4.7 there is some problem with CUDA).
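
    A hedged reading of the errors themselves: every undefined symbol (cudaConfigureCall, cudaLaunch, __cudaRegisterFatBinary, ...) lives in the CUDA runtime library, so the link step is simply not pulling in libcudart; with two toolkits installed (the CUDA 5.0 release and Ubuntu's nvidia-cuda-toolkit package) it is also easy to invoke a mismatched nvcc. Two things worth trying (paths are assumptions):

      # link the CUDA runtime explicitly
      nvcc -g hello_world_cuda.cu -o hello_world_cuda.x -lcudart
      # or make sure the CUDA 5.0 nvcc is the one being invoked
      /usr/local/cuda-5.0/bin/nvcc -g hello_world_cuda.cu -o hello_world_cuda.x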

    Read the article
