Search Results

Search found 2101 results on 85 pages for 'nvidia geforce'.

Page 82/85 | < Previous Page | 78 79 80 81 82 83 84 85  | Next Page >

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click handler with code like this: int* i = 0; *i = 3; I'm running the debug version of the program, and when I click the button, Visual Studio grabs focus and reports an "Access violation writing location" exception; the program cannot recover from the error and all I can do is stop debugging. And this is the right behavior. Now I add some OpenGL initialization code to the OnInitDialog() method:
        HDC DC = GetDC(GetSafeHwnd());
        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
            1,                              // version number
            PFD_DRAW_TO_WINDOW |            // support window
            PFD_SUPPORT_OPENGL |            // support OpenGL
            PFD_DOUBLEBUFFER,               // double buffered
            PFD_TYPE_RGBA,                  // RGBA type
            24,                             // 24-bit color depth
            0, 0, 0, 0, 0, 0,               // color bits ignored
            0,                              // no alpha buffer
            0,                              // shift bit ignored
            0,                              // no accumulation buffer
            0, 0, 0, 0,                     // accum bits ignored
            32,                             // 32-bit z-buffer
            0,                              // no stencil buffer
            0,                              // no auxiliary buffer
            PFD_MAIN_PLANE,                 // main layer
            0,                              // reserved
            0, 0, 0                         // layer masks ignored
        };
        int pixelformat = ChoosePixelFormat(DC, &pfd);
        SetPixelFormat(DC, pixelformat, &pfd);
        HGLRC hrc = wglCreateContext(DC);
        ASSERT(hrc != NULL);
        wglMakeCurrent(DC, hrc);
    Of course this is not exactly what I do; it is a simplified version of my code. Now the strange things begin to happen: all initialization is fine and there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at *i = 3; and press F11 on it, the handler function halts immediately and focus returns to the application, which continues to work well. I can click the button again and the same thing happens. It seems as if someone handled the access-violation exception and silently returned execution to the main application's message loop. If I comment out the line wglMakeCurrent(DC, hrc);, everything works as before: the exception is thrown, Visual Studio catches it and shows the error message window, and the program must be terminated afterwards. I experience this problem under Windows 7 64-bit, with an NVIDIA GeForce 8800 and the latest drivers available from the website (dated 11.01.2010) installed. My colleague runs Windows Vista 32-bit and has no such problem - the exception is thrown and the application crashes in both cases. Well, I hope someone can help me :) PS The problem was originally posted under this topic.
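    For what it's worth, the symptoms match the way 64-bit Windows silently swallows exceptions that escape user-mode callbacks (code reached from a window procedure, which is exactly where a button handler runs). The sketch below shows the commonly cited opt-out; it is offered as a possible fix rather than a confirmed one, and the two policy functions are looked up dynamically because they only exist on systems with the relevant Windows 7 update - verify the names against MSDN before relying on them.
        #include <windows.h>

        #ifndef PROCESS_CALLBACK_FILTER_ENABLED
        #define PROCESS_CALLBACK_FILTER_ENABLED 0x1
        #endif

        // Ask 64-bit Windows 7 not to swallow exceptions raised inside user-mode callbacks.
        // Both functions are resolved at runtime because older systems do not export them.
        void DisableUserModeCallbackExceptionFilter()
        {
            typedef BOOL (WINAPI *GetPolicyFn)(LPDWORD);
            typedef BOOL (WINAPI *SetPolicyFn)(DWORD);

            HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
            GetPolicyFn getPolicy = (GetPolicyFn)GetProcAddress(kernel32, "GetProcessUserModeExceptionPolicy");
            SetPolicyFn setPolicy = (SetPolicyFn)GetProcAddress(kernel32, "SetProcessUserModeExceptionPolicy");

            DWORD flags = 0;
            if (getPolicy && setPolicy && getPolicy(&flags))
                setPolicy(flags & ~PROCESS_CALLBACK_FILTER_ENABLED);   // clear the "swallow exceptions" bit
        }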

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
    ==========
    motherboard == gigabyte 990FXA-UD7
    CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
    RAM == 8GB of DDR3 in 2 sticks (matched pair)
    HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
    HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    GPU == nvidia GTX-285
    ??? == no overclocking or other funky business
    USB == external seagate 2TB HDD for making backups
    DVD == one bluray burner (SATA)
    DVD == one DVD burner (SATA)
    64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
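    One hedged suggestion on the 3TB drive itself: with the BIOS in non-EFI mode, GRUB can only boot a disk that large if it carries a GPT label plus a small unformatted bios_grub partition for GRUB's core image (with EFI enabled, an EFI system partition plays that role instead). The commands below are only a sketch of such a layout - they assume the 3TB disk shows up as /dev/sda and that everything on it can be erased; adjust device names and sizes before running anything.
        sudo parted /dev/sda mklabel gpt
        sudo parted /dev/sda mkpart biosgrub 1MiB 3MiB           # tiny, left unformatted
        sudo parted /dev/sda set 1 bios_grub on                  # GRUB embeds its core image here
        sudo parted /dev/sda mkpart boot ext4 3MiB 8GiB          # /boot
        sudo parted /dev/sda mkpart swap linux-swap 8GiB 16GiB   # swap
        sudo parted /dev/sda mkpart root ext4 16GiB 100%         # /
    The installer's manual partitioner can create the same layout; the important parts are the GPT label and the bios_grub flag.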

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15. It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB swap 8GB /boot ext4 3TB / ext4 But I've also tried the following, just in case it matters: 100MB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas? ========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04) HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04) GPU == nvidia GTX-285 ??? 
    == no overclocking or other funky business
    USB == external seagate 2TB HDD for making backups
    DVD == one bluray burner (SATA)
    DVD == one DVD burner (SATA)
    The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB drive.
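    If it drops to the rescue prompt again, the prompt itself can show whether GRUB can find its own files. These are generic GRUB 2 rescue built-ins; the (hd0,gpt2)-style device names are placeholders - use whatever ls reports on the actual machine.
        grub rescue> ls                              # list the drives/partitions GRUB can see
        grub rescue> ls (hd0,gpt2)/                  # probe each one until /boot/grub shows up
        grub rescue> set root=(hd0,gpt2)
        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> insmod normal                   # "unknown filesystem" here means prefix points at the wrong place
        grub rescue> normal                          # boots normally if the modules load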

    Read the article

  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010 MIX10 was absolutely fantastic. Read my report of MIX10 to see why.   The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net became an even bigger success than I dared to dream of.   There was a huge discrepancy between the efforts by Microsoft to support .NET user groups to organize public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug  Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh” and even worse I seriously doubt we’ll ever get that number of people to such an event (which on top of that suffered from technical difficulties beyond our control).   What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “burned earth” left by the PDC keynote. Anyway, the stuff shown at the firestarter keynote was the topic of conversations among colleagues days later (“did you see that? oh yeah, that was seriously cool”). Things that I have learned/observed/noticed in 2010 In the long run, there’s a huge difference between “It works pretty well” and “it just works and I never have to think about it”. I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reasons, at least one game I couldn’t get run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: The combination of a NVIDIA GTX 460 and a GT 220 is running in “don’t make me think” mode for a couple of months now.   PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.   Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes, they just happen when every couple of days on different occasions (only thing in common: I clicked something in the main window – yeah, very helpful observation)   Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path for a beta to the RTM sounds to me like “please, please use it NOW!”).   I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability. Things I’m looking forward to in 2011 Obviously, the dotnet Cologne 2011. We already have been able to score some big name sponsors (Microsoft, Intel), but we’re still looking for more sponsors. And be assured that we’ll make sure that our partners get the most out of their contribution, regardless of how big or small.   MIX11, period.    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether C# next version’s async/await will be in it. 
Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session: Context (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future proof design for hardware we have yet to see. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP. Code (slide 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves. (slide 12-13) index<N>, extent<N>, and grid<N>. (Slide 14-16) array<T,N>, array_view<T,N> and comparison between them. (Slide 17) parallel_for_each. (slide 18, 21) restrict. (slide 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves. (slide 22) bring it altogether with a matrix multiplication example. (slide 23-24) accelerator, and accelerator_view. (slide 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!]. IDE (slide 34,37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it. (slide 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11. Summary (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.   "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
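    If a single self-contained snippet is handy for the Code part of the talk, something like the following covers the ground of slides 9-17 (array_view plus parallel_for_each on a simple array addition). Note that it uses the syntax C++ AMP eventually shipped with - extent<N> and restrict(amp) - rather than the Developer Preview's grid<N> / restrict(direct3d) spelling referenced above, so treat it as a sketch and adjust to whichever build the audience has.
        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        // Element-wise addition of two vectors on the accelerator (a sketch, not from the talk slides).
        void add_arrays(const std::vector<float>& a, const std::vector<float>& b, std::vector<float>& sum)
        {
            const int n = static_cast<int>(a.size());
            array_view<const float, 1> av(n, a);
            array_view<const float, 1> bv(n, b);
            array_view<float, 1> sv(n, sum);
            sv.discard_data();                       // no need to copy the old contents to the GPU

            parallel_for_each(sv.extent, [=](index<1> i) restrict(amp)
            {
                sv[i] = av[i] + bv[i];
            });

            sv.synchronize();                        // copy the results back into 'sum'
        }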

    Read the article

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET Mosaic Shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried replacing the globals and just writing the variables explicitly in the code, but that garbles the Output. Next I tried replacing all the half with float vars, but that still garbles the resulting Image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials for this topic? Here is the shader file for reference: /*****************************************************************************/ /* File: tiles.fx Details: Modified version of the NVIDIA Composer FX Demo Program 2004 Produces a tiled mosaic effect on the output. Requires: Vertex Shader 1.1 Pixel Shader 2.0 Modified by: [email protected] (www.xbdev.net) */ /*****************************************************************************/ float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f}; float ClearDepth = 1.0f; /******************************** TWEAKABLES *********************************/ half NumTiles = 40.0; half Threshhold = 0.15; half3 EdgeColor = {0.7f, 0.7f, 0.7f}; /*****************************************************************************/ texture SceneMap : RENDERCOLORTARGET < float2 ViewportRatio = { 1.0f, 1.0f }; int MIPLEVELS = 1; string format = "X8R8G8B8"; string UIWidget = "None"; >; sampler SceneSampler = sampler_state { texture = <SceneMap>; AddressU = CLAMP; AddressV = CLAMP; MIPFILTER = NONE; MINFILTER = LINEAR; MAGFILTER = LINEAR; }; /***************************** DATA STRUCTS **********************************/ struct vertexInput { half3 Position : POSITION; half3 TexCoord : TEXCOORD0; }; /* data passed from vertex shader to pixel shader */ struct vertexOutput { half4 HPosition : POSITION; half2 UV : TEXCOORD0; }; /******************************* Vertex shader *******************************/ vertexOutput VS_Quad( vertexInput IN) { vertexOutput OUT = (vertexOutput)0; OUT.HPosition = half4(IN.Position, 1); OUT.UV = IN.TexCoord.xy; return OUT; } /********************************** pixel shader *****************************/ half4 tilesPS(vertexOutput IN) : COLOR { half size = 1.0/NumTiles; half2 Pbase = IN.UV - fmod(IN.UV,size.xx); half2 PCenter = Pbase + (size/2.0).xx; half2 st = (IN.UV - Pbase)/size; half4 c1 = (half4)0; half4 c2 = (half4)0; half4 invOff = half4((1-EdgeColor),1); if (st.x > st.y) { c1 = invOff; } half threshholdB = 1.0 - Threshhold; if (st.x > threshholdB) { c2 = c1; } if (st.y > threshholdB) { c2 = c1; } half4 cBottom = c2; c1 = (half4)0; c2 = (half4)0; if (st.x > st.y) { c1 = invOff; } if (st.x < Threshhold) { c2 = c1; } if (st.y < Threshhold) { c2 = c1; } half4 cTop = c2; half4 tileColor = tex2D(SceneSampler,PCenter); half4 result = tileColor + cTop - cBottom; return result; } /*****************************************************************************/ technique tiles { pass p0 { VertexShader = compile vs_1_1 VS_Quad(); ZEnable = false; ZWriteEnable = false; CullMode = None; PixelShader = compile ps_2_0 tilesPS(); } }

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder whether the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX / OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, a race started: a race between the GFX giants ATI and NVIDIA to beat each other on frame rate and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring overclocking, optimized hardware cooling, etc. to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones on a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file-sharing networks. Then came the era of the Xbox and PS3. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but having been born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is… PC gamers deserve an equally democratized solution. This is where I think cloud computing can come to the rescue. It can minimize hardware requirements, virtually end software piracy, and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on Nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on… Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination and doesn't apply to me. So anyway, here goes. When I resume from either power-saving mode, I'll get graphics problems anywhere on the range from a few scattered random-colored pixels that won't change; all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, partial black screen, or both) always end in me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine what severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues. I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments. Hardware Info See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.) Screenshots These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it: See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question. Situations Tested Maverick (10.10) Suspend Seems to suspend nicely with nothing running Seems to suspend nicely with flash drive plugged in On resume from suspend with no flash drive, Terminal and gedit running: Funky graphics on top of log output, then blank screen with pixelated cursor; no response to power button (normally will shutdown 60 seconds later) Hibernate Seems to hibernate nicely with nothing running Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running Seems to not hibernate when flash drive plugged in Seems to not hibernate when System Monitor is running Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff Natty LiveCD (11.04_2010-12-22) When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the Unlock screen. It did that four times before I gave up and just tested suspend from the Unlock screen. Suspend Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with above-described random pixels and mouse pointer. 
No apparent response to input from mouse (clicking randomly). Keyboard and touchpad unrecognized.

    Read the article

  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (Main Windowless) Ubuntu system for my uses I will detail questions below, I understand this might be the wrong place to ask these questions. If so, my apologies and I thank you so much for your patience. Thanks to all the volenteers that have helped me learn ubuntu over the years (Since 5.10) This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have wrote this up in a format for easy navigation to important points Hopefully to less annoy your eyes. You're welcome :) or i'm sorry i annoy you. :( If you would be so kind, Please format answers as follows: question 1: _ _ _ _ _ or question 1-a: _ _ _ _ _ If you want to simply link me to relevant information, rather than type up something really detailed; that would be more than awesome! Memory Specific Questions Goal: Maximizing memory bandwith to better perform in Virtualization, and Large file compression. (Possible conflict?) Ganged vs Unganged "which is better?"** is relative, i know. But what about ganged vs unganged - With or without Bank/channel interleaving? a: Speculation - If i understand correctly, "channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half duplex operation.(probably wrong) but wouldn't ganged channels make this irrelevant? Memory Interleaving(bank). Does it have a down side? Does it require a ratio of clocks? (If I run 4x4gig ddr3) a. If im reading correctly(trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation. b. However it seems to me that it has to be: divisible by fractions of a master clock? So if i run memory at 1333mhz, then the mean between 2 (physical) banks would operate every (roughly) 600Mhz? Warning! Possibly utter nonsense: (1333/2 interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2x channels@4) c. which makes me wonder if there would be left over clock cycles the system would have to... "truncate/balance" or something? But I'm certain theres a feature somewhere i don't understand. Virtualization Questions AMD-V - Option of IOMMU Turned it on, why do i have extra option of "64MB"? If IOMMU is on, but "64MB" is "disabled", Is it on? (have scoured google, I still dont know) a. I think i understand that its supposed to (kind of) "set aside" a part of ram to act as a faster interactive zone for "stuff" (usb, Graphics, and... what?) b. I am using Nvidia graphics on AMD (Used kernel option "iommu=pt iommu=1, pt "passthrough"? No idea what they do, found it on google to solve boot up issue) c. Will this option help me use low latency sound hardware, like my midi keyboard? Can you recommend any additional tweaks? a. sysctl settings? b. swap settings? Grats, youve reached the end. Thanks for Reading.

    Read the article

  • DirectX particle system. ConstantBuffer

    - by Liuka
    I'm new to DirectX and I'm making a 2D game. I want to use a particle system to simulate a 3D starfield, so each star has to set its own constant buffer for the vertex shader, e.g. to set its world matrix. So if I have 500 stars (that move every frame) I need to call VSSetConstantBuffers 500 times, and map/unmap each buffer. With 500 stars I get an average of 220 fps, and that's quite good. My bottleneck is VS/PSSetConstantBuffers. If I don't call this function I get 400 fps (obviously nothing is displayed, since I don't set the position of the stars). So is there a method to speed up the rendering of the particle system? PS: I'm using Intel integrated graphics (HD 2000-3000). With an NVIDIA (or AMD) GPU will I have the same bottleneck? If, for example, I don't call SetShaderResources I get 10-20 fps more (for 500 objects) - not 180. Why does SetConstantBuffers take so long?
        LPVOID VSdataPtr = VSmappedResource.pData;
        memcpy(VSdataPtr, VSdata, CszVSdata);
        context->Unmap(VertexBuffer, 0);

        result = context->Map(PixelBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &PSmappedResource);
        if (FAILED(result))
        {
            outputResult.OutputErrorMessage(TITLE, L"Cannot map the PixelBuffer", &result, OUTPUT_ERROR_FILE);
            return;
        }
        LPVOID PSdataPtr = PSmappedResource.pData;
        memcpy(PSdataPtr, PSdata, CszPSdata);
        context->Unmap(PixelBuffer, 0);

        context->VSSetConstantBuffers(0, 1, &VertexBuffer);
        context->PSSetConstantBuffers(0, 1, &PixelBuffer);
    This updates and sets the buffers. It's part of the render method of a sprite class that also contains the vertex buffer and the texture to apply to the quads (it's a 2D game). I have an array of 500 stars (sprites set up with a star texture). Every frame: clear the back buffer; draw the array of stars; present the back buffer. Draw also calls the function update(), which calculates the position of the sprite on screen based on a "camera" class. OK, creating a vertex buffer with the vertices of each quad (star) seems to be good, since the stars don't change their "virtual" position; so.... In a particle system (where particles move) it's better to have all the objects in only one vertex array, rather than an array of different sprites/objects, in order to update all the vertices' positions with a single set-buffer call. In this case I have to use a dynamic vertex buffer with the vertex positions like this:
        verticesForQuad =
        {
            { XMFLOAT3( (float)halfDimensions.x - 1 + pos.x,  (float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(1.0f, 0.0f) },
            { XMFLOAT3( (float)halfDimensions.x - 1 + pos.x, -(float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(1.0f, 1.0f) },
            { XMFLOAT3(-(float)halfDimensions.x - 1 + pos.x,  (float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(0.0f, 0.0f) },
            { XMFLOAT3(-(float)halfDimensions.x - 1 + pos.x, -(float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(0.0f, 1.0f) },
            // ...other quads
        };
    where halfDimensions is half the size of the texture in pixels and pos is the virtual position of a star. Then I create an array of verticesForQuad and create the vertex buffer:
        ZeroMemory(&vertexDesc, sizeof(vertexDesc));
        vertexDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexDesc.ByteWidth = sizeof(VertexType) * 4 * numStars;
        ZeroMemory(&resourceData, sizeof(resourceData));
        resourceData.pSysMem = verticesForQuad;
        result = device->CreateBuffer(&vertexDesc, &resourceData, &CvertexBuffer);
    and call each frame:
        context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset);
    But if I want to add and remove objects, I have to recreate the buffer each time, don't I? Is there a faster way? I think I can create a vertex buffer with a max size (e.g. 10000 objects) and, when I update it, set only the 250 positions (for 250 objects, for example) and pass this number to the draw function as the vertex count (numObjs * 4) - or am I wrong? (See the sketch below.)
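    Yes - that is the usual pattern: one dynamic vertex buffer sized for the maximum number of particles, refilled once per frame with only the live ones, and a draw call whose index count matches that number. A rough sketch follows (MAX_STARS, activeStars, cpuVertices, starVB and starIB are illustrative names, not from the post above); note the buffer has to be created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE rather than D3D11_USAGE_DEFAULT, otherwise Map() will fail.
        // Creation (once): dynamic so the CPU can Map it every frame.
        D3D11_BUFFER_DESC vbDesc = {};
        vbDesc.Usage          = D3D11_USAGE_DYNAMIC;
        vbDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
        vbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        vbDesc.ByteWidth      = sizeof(VertexType) * 4 * MAX_STARS;
        device->CreateBuffer(&vbDesc, nullptr, &starVB);

        // Per frame: rebuild only the vertices of the stars that are currently alive.
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(context->Map(starVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            memcpy(mapped.pData, cpuVertices, sizeof(VertexType) * 4 * activeStars);
            context->Unmap(starVB, 0);
        }

        UINT stride = sizeof(VertexType), offset = 0;
        context->IASetVertexBuffers(0, 1, &starVB, &stride, &offset);
        context->IASetIndexBuffer(starIB, DXGI_FORMAT_R16_UINT, 0);   // 6 indices per quad, built once up front
        context->DrawIndexed(activeStars * 6, 0, 0);                  // draw only the live quads
    The same idea applies to the per-star constant buffers: one buffer holding all the star transforms (or positions baked straight into the vertices, as above) replaces 500 Map/VSSetConstantBuffers calls with a single one.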

    Read the article

  • ubuntu 10.04: wlan0 No such device

    - by AodhanOL
    I am using a Dell xps l702x and I am using ubuntu 10.04 because I need to use f77 (don't ask) and I wasn't able to get it working on later versions of ubuntu. I cannot use the wifi at all. I have tried to fix it and have had limited success (changed the output from rfkill list all from Hard blocked: yes, to hard blocked: no). Here is the output from rfkill list all aodhan@aodhan-laptop:~$ rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: dell-wifi: Wireless LAN Soft blocked: no Hard blocked: no 2: dell-bluetooth: Bluetooth Soft blocked: no Hard blocked: no Here is the output from iwconfig: aodhan@aodhan-laptop:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. pan0 no wireless extensions. And here is the output from ifconfig -a: aodhan@aodhan-laptop:~$ ifconfig -a eth0 Link encap:Ethernet HWaddr 5c:f9:dd:3d:f2:f7 inet addr:192.168.1.17 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::5ef9:ddff:fe3d:f2f7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3417 errors:0 dropped:0 overruns:0 frame:0 TX packets:2894 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3317567 (3.3 MB) TX bytes:505049 (505.0 KB) Interrupt:30 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:720 (720.0 B) TX bytes:720 (720.0 B) pan0 Link encap:Ethernet HWaddr 7e:7e:e8:39:70:af BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) And finally, the output from lspci: 00:00.0 Host bridge: Intel Corporation Device 0104 (rev 09) 00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation Device 0116 (rev 09) 00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04) 00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5) 00:1c.3 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 4 (rev b5) 00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5) 00:1c.5 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 6 (rev b5) 00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation Device 1c4b (rev 05) 00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05) 01:00.0 VGA compatible controller: nVidia Corporation Device 1246 (rev a1) 03:00.0 Network controller: Intel Corporation Device 008a (rev 34) 04:00.0 USB Controller: NEC Corporation Device 0194 (rev 04) 0a:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) Please help me! Thank you in advance. Edit: Further info - this is a dual install with a windows 7 system as the other option. 
However, I can't access that until tomorrow (I won't have access to the windows 7 disc until then, and grub isn't letting me load it). Therefore, I can't be sure whether it still works on that or not. It used to, though.
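    A few generic diagnostic commands may help narrow down whether the 10.04 kernel has any driver for that wireless chip at all (the "03:00.0 Network controller: Intel Corporation Device 008a" entry in lspci looks like a newer Intel Centrino part than a 10.04-era kernel knows about, though that is an assumption worth verifying):
        lspci -nn | grep -i network        # note the [8086:xxxx] ID of the wireless controller
        lsmod | grep iwl                   # is any Intel wireless module (iwlagn/iwlwifi) loaded?
        dmesg | grep -iE 'iwl|firmware'    # look for missing-firmware or probe errors
        sudo lshw -C network               # an "UNCLAIMED" entry means no driver is bound to the device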

    Read the article

  • Lag spikes at full CPU usage, lagy mouse, maybe video card

    - by Roberts
    My PC specs: Motherboard Name - Gigabyte GA-945PL-S3 CPU Type - DualCore Intel Core 2 Duo E4300, 1800 MHz (9 x 200) OS - Microsoft Windows 7 Ultimate OS Kernel Type - 32-bit OS Version - 6.1.7601 I bougth a new video card one month ago. GeForce 210. I didn't have any problems. I wanted to overclock it, in other words: "Play with it". So I installed Gigabyte EasyBoost from CD and overclocked the GPU 590 + 110 mhz, memory to max to 960mhz from 800mhz. Benchmarks showed a little bit bigger score. Then I overclocked shader clock from 1405 to [..] (don't remeber really). So I was playing Modern Warfare 2 when off sudden computer froze when I wanted to select team, I was afk before that. I had to reset CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured it out that when ever I open EasyBoost - Skype starts to glitch again. Now I use EVGA Precission X. Now after a month, I cleaned computer and closed the case, it was open all the time. I started to overclock GPU clock only (just a bit) because there was no problems that would stop me. So sometimes on heavy CPU load graphics starts to lag. Dragging a window is painful to watch too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is maximal). You may say that CPU fault it is, isn't it? But sometimes lag spikes starts randomly when CPU load is at maximum. All 3 benchmark softwares (PerformanceTest, NovaBench and MSI Kombustor) shows that performance of my video card has dropped about 25%. BUT! CPU score is lower too. I ignored these problems but when I refreshed Windows Experience Index I was shocked. Month before (in latvian language but not so hard to understand): Now 01.04.2012 (upgraded RAM): This happened when I tried to capture Minecraft with Fraps on underclocked GPU to 580mhz (def: 590mhz): All drivers are up to date. Average CPU temperature from 55°C to 75°C (at 70°C sometimes starts these lag spikes). Video card's tempratures are from 45°C to 60°C (very hard to reach 60°C). So my hope is that the video card is fine, cause this card is very new and I want to upgrade CPU anyways. Aplogies for my mistakes in vocabulary (I am trying to type this as fast I can). Update 02.04.2012 - 7:21 Forgot one thing, my hard disk is extrimly slow and I will upgrade it this week or next week so I will be installing same OS again. I am multi-tasker but I can't do much because of 1.8 GHz CPU and slow hard drive (Model ID - WDC WD800JD-60JRC0). The Windows Experience Index is back to normal. Actually "Spelu grafika" (Gaming graphics) are higher than month ago. During this test mouse was very lagy, but month ago there weren't any problems. WHY!?

    Read the article

  • RAID and Partitions, guidance Needed

    - by beauregarde
    Alright I have a Biostar TA790GX3A2+ Mobo 2x Seagate 750Gb Hard drive (with 2 different speeds) an X4 9750 A GeForce 9800GT and 2GB RAM Hardware Specs link text I want to configure my computer with partitions in various RAID arrays. The Partitions I know i want (disk letters are mostly for reference here) C: XP Boot D: XP Swap E: XP Run F: Games G: Data The Partitions I think I want (repeat caveat) H: small FAT for Win Legacy and DOS I: Linux J: Linux Swap K-?M?: Other Linux /whatever partitions N & O: Attic for D1 and D2 What I'd like to do, is have C: written on Disk 1 (D1),.. D: on D2,.. E: and F: striped on D1 & D2,.. G: mirrored or D1 & D2,.. I: on D2 (so i can just switch disc boot priority to open in Ubuntu),.. J: on D1,.. and H: somewhere low on D1 I am inexperienced with VMs, so i am unsure as to whether those run out of XP, or whether i need to reserve a primary partition for them. However, I think they would be preferable for testing new OS's to scheduling a partition for the same purpose. I'm also not married to XP, but -64 IS pretty important to me. QUestion Time 1) Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved? 2) My RAID is software, isnt it? 3) How much should I short a 750GB HD? And should i use that space for my attics, or for my attics and something else, or for something else (.iso's perhaps?)? 4) if XP is striped on D1 & D2, will that interfere egregiously with my Swap writes on D2? If so, would striping both XP and Swap relieve (or at least mitigate) that issue? Should XP and Swap just be written normally on 2 different HDs? 5) Should I keep DL's and Drivers on E: (XP Run), F: (Games), or elsewhere? 6) Is 4GB enough for C:? 7) Is 30GB enough (or too much) for E:? 8) How much to reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think i should put them? 9) Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB? 10) I cant predict what my Max Commit Charge will be, so recommendations for Pagefile size? 5GB? 12GB? 11) VMs, where do I run them? do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS? EC) What havent I considered that I really should? Notes: computer is mostly for playing games and watching media, though I wouldnt rule out the use of particularly blah-intensive anything.

    Read the article

  • Intermittent lockups, unable to diagnose in over a year

    - by Magsol
    Here's a real doosie; I may just give my firstborn child to whomever helps me solve this problem. In July 2008, I assembled what would be my desktop computer for graduate school. Here are the specs of the machine I built: Thermaltake 750W PSU Corsair Dominator 2x2GB 240-pin SDRAM Thermaltake Tower Asus P5K Deluxe Motherboard Intel Core 2 Quad Q9300 2.5GHz CPU 2 x GeForce 8600 GT WD Caviar Blue 640GB hard drive CD burner DVD burner Soon thereafter, I ordered a new motherboard (because I was an idiot; that first motherboard supported CrossFire, not SLI), an Asus P5N-D. I was originally running Windows XP SP3. Pretty much right into the start of the fall semester, my desktop would simply lock up after awhile. If my system was largely idling, it would be after 1-3 days. If was gaming, it often happened an hour or two into my gaming session, indicating a link to activity level. Here's where it started getting interesting. I started looking at the system temps. The CPU was warmer than it should have been (~60s C), so I purchased some more efficient cooling compound a way better cooler for it. Now it hardly goes over 40 C. Intel was even kind enough to swap it out for free, just to rule it out. Lockups continued. The graphics cards were also running pretty warm: about 60 C idling. Removing one of them seemed to improve stability a little bit...as in, it wouldn't lock up quite as frequently, but still always eventually locked up. But it didn't matter which card I used or removed, the lockups continued. I reverted back to the original motherboard, the P5K Deluxe. Lockups continued. I purchased an entirely new motherboard, eVGA's nForce 750i. Lockups continued. Ran memtest86+ over and over and over, with no errors. Even RMA'd the memory. Lockups continued. Replaced the PSU with a Corsair 750W PSU. Lockups continued. Tried disconnecting all IDE drives (HDDs are SATA). Lockups continued. Replaced both graphics cards with a single Radeon HD 4980. Average temps are now always around 50 C when idling, 60 C only when gaming. Lockups continued. Throughout the whole ordeal, the system has been upgraded from Windows XP SP3 to Vista 32-bit, to Vista 64-bit, and is now at Windows 7 64-bit. Lockups have occurred at every step along the way (each OS was in place for at least a few months before the next upgrade). Edit: By "upgrade" I mean clean install each time. In addition to those reformats, I have performed many, many other reformats of the system and a reinstall of whatever OS had been previously installed in an attempt to rectify this problem, to no avail./Edit When the system locks up, there's no blue screen, no reboot, no error message of any kind. It simply freezes in place until I hit the reset button. Very, very rarely, once Windows boots back up, the system informs me that Windows has recovered from an error, but it can never find the source aside from some piece of hardware. I've swapped out every component in this computer, and there are more fans in it than I care to count...though for the sake of completeness: top 80mm case fan (out) rear 80mm case fan (out) rear 120mm case fan (out) front 120mm case fan (in) side 250mm case fan (in) giant CPU fan on-board motherboard fan (the eVGA board) triple-fan memory setup (came with the memory) PSU internal fan another 120mm fan I stuck on the underside of the video card to keep hot air from collecting at the bottom of the case I'm truly out of ideas. ANY help at all would be oh-so-very GREATLY appreciated. Thank you!

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC last year March, here are my specs: Intel Core i3 550 @ 3GHz 4 GB RAM @400 MHz XFX GeForce 9500 GT graphics card 1GB @550MHz 500 GB HDD Lately as soon as I load my save game of Skyrim, it crashes. I have been playing Skyrim since I joined Gaming.SE site. Crashes as in the entire scene gets red lines. I can not ALT + TAB back or even CTRL + ALT + DEL either. My only recourse is a hard reset via the power button. Can not take a screen shot either. I have the latest Forceware 296.10 drivers also. This has been happening since the last 2 weeks. I always use Driver Sweeper to clean my old drivers, since that is what XFXForce recommends before installing new drivers. I installed MSI Afterburner lately to see my GPU temperature. My GPU is default, never over-clocked it. In MSI's Afterburner, I can not adjust fan speed. It is greyed out. Also in settings there is no fan tab. With normal Internet browsing it stays at 51 C. Ran Memtest86 over night with level 11. Took about 13 hours, but no errors in my RAM. I even re-installed my OS, with just the 296 drivers. The fan for the GPU does come on. I can play Diablo 2. I can not get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try and keep everything clean, since in my home town dust is an issue. Always keep cool my entire PC cabinet. My friend came with his functioning graphics card, we bought our PCs at the same time with exact same specifications. His card did not work either. Same problem with the scene freezing with red lines. I did do my research before posting here. That is how I was able to learn about MSI Afterburner, Driver Sweeper, SpeedFan etc. I followed posts on Tom's Hardware too regarding people that had similar problems. One person suggested and was followed by worked as well the suggestion to "Bake the card in an oven". Since I bought it, played Diablo 2 for months, Starcraft 2 campaign for months and Skyrim recently for months. Bought ME3 also. I am at my wits end. I do not know what else to do. I can go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine. Just not with 3D gaming. I originally posted this question on Gaming.SE But I was directed here. I have browsed the SU database for my problem and found this, this, and this. But none of these cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pin point the problem? Can it be the Motherboard? Thanks.

    Read the article

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running XNA app, using Reach profile, in VMWare Fusion host OS Mac OSX, VM is Windows XP SP 3 (my dual-boot OS). Running on MacBook Pro w/NVidia 320M graphics card When I am booted in to XP natively, my code works. The code is drawing cubes that are set up using vertex buffers When another friend runs this same code on Windows 7, it also works for him just fine When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program and this part displays fine. I get no crashing or errors, the geometry just doesn't appear. I tried Debug and Release. This is very basic operation so I'm thinking VMWare isn't the problem, but it's my code.... My init code: var vertexArray = verts.ToArray(); var indexArray = indices.ToArray(); indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly); indexBuffer.SetData(indexArray); vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly); vertexBuffer.SetData(vertexArray); My Draw code: // problem isn't here, tried no cull GraphicsDevice.RasterizerState = RasterizerState.CullClockwise; GraphicsDevice.BlendState = BlendState.AlphaBlend; GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true }; // Update View and Projection TileEffect.View = ((Game1)Game).Camera.View; TileEffect.Projection = ((Game1)Game).Camera.Projection; TileEffect.CurrentTechnique.Passes[0].Apply(); GraphicsDevice.SetVertexBuffer(vertexBuffer); GraphicsDevice.Indices = indexBuffer; GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3); For LoadContent: TileEffect = new BasicEffect(GraphicsDevice) { World = Matrix.Identity, View = ((Game1)Game).Camera.View, Projection = ((Game1)Game).Camera.Projection, VertexColorEnabled = true };

    Read the article

  • How do I use the sed command to remove all lines between 2 phrases (including the phrases themselves

    - by fzkl
    I am generating a log from which I want to remove X startup output which looks like this: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.31-607-imx51 armv7l Ubuntu Current Operating System: Linux nvidia 2.6.33.2 #1 SMP PREEMPT Mon May 31 21:38:29 PDT 2010 armv7l Kernel command line: mem=448M@0M nvmem=64M@448M mem=512M@512M chipuid=097c81c6425f70d7 vmalloc=320M video=tegrafb console=ttyS0,57600n8 usbcore.old_scheme_first=1 tegraboot=nand root=/dev/nfs ip=:::::usb0:on rw tegra_ehci_probe_delay=5000 smp dvfs tegrapart=recovery:1b80:a00:800,boot:2680:1000:800,environment:3780:40:800,system:38c0:2bc00:800,cache:2f5c0:4000:800,userdata:336c0:c840:800 envsector=3080 Build Date: 23 April 2010 05:19:26PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 16 19:52:00 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" Is there any way to do this without manually checking pattern for each line?
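    For the record, the range form of sed handles exactly this: /start/,/end/d deletes every line from the first matching line through the second, both included. A hedged example, assuming the block always begins with the "X.Org X Server" banner and ends with the "Using config directory" line, and that the log lives in session.log (both are placeholders to adjust):
        sed '/^X\.Org X Server/,/Using config directory/d' session.log > session.clean.log
        # or, to edit the file in place:
        sed -i '/^X\.Org X Server/,/Using config directory/d' session.log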

    Read the article

  • How to create platform independent 3D video on 3D TV via HDMI 1.4?

    - by artif
    I am writing a real-time, interactive 3D visualization program and at each point in the program, I can compute 2 images (bitmaps) that are meant to look 3D together by means of stereoscopy. How do I get my program to display the image pairs such that they look 3D on a 3D TV? Is there a platform independent way of accomplishing it? (By platform I mean independent of GPU brand, operating system, 3D TV vendor, etc.) If not, which is preferable-- to lock in by GPU, OS, or 3D TV? I suppose I need to be using an HDMI 1.4 cable with the 3D TV? HDMI 1.4 can encode stereoscopy via side-by-side method. But how do I send such an encoded signal to the monitor? What kind of libraries do I use for this sort of thing? Windows DirectShow? If DirectShow is correct, is there a cross platform equivalent available? If anyone asks, yes I have seen this question: http://stackoverflow.com/questions/2811350/generating-3d-tv-stereoscopic-output-programmatically. However, correct me if I am wrong, it does not appear to be what I'm looking for. I do not have an OpenGL or Direct3D program that generates polygons, for which a Nvidia card can do ad-hoc impromptu stereoscopy simply by rendering the scene from 2 slightly offset points of view and then displaying those 2 images on the monitor-- my program already has those image pairs and needs to display them (and they are not the result of rendering polygons). Btw, I have never done any major multimedia programming before and know very little about HDMI, Direct Show, 3D TVs, etc so pardon me if any parts of this question did not make any sense at all.
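    One reasonably vendor-neutral route is to avoid HDMI 1.4 frame packing altogether: draw the two bitmaps into the left and right halves of an ordinary full-screen window and switch the TV to its side-by-side 3D mode, which essentially every 3D TV exposes. The OpenGL fragment below sketches just that composition step; it assumes a current GL context and that the two eye images have already been uploaded as textures (context creation and the buffer swap depend on your windowing toolkit).
        #include <GL/gl.h>

        // Composite two per-eye textures side by side for a TV in "side-by-side" 3D mode.
        void presentSideBySide(GLuint leftTex, GLuint rightTex, int winWidth, int winHeight)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_TEXTURE_2D);
            glMatrixMode(GL_PROJECTION); glLoadIdentity();
            glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

            const GLuint eyeTex[2] = { leftTex, rightTex };
            for (int eye = 0; eye < 2; ++eye)
            {
                // Left eye fills the left half of the frame, right eye the right half;
                // the TV stretches each half back out to full width.
                glViewport(eye * winWidth / 2, 0, winWidth / 2, winHeight);
                glBindTexture(GL_TEXTURE_2D, eyeTex[eye]);
                glBegin(GL_QUADS);
                    glTexCoord2f(0, 0); glVertex2f(-1, -1);
                    glTexCoord2f(1, 0); glVertex2f( 1, -1);
                    glTexCoord2f(1, 1); glVertex2f( 1,  1);
                    glTexCoord2f(0, 1); glVertex2f(-1,  1);
                glEnd();
            }
            // SwapBuffers / glXSwapBuffers / eglSwapBuffers here, depending on platform.
        }
    The obvious cost is halved horizontal resolution per eye; full-resolution frame packing generally needs vendor-specific stereo APIs, which is where platform independence tends to end.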

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life: __global__ void gameOfLife(float* returnBuffer, int width, int height) { unsigned int x = blockIdx.x*blockDim.x + threadIdx.x; unsigned int y = blockIdx.y*blockDim.y + threadIdx.y; float p = tex2D(inputTex, x, y); float neighbors = 0; neighbors += tex2D(inputTex, x+1, y); neighbors += tex2D(inputTex, x-1, y); neighbors += tex2D(inputTex, x, y+1); neighbors += tex2D(inputTex, x, y-1); neighbors += tex2D(inputTex, x+1, y+1); neighbors += tex2D(inputTex, x-1, y-1); neighbors += tex2D(inputTex, x-1, y+1); neighbors += tex2D(inputTex, x+1, y-1); __syncthreads(); float final = 0; if(neighbors < 2) final = 0; else if(neighbors > 3) final = 0; else if(p != 0) final = 1; else if(neighbors == 3) final = 1; __syncthreads(); returnBuffer[x + y*width] = final; } I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: Memcpy input array to a 2d texture inputTex stored in a CUDA array. Output is memcpy-ed from global memory to host and then dealt with. As you can see a thread deals with a single pixel. I am unsure if that is the fastest way as some sources suggest doing a row or more per thread. If I understand correctly NVidia themselves say that the more threads, the better. I would love advice on this from someone with practical experience.
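    As an aside, the __syncthreads() calls in the kernel above do nothing useful, since the threads share no data. The classic optimization for stencil-style kernels like this is to stage a tile of the board (plus a one-cell halo) in shared memory so each cell is fetched once per block instead of up to nine times. The sketch below is an illustration rather than a drop-in replacement: it reads a plain float grid from global memory instead of the texture, wraps toroidally at the edges, and uses an arbitrary 16x16 tile:

    #define TILE 16

    __device__ int wrap(int v, int n) { return (v + n) % n; }

    // One Game of Life step with a shared-memory tile plus halo (toroidal boundaries assumed).
    __global__ void gameOfLifeShared(const float* in, float* out, int width, int height)
    {
        __shared__ float tile[TILE + 2][TILE + 2];

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;
        int tx = threadIdx.x + 1;
        int ty = threadIdx.y + 1;

        // Each thread loads its own cell; edge threads of the block also load the halo.
        tile[ty][tx] = in[wrap(y, height) * width + wrap(x, width)];
        if (threadIdx.x == 0)        tile[ty][0]        = in[wrap(y, height) * width + wrap(x - 1, width)];
        if (threadIdx.x == TILE - 1) tile[ty][TILE + 1] = in[wrap(y, height) * width + wrap(x + 1, width)];
        if (threadIdx.y == 0)        tile[0][tx]        = in[wrap(y - 1, height) * width + wrap(x, width)];
        if (threadIdx.y == TILE - 1) tile[TILE + 1][tx] = in[wrap(y + 1, height) * width + wrap(x, width)];
        if (threadIdx.x == 0 && threadIdx.y == 0)               tile[0][0]               = in[wrap(y - 1, height) * width + wrap(x - 1, width)];
        if (threadIdx.x == TILE - 1 && threadIdx.y == 0)        tile[0][TILE + 1]        = in[wrap(y - 1, height) * width + wrap(x + 1, width)];
        if (threadIdx.x == 0 && threadIdx.y == TILE - 1)        tile[TILE + 1][0]        = in[wrap(y + 1, height) * width + wrap(x - 1, width)];
        if (threadIdx.x == TILE - 1 && threadIdx.y == TILE - 1) tile[TILE + 1][TILE + 1] = in[wrap(y + 1, height) * width + wrap(x + 1, width)];

        __syncthreads();   // here the barrier is needed: every thread reads its neighbours' loads

        if (x >= width || y >= height) return;

        float p = tile[ty][tx];
        float n = tile[ty - 1][tx - 1] + tile[ty - 1][tx] + tile[ty - 1][tx + 1]
                + tile[ty][tx - 1]                        + tile[ty][tx + 1]
                + tile[ty + 1][tx - 1] + tile[ty + 1][tx] + tile[ty + 1][tx + 1];

        out[y * width + x] = (n == 3.0f || (n == 2.0f && p != 0.0f)) ? 1.0f : 0.0f;
    }

    Whether this beats the texture-cache version on a given GPU is something only profiling can answer; on Fermi-class hardware the texture cache already absorbs much of the redundant traffic, so the gain may be small.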

    Read the article

  • How to check for mip-map availability in OpenGL?

    - by Xavier Ho
    Recently I bumped into a problem where my OpenGL program would not render textures correctly on a 2-year-old Lenovo laptop with an nVidia Quadro 140 card. It runs OpenGL 2.1.2, and GLSL 1.20, but when I turned on mip-mapping, the whole screen is black, with no warnings or errors. This is my texture filter code: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); After 40 minutes of fiddling around, I found out mip-mapping was the problem. Turning it off fixed it: // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); I get a lot of aliasing, but at least the program is visible and runs fine. Finally, two questions: What's the best or standard way to check if mip-mapping is available on a machine, aside from checking OpenGL versions? If mip-mapping is not available, what's the best work-around to avoid aliasing?
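    On the first question, a rough runtime probe in plain C against the standard GL query API (the version-string parsing is deliberately simplistic and assumes the usual "major.minor..." format): GL_GENERATE_MIPMAP is core from OpenGL 1.4 onward and otherwise requires the SGIS_generate_mipmap extension, so checking those two covers availability, although a driver can still advertise support and render it badly, as it apparently does here.

    #include <stdio.h>
    #include <string.h>
    #include <GL/gl.h>

    /* Rough probe: is automatic mipmap generation (GL_GENERATE_MIPMAP) likely to be available?
       Must be called with a current GL context; on OS X the header is <OpenGL/gl.h> instead. */
    int mipmapSupportHint(void)
    {
        const char *version    = (const char *)glGetString(GL_VERSION);
        const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
        int major = 0, minor = 0;

        if (version == NULL || extensions == NULL)
            return 0;                                   /* no current context, or driver trouble */

        sscanf(version, "%d.%d", &major, &minor);

        if (major > 1 || (major == 1 && minor >= 4))
            return 1;                                   /* core feature since OpenGL 1.4 */
        if (strstr(extensions, "GL_SGIS_generate_mipmap") != NULL)
            return 1;                                   /* pre-1.4 extension fallback */
        return 0;
    }

    On the second question, if the probe fails or the driver is simply broken, building the chain on the CPU with gluBuild2DMipmaps, or staying with GL_LINEAR and accepting some aliasing, avoids relying on GL_GENERATE_MIPMAP at all.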

    Read the article

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each): clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL); clEnqueueReadBuffer(CommandQueue, SpectrumReMem, CL_FALSE, 0, SpectrumMemSize, SpectrumRe, 0, NULL, NULL); clEnqueueReadBuffer(CommandQueue, SpectrumImMem, CL_FALSE, 0, SpectrumMemSize, SpectrumIm, 0, NULL, NULL); When I analyze these with the NVIDIA visual profiler, I see that the actual memcpy operation only takes 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is way faster. Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (exactly before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way to combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?
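    On the last point, OpenCL 1.1 sub-buffers allow exactly that: one parent allocation holds all three spectra and is read back with a single clEnqueueReadBuffer, while each section keeps its own cl_mem handle to pass to kernels. A rough sketch, reusing the buffer names from above; the other names (Context, hostScratch) are illustrative, error checks are omitted, and each section's origin must respect the device's CL_DEVICE_MEM_BASE_ADDR_ALIGN, so with 4313 floats per section you would in practice pad each section up to an aligned size:

    #include <CL/cl.h>

    /* Illustrative sketch: one contiguous device buffer, three sub-buffer views into it. */
    size_t sectionBytes = 4313 * sizeof(cl_float);   /* pad to the device's base-address alignment in real code */
    cl_int err;

    cl_mem parent = clCreateBuffer(Context, CL_MEM_READ_WRITE, 3 * sectionBytes, NULL, &err);

    cl_buffer_region regions[3] = {
        { 0 * sectionBytes, sectionBytes },
        { 1 * sectionBytes, sectionBytes },
        { 2 * sectionBytes, sectionBytes },
    };

    cl_mem SpectrumAbsMem = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                              CL_BUFFER_CREATE_TYPE_REGION, &regions[0], &err);
    cl_mem SpectrumReMem  = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                              CL_BUFFER_CREATE_TYPE_REGION, &regions[1], &err);
    cl_mem SpectrumImMem  = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                              CL_BUFFER_CREATE_TYPE_REGION, &regions[2], &err);

    /* One transfer instead of three: read the whole parent buffer back in a single call. */
    clEnqueueReadBuffer(CommandQueue, parent, CL_FALSE, 0, 3 * sectionBytes,
                        hostScratch, 0, NULL, NULL);

    The ~130 us gaps themselves are most likely per-call driver launch overhead rather than transfer time, which is consistent with the single, three-times-larger read being so much faster.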

    Read the article

  • Compiling scalafx for Java 7u7 (that contains JavaFX 2.2) on OS X

    - by akauppi
    The compilation instructions of scalafx say to do: export JAVAFX_HOME=/Path/To/javafx-sdk2.1.0-beta sbt clean compile package make-pom package-src However, with the new packaging of JavaFX as part of the Java JDK itself (i.e. 7u7 for OS X) there no longer seems to be such a 'javafx-sdkx.x.x' folder. The Oracle docs say that JavaFX JDK is placed alongside the main Java JDK (in the same folders). So I do: $ export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk $ sbt clean [warn] Using project/plugins/ (/Users/asko/Sources/scalafx/project/plugins) for plugin configuration is deprecated. [warn] Put .sbt plugin definitions directly in project/, [warn] .scala plugin definitions in project/project/, [warn] and remove the project/plugins/ directory. [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins/project [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins [error] java.lang.NullPointerException [error] Use 'last' for the full log. Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? Am I doing something wrong or is scalafx not yet compatible with the latest Java release (7u7, JavaFX 2.2)? What can I do? http://code.google.com/p/scalafx/ Addendum ..and finally (following Igor's solution below) sbt run launches the colorful circles demo easily (well, if one has a supported GPU that is). Oracle claims that "JavaFX supports graphic hardware acceleration on any Mac OS X system that is Lion or later" but I am inclined to think the NVidia-powered Mac Mini I'm using does software rendering. A recent MacBook Air (core i7) is a completely different beast! :)
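    A hedged guess at the immediate NullPointerException: with 7u7 the JavaFX runtime jar lives inside the JDK itself, at Contents/Home/jre/lib/jfxrt.jar, while the export above points JAVAFX_HOME at the .jdk bundle root, so the build finds nothing where it expects an SDK layout. Something along these lines may get further (exactly which layout the scalafx build script expects under JAVAFX_HOME is an assumption here; check its project definition):

    # point at the JRE folder that actually contains jfxrt.jar (JavaFX 2.2 is co-bundled with 7u7)
    export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre
    ls "$JAVAFX_HOME/lib/jfxrt.jar"   # sanity check: the jar must exist at this path
    sbt clean compile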

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who have had the same experience as me: having to give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program was given and executes on Intel i7-3770k (not vectorized) in 100 ms, how long would it take to run if one parallelizes the code to a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelizing optimization techniques were done? (but you're only given 2-4 days to report the performance? assume that you didn't know the algorithm at all. Or perhaps it'd be safer if we just assume that it's an impossible situation to finish the job) Therefore, I'm wondering, what would most likely be the fastest way to give such a performance report? Is it safe to calculate solely from the hardware's capability, such as peak GFLOPs and memory bandwidth? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and the algorithm, and also the target hardware's specifications. Or perhaps there already exists a tool to (roughly) estimate the cost of porting code? (Please don't answer: 'kill yourself is the fastest way.')
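    The usual quick-and-dirty answer is a roofline-style bound: estimate the algorithm's total floating-point work and total memory traffic, divide by the target's peak compute rate and peak bandwidth, and take the larger of the two as a lower bound on runtime. It is only a bound, not a prediction, and the hard part remains estimating the flop and byte counts without knowing the algorithm. A minimal sketch in C, with rough Tesla C2075 data-sheet figures (roughly 1.03 TFLOP/s single precision, 144 GB/s) and made-up workload numbers used purely as an example:

    #include <stdio.h>

    /* Roofline-style lower bound: runtime can't beat whichever of compute or memory traffic dominates. */
    static double rooflineLowerBoundSeconds(double totalFlops, double totalBytes,
                                            double peakFlopsPerSec, double peakBytesPerSec)
    {
        double computeTime = totalFlops / peakFlopsPerSec;
        double memoryTime  = totalBytes / peakBytesPerSec;
        return computeTime > memoryTime ? computeTime : memoryTime;
    }

    int main(void)
    {
        /* Illustrative numbers only: 2 GFLOP of work touching 1 GB of data on a Tesla C2075. */
        double bound = rooflineLowerBoundSeconds(2e9, 1e9, 1.03e12, 144e9);
        printf("lower bound: %.3f ms\n", bound * 1e3);
        return 0;
    }

    Everything above that bound is a judgment call about how far from peak the real code will land, which is where the 2-4 days are better spent profiling a small prototype kernel than on more arithmetic.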

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user that uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85  | Next Page >