Search Results

Search found 955 results on 39 pages for 'gpu'.


  • Ubuntu 14.04 triple screen, third screen black and X cursor

    - by Horse
    I am having some issues getting my third screen working properly. I had triple screens working fine on 12.04, using two NVIDIA cards. I did a fresh install of 14.04 and am having no end of problems getting it working. The third screen either just stays disabled, or it is black with the cursor drawn as an X. I can only enable it from the NVIDIA X Server Settings tool; the native Ubuntu display settings won't even show the third screen. I tried copying the xorg.conf from my old install, which worked fine on the login screen after restarting X, but after I logged in it just sat there and did nothing (the mouse was still working). I am using gnome-session-fallback instead of Unity, if that makes any difference, but I still have these issues if I try Unity. How do I get my third screen working and displaying a desktop?

    Here is my current xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 331.20 (buildd@roseapple) Mon Feb 3 15:07:22 UTC 2014

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL 1907FP"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 76.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: unknown, VertRefresh source: unknown
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "DELL 1907FP"
            HorizSync 0.0 - 0.0
            VertRefresh 0.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor2"
            VendorName "Unknown"
            ModelName "DELL 1907FP"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 76.0
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 580"
            BusID "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 520"
            BusID "PCI:3:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 520"
            BusID "PCI:3:0:0"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0"
            # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0"
            # Removed Option "SLI" "Off"
            # Removed Option "BaseMosaic" "off"
            # Removed Option "metamodes" "GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-2: nvidia-auto-select +0+0, GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-3: nvidia-auto-select +1280+0, GPU-82e96214-175e-5e6a-218c-5bdbc948daf2.DVI-I-1: nvidia-auto-select +3200+0"
            # Removed Option "SLI" "off"
            # Removed Option "BaseMosaic" "on"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "nvidiaXineramaInfoOrder" "DFP-0"
            Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0"
            Option "SLI" "Off"
            Option "MultiGPU" "Off"
            Option "BaseMosaic" "off"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "nvidia-auto-select +0+0"
            # Removed Option "metamodes" "DVI-I-3: nvidia-auto-select +0+0"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            Option "SLI" "Off"
            Option "MultiGPU" "Off"
            Option "BaseMosaic" "off"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            Monitor "Monitor2"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            Option "SLI" "Off"
            Option "MultiGPU" "Off"
            Option "BaseMosaic" "off"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Here is my old 'working in 12.04' xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 310.19 ([email protected]) Thu Nov 8 02:08:55 PST 2012

        Section "ServerLayout"
            # Removed Option "Xinerama" "0"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen2"
            Screen 2 "Screen2" RightOf "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "1"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL 1907FP"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 76.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: unknown, VertRefresh source: unknown
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "DELL 1907FP"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 76.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor2"
            VendorName "Unknown"
            ModelName "Apple Cinema HD"
            HorizSync 74.0 - 74.6
            VertRefresh 59.9 - 60.0
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 580"
            BusID "PCI:1:0:0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 520"
            BusID "PCI:3:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 580"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-2: nvidia-auto-select +1280+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0 {viewportout=1280x720+0+152}"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            Monitor "Monitor2"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection


  • Slow draw on some apps and dynamic clocks not working properly with ATI/AMD proprietary drivers

    - by Rakeka
    I've recently purchased a new computer (around July 2010) and I've been having some problems with proprietary video drivers on Linux. The hardware is:

        Video: ATI/AMD Radeon HD 5870 (XFX HD-587X-ZNFC)
        Motherboard: Asus P7P55D-E Deluxe
        Processor: Intel i5 750
        Memory: Kingston HyperX KHX1600C8D3K2/4GX (2x - 8GB total)
        Power Supply: XFX P1-750B-CAG9

    There are no overclocks, not even on the memory (it runs at 1333 MHz due to the processor's memory-controller limitation). The operating system is a homebrew Linux distribution with the following software:

        Architecture: x86_64 (multilib)
        Kernel: 2.6.35.10
        Xorg: 7.5
        Window Manager: wmii-3.9.2
        Video Driver: ATI/AMD Catalyst 10.12

    There are no desktop-effects programs like Compiz Fusion or Beryl.

    The problems: with the ATI/AMD proprietary driver, some applications draw/redraw slowly, and the same applications make the driver raise the card clocks to maximum (0% GPU activity, only the clocks are increased). I don't know exactly how to describe the slow drawing, so I'll list some applications and symptoms.

    xterm: flickers a lot when drawing continuous output. When I'm in a workspace with a fullscreen xterm, the GPU load stays at 12% in idle, and with a smaller xterm the GPU load is smaller. "aticonfig --odgc" output:

        Default Adapter - ATI Radeon HD 5800 Series
                          Core (MHz)    Memory (MHz)
        Current Clocks :     157           300
        Current Peak :       850           1200
        Configurable Peak Range : [600-900]     [900-1300]
        GPU load :    12%

    "aticonfig --pplib-cmd 'get activity'" output:

        Current Activity is
        Core Clock: 157MHZ
        Memory Clock: 300MHZ
        VDDC: 950
        Activity: 12 percent
        Performance Level: 0
        Bus Speed: 5000
        Bus Lanes: 16
        Maximum Bus Lanes: 16

    More examples: mplayer's time info flickers on the terminal; "find /" flickers a lot (it takes some time to stop after Ctrl-C, but if I change workspace or put some window over it right after the Ctrl-C, it stops instantly); "cat somefile" takes some time to display if the file is big (Xorg.0.log, for example); vim and less (e.g. "find / | less") don't have many problems, just a little flicker when scrolling.

    mplayer (no GUI): slow playback and seeking with -vo x11; tearing with -vo xv; time info flickers on the terminal (an xterm consequence).

    gvim: slightly slow drawing when scrolling with Page Up/Page Down.

    Firefox: slow draw/redraw on some pages like www.boadica.com.br and sometimes on www.youtube.com with Flash enabled (never noticed it on many pages); corruption when informative yellow boxes are showing and I scroll the page (a gray box appears in place of the informative box).

    "Wallpaper": after minimizing a fullscreen window or changing to an empty workspace, it takes some time to redraw the wallpaper.

    "Video card": the core and memory clocks are increased by the events described above and in other situations, like changing workspace (even without a wallpaper) or minimizing, maximizing or moving a window. Idle clocks: core 157 MHz, memory 300 MHz. Full clocks: core 850 MHz, memory 1200 MHz.

    xpdf: painfully slow scrolling.

    display (from ImageMagick): slow menus and sometimes slow image redraw.

    Programs that I use that are apparently without problems: gimp, pidgin, mplayer (-vo gl, gl2), blender, unigine heaven (better fps than on Windows), doom3, tibia, penumbra overture, amnesia the dark descent (wine), diablo 2 (wine). No problems on Windows (Windows 7 Ultimate 64-bit).

    And a special note: full desktop effects from the Debian and Ubuntu GNOME appearance control panel don't cause ANY problems - even the core and memory clocks don't increase when changing workspace or minimizing, maximizing or moving a window.

    What I've tested. Unsuccessful tests: tested all driver versions since 10.6 (released at approximately the time I installed the first Slackware on this PC); tested another video card - an ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); tested some xorg.conf options that I found googling (some of these options are commented in my xorg.conf; I'll send the links at the end of the post); tested some patches like 107_fedora_dont_fill_bg_none.patch and xserver-xorg-backclear.patch from the Arch Linux Catalyst page (https://wiki.archlinux.org/index.php/ATI_Catalyst); tested other distros and software versions: XORG-7.6 on my own distribution, Debian Squeeze (testing - from 2010-12-20), Ubuntu Maverick (10.10), and Slackware 13.1. Distro info: architecture i386; Debian and Ubuntu with all default software (kernel, gnome, xorg, drivers); Slackware with Catalyst from the AMD page and default window managers like fvwm, xfce, and my own build of wmii.

    Successful tests: tested another video card (only on my homebrew distro) - an NVIDIA GeForce 7300GS with driver 260.19.29. That didn't show the slow-draw problems, but that card is a bit obsolete, so I don't know if it lacks features like the dynamic clocks. I don't have other video cards like nvidia g/gt/gts/gtx 200~400~500 or Radeon HD 3000/4000/6000 to make more tests. Also tested other hardware: Video: ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); Motherboard: Intel DG31PR; Processor: Core 2 Duo E6750; software for that hardware: fresh installs of the same distros (except mine) with the same program versions. That video card (HD 5570) stayed at maximum clocks (something like 500/750, I don't remember) the whole time on all the operating systems (Windows XP and Windows 7 too), but it didn't show the same problems that I have here.

    I've googled a lot about common problems with ATI/AMD proprietary drivers for Linux and didn't find similar problems, except for the Firefox corruption, where the suggested solution was to disable ATI Direct2DAccel and use XAA. With XAA the problems persist, and other applications like pidgin and the rest of Firefox showed the same slow draw/redraw problems.

    Open-source drivers: with the open-source drivers (xf86-video-ati-6.13.2) I didn't have the same slow-draw problems, but I had other problems that, for now, make them no viable solution. I won't discuss them here because they are another line of problems and would confuse everything. If they happen to be the only solution, I'll make another thread to discuss them.

    Logs and configs: kernel .config, dmesg, xorg package list, xorg.conf, Xorg.0.log
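    Since the report hinges on the clocks ramping up while GPU activity stays near idle, it can help to capture both numbers over time while reproducing a symptom. A small sketch using the same aticonfig commands quoted above (assuming Catalyst's aticonfig is on the PATH):

        # Log clocks and reported activity once a second while reproducing a symptom.
        while true; do
            date '+%H:%M:%S'
            aticonfig --odgc | grep -E 'Current Clocks|GPU load'
            aticonfig --pplib-cmd 'get activity' | grep -E 'Clock|Activity'
            sleep 1
        done | tee clock-trace.log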


  • High volume SVM (machine learning) system

    - by flyingcrab
    I'm working on a possible machine learning project that would be expected to do high-speed computations for machine learning using SVM (support vector machines) and possibly some ANN. I'm reasonably comfortable working with these in MATLAB, but primarily on small datasets, just for experimentation. I'm wondering if this MATLAB-based approach will scale, or should I be looking into something else? C++ / GPU-based computing? Java-wrapping the MATLAB code and pushing it onto App Engine? Incidentally, there seems to be a lot of literature on GPUs, but not much on how useful they are for machine learning applications using MATLAB. Also, what's the cheapest CUDA-enabled GPU money can buy? Is it even worth the trouble?


  • I want to run the examples of QtOpenCL, i.e. Qt with OpenCL - installation and setup help?

    - by Skkard
    The thing is, I have to run the QtOpenCL examples, as given here: http://labs.trolltech.com/blogs/2010/04/07/using-opencl-with-qt/. The problem is that I have no clue where to start. I downloaded the source for QtOpenCL, but it needs a valid OpenCL installation. I have Qt installed already. How do I install OpenCL? Unfortunately I don't have a GPU at home, and need to implement it on my CPU for now. I will later give a presentation where I will be supplied a system with a GPU. How do I go about installing OpenCL? Thank you.
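    One way to check that a CPU-only OpenCL installation actually took (whichever vendor SDK provides it - AMD's APP SDK, for instance, exposed the CPU as a device at the time) is a small host-side query; QtOpenCL then has the same headers and ICD loader to build against. A minimal C sketch, assuming CL/cl.h and -lOpenCL are available:

        /* cpu_cl_check.c - list OpenCL platforms and any CPU devices.
         * Build: gcc cpu_cl_check.c -lOpenCL -o cpu_cl_check
         */
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void) {
            cl_platform_id platforms[8];
            cl_uint nplat = 0;
            if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
                printf("No OpenCL platforms found - the runtime is not installed.\n");
                return 1;
            }
            for (cl_uint i = 0; i < nplat; ++i) {
                char name[256];
                clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
                cl_device_id dev;
                cl_uint ndev = 0;
                /* Ask specifically for CPU devices, since there is no GPU here. */
                cl_int err = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 1, &dev, &ndev);
                printf("Platform '%s': %u CPU device(s)%s\n", name, ndev,
                       err == CL_SUCCESS ? "" : " (none usable)");
            }
            return 0;
        }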


  • Untrusted GPGPU code (OpenCL etc) - is it safe? What risks?

    - by Grzegorz Wierzowiecki
    There are many approaches when it comes to running untrusted code on a typical CPU: sandboxes, fake roots, virtualization... What about untrusted code for GPGPU (OpenCL, CUDA, or already-compiled binaries)? Assuming that the memory on the graphics card is cleared before running such third-party untrusted code, are there any security risks? What kinds of risks? Any way to prevent them? (Possible sandboxing on GPGPU, or some other technique?) P.S. I am more interested in GPU binary-code-level security than in high-level GPGPU programming-language security (but those solutions are welcome as well). What I mean is that references to GPU opcodes (a.k.a. machine code) are welcome.
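    The question's own premise - that device memory is cleared between tenants - is exactly the part the host can enforce itself, since OpenCL does not promise zeroed allocations. A minimal sketch of that host-side scrub, using OpenCL 1.2's clEnqueueFillBuffer (the function wrapper and its name are just this example's):

        #include <CL/cl.h>

        /* Zero an OpenCL buffer so an untrusted kernel cannot read stale VRAM
         * contents left by a previous job. */
        static cl_int scrub_buffer(cl_command_queue queue, cl_mem buf, size_t nbytes) {
            const cl_uchar zero = 0;
            cl_int err = clEnqueueFillBuffer(queue, buf, &zero, sizeof(zero),
                                             0, nbytes, 0, NULL, NULL);
            if (err != CL_SUCCESS)
                return err;
            return clFinish(queue);  /* make sure the scrub completes first */
        }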


  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think I) want. A lot of tutorials, and even SO questions, have similar tips, generally covering: use GL face culling (the OpenGL function, not the scene logic); only send one matrix to the GPU (the projection-model-view combination), thereby reducing the MVP calculations from per vertex to once per model (as it should be); use interleaved vertices; minimise GL calls, batching where appropriate; and possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and saw almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU-bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.
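    Of the tips listed, "interleaved vertices" is the one most often mentioned without code. The idea is that position, normal and texcoord for each vertex sit next to each other in one VBO, so the GPU fetches a vertex from one contiguous region instead of three separate streams. A minimal sketch with a plain OpenGL vertex layout (attribute indices 0-2, and the loader setup, are assumptions of the example):

        /* Interleaved vertex layout sketch: assumes a GL 2.0+ context is current
         * and that glGenBuffers etc. come from the usual extension loader. */
        #include <GL/gl.h>
        #include <stddef.h>   /* offsetof */

        typedef struct {
            float px, py, pz;   /* position */
            float nx, ny, nz;   /* normal   */
            float u, v;         /* texcoord */
        } Vertex;

        void upload_interleaved(const Vertex *vertices, GLsizei vertexCount)
        {
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);

            /* One buffer, one stride: all three attributes walk the same memory. */
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, px));
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, nx));
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, u));
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glEnableVertexAttribArray(2);
        }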


  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on the application's performance. My problem is in the part of the application that generates the XML cascade file using Haar training; it is very slow and takes about 6 days. To get around this problem I decided to use multiprocessing, to minimize the total time to generate the Haar-training XML file. I found two options: OpenCL, and OpenMP with OpenMPI. Now I'm confused about which one to use. I read that OpenCL can use multiple CPUs and the GPU, but only on the same machine - is that so? On the other hand, OpenMP is for multiprocessing on one machine, and using OpenMPI we can use multiple CPUs over the network, but OpenMP has no GPU support. Can you please suggest the pros and cons of using either of the libraries?
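    Of the two candidates, OpenMP is the least invasive for CPU-only scaling: the training loop stays ordinary C++ and a single pragma spreads independent iterations over all cores of one machine. A minimal sketch of the pattern - evaluate_sample is a stand-in for the real per-sample Haar-training work, not OpenCV code:

        // Build with: g++ -fopenmp train_sketch.cpp -o train_sketch
        #include <omp.h>
        #include <cstdio>
        #include <vector>

        double evaluate_sample(int i) {          // stand-in for per-sample Haar work
            double acc = 0.0;
            for (int k = 1; k <= 1000; ++k) acc += (i % k) * 1e-6;
            return acc;
        }

        int main() {
            const int nsamples = 100000;
            std::vector<double> score(nsamples);

            // Each iteration is independent, so OpenMP can farm them out to all cores.
            #pragma omp parallel for schedule(dynamic)
            for (int i = 0; i < nsamples; ++i)
                score[i] = evaluate_sample(i);

            std::printf("done, using up to %d threads\n", omp_get_max_threads());
            return 0;
        }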


  • Lag spikes at full CPU usage, laggy mouse, maybe video card

    - by Roberts
    My PC specs: Motherboard - Gigabyte GA-945PL-S3; CPU - dual-core Intel Core 2 Duo E4300, 1800 MHz (9 x 200); OS - Microsoft Windows 7 Ultimate, 32-bit, version 6.1.7601. I bought a new video card one month ago, a GeForce 210, and I didn't have any problems. I wanted to overclock it - in other words, "play with it" - so I installed Gigabyte EasyBoost from the CD and overclocked the GPU to 590 + 110 MHz and the memory to its maximum of 960 MHz, up from 800 MHz. Benchmarks showed a slightly bigger score. Then I overclocked the shader clock from 1405 to [..] (don't really remember). I was playing Modern Warfare 2 when all of a sudden the computer froze as I went to select a team (I was AFK before that). I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again. Now I use EVGA Precision X. Now, after a month, I cleaned the computer and closed the case, which had been open all the time. I started to overclock the GPU clock only (just a bit), because nothing had happened that would stop me. Now, sometimes under heavy CPU load the graphics start to lag; dragging a window is painful to watch too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard-disk activity is maximal). You may say it's the CPU's fault, but sometimes the lag spikes start randomly when CPU load is at maximum. All 3 benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that my video card's performance has dropped about 25%. BUT! The CPU score is lower too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked. [Screenshot: the index from a month before - in Latvian, but not so hard to understand.] [Screenshot: the index now, 01.04.2012, after upgrading the RAM.] This happened when I tried to capture Minecraft with Fraps on a GPU underclocked to 580 MHz (default: 590 MHz). All drivers are up to date. Average CPU temperature is from 55°C to 75°C (at 70°C these lag spikes sometimes start). The video card's temperatures are from 45°C to 60°C (it's very hard to reach 60°C). So my hope is that the video card is fine, because the card is very new and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am trying to type this as fast as I can). Update 02.04.2012 - 7:21: I forgot one thing - my hard disk is extremely slow, and I will upgrade it this week or next week, so I will be installing the same OS again. I am a multi-tasker, but I can't do much because of the 1.8 GHz CPU and the slow hard drive (model WDC WD800JD-60JRC0). The Windows Experience Index is back to normal; actually "Spelu grafika" (gaming graphics) is higher than a month ago. During this test the mouse was very laggy, but a month ago there weren't any problems. WHY!?


  • My 3D games crash in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC last year in March; here are my specs: Intel Core i3 550 @ 3 GHz, 4 GB RAM @ 400 MHz, XFX GeForce 9500 GT graphics card with 1 GB @ 550 MHz, 500 GB HDD. Lately, as soon as I load my saved game in Skyrim, it crashes. I have been playing Skyrim since I joined the Gaming.SE site. Crashes as in: the entire scene gets red lines, I cannot Alt+Tab out or even Ctrl+Alt+Del, and my only recourse is a hard reset via the power button. I can't take a screenshot either. I have the latest ForceWare 296.10 drivers. This has been happening for the last 2 weeks. I always use Driver Sweeper to clean out my old drivers, since that is what XFXForce recommends before installing new drivers. I recently installed MSI Afterburner to see my GPU temperature. My GPU is at default settings; I never overclocked it. In MSI Afterburner I cannot adjust the fan speed - it is greyed out - and in its settings there is no fan tab. With normal Internet browsing it stays at 51°C. I ran Memtest86 overnight at level 11; it took about 13 hours, but found no errors in my RAM. I even re-installed my OS, with just the 296 drivers. The fan for the GPU does come on. I can play Diablo 2, but I cannot get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try to keep everything clean, since dust is an issue in my home town, and I always keep my entire PC cabinet cool. My friend came over with his working graphics card; we bought our PCs at the same time with the exact same specifications. His card did not work either - the same problem, with the scene freezing with red lines. I did do my research before posting here; that is how I learned about MSI Afterburner, Driver Sweeper, SpeedFan, etc. I also followed posts on Tom's Hardware regarding people who had similar problems. One person suggested - and others followed it and said it worked - baking the card in an oven. Since I bought it, I've played Diablo 2 for months, the StarCraft 2 campaign for months and, recently, Skyrim for months. I bought ME3 too. I am at my wits' end and do not know what else to do. I can go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine - just not for 3D gaming. I originally posted this question on Gaming.SE but was directed here. I have browsed the SU database for my problem and found this, this, and this, but none of them cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pin down the problem? Could it be the motherboard? Thanks.


  • Unity3D draw call optimization: static batching vs. manually drawing meshes with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw-call batching documentation. I understood it, and I want to use it (or something similar) to optimize my application. My situation is the following: I'm drawing hundreds of 3D buildings. Each building can be represented by a Mesh (or a SubMesh for each building, but I don't think this will affect performance). Each building can be textured with one of several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures). Texture mapping and facade-pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial in order to optimize the parameters passed to the GPU. The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few - it could even be just a float, I guess) that must be different for every draw call. I don't know exactly how to manage this situation in Unity3D. I'm trying two different solutions, neither of them completely implemented. Solution 1: build a GameObject for each building (I don't much like the overhead of a GameObject, anyway); prepare each GameObject to be static-batched with StaticBatchingUtility.Combine; pack all textures into an atlas; assign the parent GameObject of the combined batched objects the material (basically the shader and the atlas); change some properties in the material before drawing an object. The problem is the last step: let's say I have to assign a different id to an object before drawing it - how can I do this? If I use a different material for each object, I can't benefit from static batching; if I use a sharedMaterial and modify a material property, all GameObjects will reference the same modified variable. Solution 2: build a Mesh for every building (sounds better - no GameObject overhead); pack all textures into an atlas; draw each mesh manually using Graphics.DrawMesh; customize each DrawMesh call using a MaterialPropertyBlock. This would solve the issue of slightly modifying material properties for each draw call, but the documentation isn't clear on the following point: would several consecutive calls to Graphics.DrawMesh with different MaterialPropertyBlocks cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material, and optimize that (in such a way that the big atlas is passed to the GPU just once)?
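    For what it's worth, the Solution 2 pattern looks roughly like this as a Unity script - a hedged sketch, where the float property name "_BuildingId" is invented for the example and would have to exist in the shared shader:

        using UnityEngine;

        public class BuildingDrawer : MonoBehaviour
        {
            public Mesh buildingMesh;          // shared mesh (or one per building)
            public Material sharedMaterial;    // one material, with the packed atlas
            public int buildingCount = 100;

            void Update()
            {
                var props = new MaterialPropertyBlock();
                for (int i = 0; i < buildingCount; i++)
                {
                    props.Clear();
                    // Per-building value without instancing a new material.
                    props.SetFloat("_BuildingId", i);
                    Graphics.DrawMesh(buildingMesh,
                                      Matrix4x4.TRS(new Vector3(i * 20f, 0f, 0f),
                                                    Quaternion.identity, Vector3.one),
                                      sharedMaterial, 0, null, 0, props);
                }
            }
        }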


  • Silverlight Cream for March 20, 2010 -- #815

    - by Dave Campbell
    In this Issue: Andy Beaulieu (-2-, -3-), Alex Golesh, Damian Schenkelman, Adam Kinney (-2-), Jeremy Likness, Laurent Bugnion, and John Papa.

    Shoutouts: Adam Kinney has a good summary up of where to go for all the tools and toys: Install checklist for Silverlight 4 RC, Blend 4 Beta and Windows Phone Developer tools from MIX10... tons of links. Laurent Bugnion had a few announcements at MIX10: MVVM Light V3 released at #MIX10, which he followed with What's new in MVVM Light V3... now for Windows Phone! Laurent Bugnion also announced Sample code for my #mix10 talk online.

    From SilverlightCream.com:

    Physics Games in Silverlight on Windows Phone 7 - Andy Beaulieu has the Physics Helper working for WP7 already... read his post, check out all the links and get going on something fun... was great seeing you at MIX, too, Andy!

    Silverlight 4: GPU Accelerated PlaneProjection - Andy Beaulieu has a comparison up of PlaneProjection with and without the new GPU acceleration... be sure to read his notes section.

    Silverlight 4 PathListBox for Motion Path Animation - Have you heard of the PathListBox? Well, showing is better than telling, so check out Andy Beaulieu's post on it.

    Silverlight at Windows Phone 7 - Alex Golesh has a quick overview of developing a Windows Phone 7 app in Silverlight using the new toys and executing it in the emulator.

    Prism v2.1: Creating a Region Adapter for the Accordion control - Damian Schenkelman shows how to use the Accordion control from the toolkit as a region in a Prism app.

    Expression Blend 4 Beta Feature Overview available for download - Adam Kinney announced the presence of an Expression Blend whitepaper as well... you should go grab that too.

    .toolbox – Free online Silverlight and Expression Blend training - Want to improve your Silverlight chops or gain some Expression Blend chops? Check out the .toolbox post that Adam Kinney posted.

    Introducing the Visual State Aggregator - Jeremy Likness describes the basic panel A/panel B problem, describes ways he and other folks have flipped between them, then describes his Visual State Aggregator... and it's downloadable for you to give it a dance!

    Multithreading in Windows Phone 7 emulator: A bug - Laurent Bugnion found a bug with multithreading on the Windows Phone emulator. He confirmed this with the team, and has a workaround you'll be needing... thanks Laurent.

    Silverlight Overview - Technical Whitepaper - John Papa has reiterated the existence of this Silverlight 4 whitepaper... it was updated this week, and we all should be aware of it.


  • Ubuntu doesn't let me blacklist modules

    - by László Monda
    I'd like to blacklist a couple of modules - namely drm, drm_kms_helper, i2c_algo_bit and i915 - so that my integrated Intel GPU is never used and my Nvidia MXM card is used instead. I inserted the following lines into /etc/modprobe.d/blacklist.conf:

        blacklist i915
        blacklist drm
        blacklist drm_kms_helper
        blacklist i2c_algo_bit

    Despite the above, right after rebooting my laptop and typing lsmod I can see these modules loaded. Why does my blacklist get utterly disrespected, and what can I do about it?
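    A hedged note on a common cause of exactly this symptom: on Ubuntu the i915/drm stack is usually loaded from the initramfs, which carries its own copy of the modprobe configuration, so a blacklist in /etc/modprobe.d only takes effect after the initramfs is rebuilt. A sketch, assuming the blacklist entries above are already in place:

        # Rebuild the initramfs so it picks up the blacklist entries,
        # then reboot and re-check with lsmod.
        sudo update-initramfs -u
        sudo reboot
        lsmod | grep -E 'i915|drm|i2c_algo_bit'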


  • TechEd 2012 - day 3

    - by Stefan Barrett
    The content has become more useful for me as a developer, and I've now seen two things which I think will make a big difference: Fakes in VS2012 - allows me to stub or fake out libraries, making unit testing easier/possible; and C++ AMP & auto - auto might get me to start using C++ again (it makes code like "for each" loops much nicer/easier to write), while AMP is something I want to play with (it moves the processing onto the GPU). The food got a little better, while there was less sign of the snacks.
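    On the auto point, a small illustration of why it makes "for each" style code so much lighter - the same loop written with a C++03 iterator and then with C++11 auto plus the range-based for:

        #include <map>
        #include <string>
        #include <iostream>

        int main() {
            std::map<std::string, int> scores{{"amp", 1}, {"auto", 2}};

            // C++03: the iterator type had to be spelled out in full.
            for (std::map<std::string, int>::const_iterator it = scores.begin();
                 it != scores.end(); ++it)
                std::cout << it->first << '=' << it->second << '\n';

            // C++11: auto + range-based for says the same thing.
            for (const auto& kv : scores)
                std::cout << kv.first << '=' << kv.second << '\n';
            return 0;
        }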


  • Google Earth freezes during zoom-in on Intel processors with integrated graphics

    - by zigma80
    When I zoom in past a certain zoom level, Google Earth makes my system freeze completely, so that I have to power off or reboot. I use Kubuntu 12.04, and my laptop has an Intel(R) Core(TM) i3-2310M CPU @ 2.10GHz with HD 3000 graphics. I installed intel-gpu-tools and tried to fix it with "sudo intel_reg_write 0x2120 0x1206800" as explained here, but that didn't work. I wonder if there is any other solution out there...


  • Modules loading despite being added to the blacklist

    - by László Monda
    I'd like to blacklist a couple of modules - namely drm, drm_kms_helper, i2c_algo_bit and i915 - to prevent my integrated Intel GPU from being used and use my Nvidia MXM card instead. I inserted the following lines into /etc/modprobe.d/blacklist.conf:

        blacklist i915
        blacklist drm
        blacklist drm_kms_helper
        blacklist i2c_algo_bit

    Despite the above, right after rebooting my laptop and typing lsmod I can see these modules loaded. Why does my blacklist get utterly disrespected, and what can I do about it?


  • How can I compile SM 3.0 effects in D3D11 with SlimDX?

    - by jacker
        var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0",
                                                      ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None,
                                                      null, null, out str);
        var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode);

    This works fine, but if I try to use another shader model like 4.0 or 3.0, it throws an error on the effect creation: "E_FAIL: An undetermined error occurred (-2147467259)". How do I compile older shaders? I've also read about device contexts, but I can't find any information on how to use them to maintain DX9 compatibility.


  • Unable to boot Ubuntu 13.10 (nVidia GTX 770m and Intel HD 4600)

    - by Raziel Gonzalez
    Ever since I bought this laptop I've been trying to install Ubuntu on it. It came with W8 preinstalled. Up to this point, I've only been able to boot in UEFI mode to a black screen. I can tell it's trying to use the nVidia card (there's an LED on the computer whose color tells which GPU is in use), and if I press Ctrl+Alt+F1 I can go to console mode. Taking advantage of this, I tried to install Bumblebee, and after a successful install the LED that indicates which GPU is being used changed, indicating that it had switched to the Intel HD 4600 graphics. After the installation I tried to start the graphical interface (startx), with no success. Xorg.0.log shows the error:

        [ 3706.779] X.Org X Server 1.14.3
        Release Date: 2013-09-12
        [ 3706.782] X Protocol Version 11, Revision 0
        [ 3706.783] Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu
        [ 3706.783] Current Operating System: Linux ubuntu 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64
        [ 3706.783] Kernel command line: BOOT_IMAGE=/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper nomodeset --
        [ 3706.785] Build Date: 15 October 2013 09:23:37AM
        [ 3706.786] xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
        [ 3706.786] Current version of pixman: 0.30.2
        [ 3706.788] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        [ 3706.788] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        [ 3706.791] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 2 12:28:52 2013
        [ 3706.792] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        [ 3706.792] (==) No Layout section. Using the first Screen section.
        [ 3706.792] (==) No screen section available. Using defaults.
        [ 3706.792] (**) |-->Screen "Default Screen Section" (0)
        [ 3706.792] (**) |   |-->Monitor "<default monitor>"
        [ 3706.792] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration.
        [ 3706.792] (==) Automatically adding devices
        [ 3706.792] (==) Automatically enabling devices
        [ 3706.792] (==) Automatically adding GPU devices
        [ 3706.792] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
        [ 3706.792] Entry deleted from font path.
        [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist.
        [ 3706.792] Entry deleted from font path.
        [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist.
        [ 3706.792] Entry deleted from font path.
        [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist.
        [ 3706.792] Entry deleted from font path.
        [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist.
        [ 3706.792] Entry deleted from font path.
        [ 3706.792] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins
        [ 3706.792] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules"
        [ 3706.792] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices.
        [ 3706.792] (II) Loader magic: 0x7ff680918d20
        [ 3706.792] (II) Module ABI versions:
        [ 3706.792]     X.Org ANSI C Emulation: 0.4
        [ 3706.792]     X.Org Video Driver: 14.1
        [ 3706.792]     X.Org XInput driver : 19.1
        [ 3706.792]     X.Org Server Extension : 7.0
        [ 3706.793] (--) PCI:*(0:0:2:0) 8086:0416:1462:10e8 rev 6, Mem @ 0xf7400000/4194304, 0xb0000000/268435456, I/O @ 0x0000f000/64
        [ 3706.793] (II) Open ACPI successful (/var/run/acpid.socket)
        [ 3706.794] Initializing built-in extension Generic Event Extension
        [ 3706.795] Initializing built-in extension SHAPE
        [ 3706.796] Initializing built-in extension MIT-SHM
        [ 3706.797] Initializing built-in extension XInputExtension
        [ 3706.797] Initializing built-in extension XTEST
        [ 3706.798] Initializing built-in extension BIG-REQUESTS
        [ 3706.799] Initializing built-in extension SYNC
        [ 3706.799] Initializing built-in extension XKEYBOARD
        [ 3706.800] Initializing built-in extension XC-MISC
        [ 3706.801] Initializing built-in extension SECURITY
        [ 3706.802] Initializing built-in extension XINERAMA
        [ 3706.802] Initializing built-in extension XFIXES
        [ 3706.803] Initializing built-in extension RENDER
        [ 3706.804] Initializing built-in extension RANDR
        [ 3706.804] Initializing built-in extension COMPOSITE
        [ 3706.805] Initializing built-in extension DAMAGE
        [ 3706.806] Initializing built-in extension MIT-SCREEN-SAVER
        [ 3706.806] Initializing built-in extension DOUBLE-BUFFER
        [ 3706.807] Initializing built-in extension RECORD
        [ 3706.807] Initializing built-in extension DPMS
        [ 3706.808] Initializing built-in extension X-Resource
        [ 3706.809] Initializing built-in extension XVideo
        [ 3706.809] Initializing built-in extension XVideo-MotionCompensation
        [ 3706.810] Initializing built-in extension SELinux
        [ 3706.811] Initializing built-in extension XFree86-VidModeExtension
        [ 3706.811] Initializing built-in extension XFree86-DGA
        [ 3706.812] Initializing built-in extension XFree86-DRI
        [ 3706.812] Initializing built-in extension DRI2
        [ 3706.812] (II) "glx" will be loaded by default.
        [ 3706.812] (WW) "xmir" is not to be loaded by default. Skipping.
        [ 3706.812] (II) LoadModule: "dri2"
        [ 3706.812] (II) Module "dri2" already built-in
        [ 3706.812] (II) LoadModule: "glamoregl"
        [ 3706.813] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
        [ 3706.813] (II) Module glamoregl: vendor="X.Org Foundation"
        [ 3706.813]     compiled for 1.14.2.901, module version = 0.5.1
        [ 3706.813]     ABI class: X.Org ANSI C Emulation, version 0.4
        [ 3706.813] (II) LoadModule: "glx"
        [ 3706.813] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [ 3706.813] (II) Module glx: vendor="X.Org Foundation"
        [ 3706.813]     compiled for 1.14.3, module version = 1.0.0
        [ 3706.813]     ABI class: X.Org Server Extension, version 7.0
        [ 3706.813] (==) AIGLX enabled
        [ 3706.814] Loading extension GLX
        [ 3706.814] (==) Matched intel as autoconfigured driver 0
        [ 3706.814] (==) Matched vesa as autoconfigured driver 1
        [ 3706.814] (==) Matched modesetting as autoconfigured driver 2
        [ 3706.814] (==) Matched fbdev as autoconfigured driver 3
        [ 3706.814] (==) Assigned the driver to the xf86ConfigLayout
        [ 3706.814] (II) LoadModule: "intel"
        [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
        [ 3706.814] (II) Module intel: vendor="X.Org Foundation"
        [ 3706.814]     compiled for 1.14.3, module version = 2.99.904
        [ 3706.814]     Module class: X.Org Video Driver
        [ 3706.814]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.814] (II) LoadModule: "vesa"
        [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
        [ 3706.814] (II) Module vesa: vendor="X.Org Foundation"
        [ 3706.814]     compiled for 1.14.1, module version = 2.3.2
        [ 3706.814]     Module class: X.Org Video Driver
        [ 3706.814]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.814] (II) LoadModule: "modesetting"
        [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
        [ 3706.814] (II) Module modesetting: vendor="X.Org Foundation"
        [ 3706.814]     compiled for 1.14.1, module version = 0.8.0
        [ 3706.814]     Module class: X.Org Video Driver
        [ 3706.814]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.814] (II) LoadModule: "fbdev"
        [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
        [ 3706.815] (II) Module fbdev: vendor="X.Org Foundation"
        [ 3706.815]     compiled for 1.14.1, module version = 0.4.3
        [ 3706.815]     Module class: X.Org Video Driver
        [ 3706.815]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.815] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, HD Graphics, HD Graphics 2000, HD Graphics 3000, HD Graphics 2500, HD Graphics 4000, HD Graphics P4000, HD Graphics 4600, HD Graphics 5000, HD Graphics P4600/P4700, Iris(TM) Graphics 5100, HD Graphics 4400, HD Graphics 4200, Iris(TM) Pro Graphics 5200
        [ 3706.815] (II) VESA: driver for VESA chipsets: vesa
        [ 3706.815] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
        [ 3706.815] (II) FBDEV: driver for framebuffer: fbdev
        [ 3706.815] (--) using VT number 7
        [ 3706.819] (WW) Falling back to old probe method for modesetting
        [ 3706.819] (EE) open /dev/dri/card0: No such file or directory
        [ 3706.819] (WW) Falling back to old probe method for fbdev
        [ 3706.819] (II) Loading sub module "fbdevhw"
        [ 3706.819] (II) LoadModule: "fbdevhw"
        [ 3706.819] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
        [ 3706.819] (II) Module fbdevhw: vendor="X.Org Foundation"
        [ 3706.819]     compiled for 1.14.3, module version = 0.0.2
        [ 3706.819]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.819] (II) Loading sub module "vbe"
        [ 3706.819] (II) LoadModule: "vbe"
        [ 3706.819] (II) Loading /usr/lib/xorg/modules/libvbe.so
        [ 3706.819] (II) Module vbe: vendor="X.Org Foundation"
        [ 3706.819]     compiled for 1.14.3, module version = 1.1.0
        [ 3706.819]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.819] (II) Loading sub module "int10"
        [ 3706.819] (II) LoadModule: "int10"
        [ 3706.819] (II) Loading /usr/lib/xorg/modules/libint10.so
        [ 3706.819] (II) Module int10: vendor="X.Org Foundation"
        [ 3706.819]     compiled for 1.14.3, module version = 1.0.0
        [ 3706.819]     ABI class: X.Org Video Driver, version 14.1
        [ 3706.819] (II) VESA(0): initializing int10
        [ 3706.820] (EE) VESA(0): V_BIOS address 0x0 out of range
        [ 3706.820] (II) UnloadModule: "vesa"
        [ 3706.820] (II) UnloadSubModule: "int10"
        [ 3706.820] (II) Unloading int10
        [ 3706.820] (II) UnloadSubModule: "vbe"
        [ 3706.820] (II) Unloading vbe
        [ 3706.820] (EE) Screen(s) found, but none have a usable configuration.
        [ 3706.820] (EE) Fatal server error:
        [ 3706.820] (EE) no screens found(EE)
        [ 3706.820] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        [ 3706.820] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        [ 3706.820] (EE)
        [ 3706.827] (EE) Server terminated with error (1). Closing log file.

    I also saved the dmesg output in case it can be of any help. In order to get this far I had to boot with the nomodeset option and remove quiet and splash. Has anyone seen this same error? Any guidance? I've tried other Linux distros, and so far the only one that is able to boot is openSUSE 12.3, without any issues (but only when I switch to legacy mode instead of UEFI).


  • ERROR running Bumblebee on 13.10

    - by paul
    I'm trying to get Bumblebee working again after an upgrade to Saucy. Running software with optirun gives the following output:

        optirun nvidia-settings
        [ 45.697126] [ERROR]Cannot access secondary GPU - error: [XORG] (EE) Failed to load /usr/lib/xorg/modules/libglamoregl.so: /usr/lib/xorg/modules/libglamoregl.so: undefined symbol: _glapi_tls_Context
        [ 45.697179] [ERROR]Aborting because fallback start is disabled.

    Does anyone know how to fix this? Thanks! :)


  • Ubuntu 14.04 Nvidia Optimus Bumblebee error

    - by Cristian
    I know that Ubuntu 14.04 has nvidia-prime for Nvidia Optimus, but I don't like it, and I wasn't able to get it to work either. After upgrading from Ubuntu 12.04 everything crashed, so I did a clean install of Ubuntu 14.04 and Bumblebee, but now I have new troubles. After running optirun glxgears I get the following error:

        [ 4703.996785] [ERROR]Cannot access secondary GPU, secondary X is not active.
        [ 4703.996910] [ERROR]Aborting because fallback start is disabled.

    Please help.


  • Generic log analyzer that produces reports

    - by Eugene
    About 600 customers use our application. We have very detailed logs for everything that happens in the application, from changes in the data model, memory and CPU/GPU usage to clicks on the UI elements. We want to be able to parse the logs coming from these customers and analyze them to understand how users use our application and what happens internally in the application. Is there a log analyzer that can produce such reports automatically?


  • bumblebee does not work with metacity and KWin, but works with compiz

    - by cpu2
    If I try to launch something with optirun under Compiz, it works. If I try to launch something with optirun under KDE (KWin) or Metacity, it gives me:

        [ 247.384077] [ERROR]Cannot access secondary GPU - error: [XORG] (EE)
        [ 247.384117] [ERROR]Aborting because fallback start is disabled.

    If it matters, I'm trying to launch Portal 2 with Wine. My machine is an Acer Aspire TimelineX with an Intel Core i5 (HD 3000 integrated graphics) and an Nvidia GeForce GT 540M with Optimus.


  • Running C++ AMP kernels on the CPU

    - by Daniel Moth
    One of the FAQs we receive is whether C++ AMP can be used to target the CPU. For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 - that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB, which conforms to the same interface. When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU. Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow. In fact, if you do nothing in code and the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities, WARP also uses SSE3 and SSE 4.1 - it does not currently use AVX, and on such systems you hopefully have a DX11 GPU anyway).

    A few things to know about WARP: it is our fallback CPU solution, not intended as a primary target of C++ AMP. WARP stands for Windows Advanced Rasterization Platform, and you can read old info on this MSDN page on WARP. What is new in the Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on. It is not currently clear if we will have a CPU fallback solution for non-Windows-8 platforms when we ship. When you create a WARP accelerator, its is_emulated property returns true. WARP does not currently support double precision.

    BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower-case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the warp size in our Ref emulator is 4 - Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging.

    Comments about this post by Daniel Moth welcome at the original blog.
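    A short sketch of the detection logic described above: enumerate accelerators, fall back to WARP explicitly when no real DX11 hardware is present, and use is_emulated to tell where the code ended up (names as in the VS 11 Developer Preview):

        #include <amp.h>
        #include <iostream>

        int main() {
            using namespace concurrency;

            // Look for real (non-emulated) DX11 hardware among all accelerators.
            accelerator chosen(accelerator::default_accelerator);
            bool have_hw = false;
            for (const accelerator& a : accelerator::get_all()) {
                if (!a.is_emulated) { chosen = a; have_hw = true; break; }
            }
            if (!have_hw) {
                // No DX11 hardware: fall back to WARP (CPU, multi-core + SSE).
                chosen = accelerator(accelerator::direct3d_warp);
            }

            std::wcout << L"Using: " << chosen.description
                       << L" (is_emulated=" << chosen.is_emulated << L")\n";
            return 0;
        }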


  • Why is my card blacklisted by Unity when all the requirements are fulfilled?

    - by Oxwivi
    The following is the Unity test output:

        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string:  2.1.2 NVIDIA 173.14.30

        Not software rendered:    yes
        Not blacklisted:          no
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes

        Unity supported:          no

    As you can see, all requirements are fulfilled but my GPU is blacklisted. What can I do about it?
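    Output in this shape typically comes from the Unity support test that ships with nux; re-running it directly prints the same checklist (the path below is its usual location on Ubuntu, so treat it as an assumption for your release):

        /usr/lib/nux/unity_support_test -p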


  • OpenCL 1.2 specification released: separate compilation/linking, device partitioning, and support for new device types

    OpenCL 1.2 specification released - separate compilation/linking, device partitioning, and support for new device types. The Khronos Group has just ratified and published the OpenCL 1.2 (Open Computing Language) specification, the standardized API and C-language extension supporting development on GPUs and distributed parallel programming across several types of compatible processors. Among this version's new features: device partitioning allows a device to be divided into several sub-devices, to directly control the tasks assigned to each compute unit; separation of compilation and ...
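    Of the features listed, device partitioning is the one with a genuinely new host API. A hedged sketch of the OpenCL 1.2 call (clCreateSubDevices), splitting a device into sub-devices of 4 compute units each; the wrapper function is invented for the example, and 'dev' is assumed to come from a normal clGetDeviceIDs query:

        #include <CL/cl.h>

        /* Partition 'dev' into sub-devices of 4 compute units each (OpenCL 1.2). */
        cl_uint partition_by_four(cl_device_id dev, cl_device_id *subs, cl_uint max_subs) {
            const cl_device_partition_property props[] = {
                CL_DEVICE_PARTITION_EQUALLY, 4, 0   /* 4 compute units per sub-device */
            };
            cl_uint created = 0;
            if (clCreateSubDevices(dev, props, max_subs, subs, &created) != CL_SUCCESS)
                return 0;
            return created;  /* each sub-device can then get its own context/queue */
        }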

