Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • Firefox 15 hangs with Ubuntu kernel update

    - by Marty
    I recently ran updates and it told me that in order to get those updates I had to update my kernel. I did that and also updated Firefox to 15. Since then, Firefox hangs or grey-screens on sites I go to. This lasts anywhere from 5-10 seconds to 2-3 minutes. I have restarted Firefox with all add-ons disabled, but it still did the same thing. I found a bug report on Launchpad that sounded like what was happening to me, but I haven't received any error codes, just the hanging/frozen screens. It also seems to drive up my CPU usage, making the rest of Ubuntu lag while Firefox is hung. I would guess the cause is a conflict between the updated kernel and the updated Firefox, but I'm still fairly new to Ubuntu and not sure where to go from here. Is there anything else to try? My Toshiba laptop specs are: Ubuntu 12.04 (32 bit); Linux 3.2.0-30-generic-pae #48-Ubuntu SMP Fri Aug 24 17:14:09 UTC 2012 i686 i686 i386 GNU/Linux; Firefox 15.0.1; Intel® Pentium(R) Dual CPU T3400 @ 2.16GHz × 2; Mobile Intel® GM45 Express Chipset x86/MMX/SSE2. Thanks!

    Read the article

  • 12.04.1 no audio through HDMI

    - by JoJo
    Having a bit of an issue getting audio to go through HDMI. Here are the base specs: OS: Ubuntu Desktop 12.04.1 LTS x64; CPU: AMD A10-5800K 3.8G 4M FM2 R; Mobo: MSI FM2-A75MA-E35; Vid Card: (integrated on CPU) AMD Radeon HD 7660D. HDMI sound works fine under Win7 (after mobo and vid drivers are installed), so it's not physically broken. Audio through the normal headphone jacks works fine under Ubuntu. Looking at the audio panel, there is no HDMI output at all. aplay -l also reports only: card 0: Generic [HD-Audio Generic], device 0: ALC887-VD Analog [ALC887-VD Analog], Subdevices: 1/1, Subdevice #0: subdevice #0. In Additional Drivers there are two versions: ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER and ATI/AMD PROPRIETARY FGLRX GRAPHICS DRIVER (post-release update). The first installs, but the problem persists (I do get more resolutions to pick from). The second does not, reporting that it failed to install and to find details at /var/log/jockey.log. I looked at the log, and it's insanely long; if necessary I can get it to you guys. Did more research and some people had luck manually installing the drivers, so I tried to give that a shot by following https://help.ubuntu.com/community/BinaryDriverHowto/ATI#Manually_installing_Catalyst_12.6, starting at 3.1 Manually installing Catalyst 12.6. I immediately had 2 issues: (1) the AMD website does not provide any drivers for Linux, and (2) the following command did not work: sudo sh amd-driver-installer-12-6-x86.x86_64.run --buildpkg Ubuntu/precise (it fails with: sh: 0: Can't open amd-driver-installer-12-6-x86.x86_64.run). Some other posts said to update "alsa-drivers", but that also did not work, as the install command for the new version failed as well; I forget the exact issue, but it was similar to the above, cannot open / cannot find. Any help would be greatly appreciated!

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect when you enter a special region. Let's assume there is a huge map, so big that simply checking every region would become a performance problem. I've seen Bukkit (a modding API for Minecraft servers) firing an event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through several trigger zones to see if the player is inside your region - for every player. That seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world into multiple parts, register the event not on the movement itself but on all the parts covered by your area, and only check areas registered in the current part (see the sketch below). And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent you a move packet shortly before entering and another after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to do it for every move.
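    A minimal TypeScript sketch of that grid idea (all names - TriggerGrid, segmentHitsBox, the cell size - are invented for illustration, not taken from any engine): triggers are registered in every cell their box overlaps, each move is checked only against triggers found in the cells along the movement segment, and a segment-vs-box test confirms the hit even when the player was never observed inside the region between two packets.

      // Axis-aligned trigger regions stored in a uniform grid of cells.
      interface AABB { minX: number; minY: number; maxX: number; maxY: number; }
      interface Trigger { id: string; box: AABB; }

      class TriggerGrid {
        private cells = new Map<string, Trigger[]>();
        constructor(private cellSize: number) {}

        private key(cx: number, cy: number): string { return cx + "," + cy; }

        // Register a trigger in every cell its bounding box overlaps.
        add(t: Trigger): void {
          const x0 = Math.floor(t.box.minX / this.cellSize), x1 = Math.floor(t.box.maxX / this.cellSize);
          const y0 = Math.floor(t.box.minY / this.cellSize), y1 = Math.floor(t.box.maxY / this.cellSize);
          for (let cx = x0; cx <= x1; cx++)
            for (let cy = y0; cy <= y1; cy++) {
              const k = this.key(cx, cy);
              if (!this.cells.has(k)) this.cells.set(k, []);
              this.cells.get(k)!.push(t);
            }
        }

        // Triggers whose cells are touched by the move from (x0,y0) to (x1,y1).
        // Sampling every half cell is a coarse walk; an exact DDA grid traversal
        // would avoid missing corner-clipped cells.
        candidates(x0: number, y0: number, x1: number, y1: number): Set<Trigger> {
          const found = new Set<Trigger>();
          const steps = Math.max(1, Math.ceil(Math.hypot(x1 - x0, y1 - y0) / (this.cellSize / 2)));
          for (let i = 0; i <= steps; i++) {
            const t = i / steps;
            const cx = Math.floor((x0 + (x1 - x0) * t) / this.cellSize);
            const cy = Math.floor((y0 + (y1 - y0) * t) / this.cellSize);
            for (const trig of this.cells.get(this.key(cx, cy)) ?? []) found.add(trig);
          }
          return found;
        }
      }

      // Segment-vs-box test (slab method), so a player who crossed the whole
      // trigger between two move packets is still detected.
      function segmentHitsBox(x0: number, y0: number, x1: number, y1: number, b: AABB): boolean {
        let tMin = 0, tMax = 1;
        for (const [o, d, lo, hi] of [[x0, x1 - x0, b.minX, b.maxX], [y0, y1 - y0, b.minY, b.maxY]]) {
          if (Math.abs(d) < 1e-9) { if (o < lo || o > hi) return false; continue; }
          let t1 = (lo - o) / d, t2 = (hi - o) / d;
          if (t1 > t2) [t1, t2] = [t2, t1];
          tMin = Math.max(tMin, t1); tMax = Math.min(tMax, t2);
          if (tMin > tMax) return false;
        }
        return true;
      }

      // Per move packet: only the triggers registered near the path are tested exactly.
      // const hits = [...grid.candidates(ax, ay, bx, by)].filter(t => segmentHitsBox(ax, ay, bx, by, t.box));

    Whether a grid beats a quadtree depends on trigger density; the gain over the naive approach is simply that each move only looks at the handful of triggers registered near the path instead of every region on the map.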

    Read the article

  • Can windows XP be better than any Ubuntu (and Linux) distro for an old PC?

    - by Robert Vila
    The old laptop is a Toshiba 1800-100: CPU: Intel Celeron 800h; RAM: 128 MB (works OK); HDD: 15GB (works OK); Graphics adapter: integrated 64-bit AGP graphics accelerator, BitBIT, 3D graphic acceleration, 8 MB video RAM. Only Windows XP is installed, and it works OK: it can be used, but it is slow (and hateful). I thought that I could improve performance (and its look) easily, since it is an old PC (drivers and everything known for years...), by installing a light Linux distro. So I decided to install a light or customized Ubuntu distro, or an Ubuntu/Debian derivative, but haven't been successful with any; not even booting LiveCDs: not even AntiX, not even Puppy. The Lubuntu wiki says it won't work because the last two releases need more RAM (and some blogs say much more CPU - even a Core Duo for the new Lubuntu!), let alone Xubuntu. The problems I have faced are: 1. There are thousands of pages talking about the same 10-15 lightweight distros, and saying more or less the same things, but NONE addresses something as simple as what the RAM/swap-partition proportion should be for this kind of installation. NONE! 2. Loading the LiveCD I have tried several different boot options (I don't understand much about this and there's ALWAYS a line of explanation missing) and never receive error messages. Booting just stops at different stages, but often seems to stop just when the X server is about to start. I am able to boot to the command line. 3. I don't know whether the problem is RAM size or a problem with the graphics driver (which surprises me because it is a well known brand and line of computers). So I don't know if creating a swap partition would help booting the LiveCD. 4. I would like to try the graphical interface with the LiveCD before installing, if creating a swap partition for this purpose would help. How can I do the partitioning? I tried to use Boot Rescue CD, but it advises me against continuing. I would appreciate any ideas on these questions. Thank you

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that pretend to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but I also understand that, as usual, it comes at a price. In particular: 1. Live binding is quite heavy, either with dirty watching (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. The usual time split: 90% of features are made in 10% of project time, but the remaining 10% take 90% of project time. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time... If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual-list idea/solution. I found one in vanilla JavaScript that is 120 LOC. The implementation of the same thing in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself... So here is my question: what benefits does that MVC stuff, or data binding, give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?
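    For reference, the "dirty watch" cost in point 1 can be sketched in a few lines of TypeScript. This is an illustrative toy, not any framework's actual API; the point is that every registered binding is re-evaluated on every digest pass, changed or not, which is where the CPU goes.

      type Watcher = { get: () => unknown; last: unknown; onChange: (v: unknown, old: unknown) => void };

      class Scope {
        private watchers: Watcher[] = [];

        // Bind "whenever this expression changes, run this view update".
        watch(get: () => unknown, onChange: (v: unknown, old: unknown) => void): void {
          this.watchers.push({ get, last: get(), onChange });
        }

        // One digest: re-run every getter, fire callbacks on change, and repeat
        // until a full pass sees no changes (or give up after maxPasses).
        digest(maxPasses = 10): void {
          for (let pass = 0; pass < maxPasses; pass++) {
            let dirty = false;
            for (const w of this.watchers) {
              const value = w.get();
              if (value !== w.last) {
                const old = w.last;
                w.last = value;
                w.onChange(value, old);
                dirty = true;
              }
            }
            if (!dirty) return;
          }
          throw new Error("digest did not stabilize");
        }
      }

      // Usage: after any model mutation the framework would schedule a digest.
      const model = { name: "world" };
      const scope = new Scope();
      scope.watch(() => model.name, (v) => console.log("render: hello " + v));
      model.name = "data binding";
      scope.digest(); // logs: render: hello data binding

    With N watchers each digest is O(N) per pass and may run several passes, which is the CPU trade-off; Object.observe()-style change notification avoids the polling but pays in bookkeeping and memory instead.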

    Read the article

  • ATI Catalyst driver 12.8 is not using hardware acceleration on Precise

    - by Jack Wright
    I've been using Ubuntu and ATI Catalyst for years. On my clean install of Ubuntu 12.04 I've noticed that Catalyst 12.6 and then 12.8 are not actually using my HD5750 GPU for hardware acceleration - high CPU usage, zero GPU load. Everything installed correctly with no hassles; fglrxinfo and vainfo are correct as per this HowTo for Precise. I have an Ubuntu 10.04 installation with Catalyst 12.6 on the same hardware which does use the GPU - low CPU usage, high GPU load when transcoding video files or playing video content. The VA-API drivers are not installed on the 10.04 build; they are not mentioned in this HowTo for Lucid. fgl_glxgears frame rates on Precise are a fifth of the rates on Lucid.
      LUCID
      jw@Kworld:~$ fgl_glxgears
      Using GLX_SGIX_pbuffer
      16867 frames in 5.0 seconds = 3373.400 FPS
      12523 frames in 5.0 seconds = 2504.600 FPS
      13763 frames in 5.0 seconds = 2752.600 FPS
      PRECISE
      jw@NewWorld12:~$ fgl_glxgears
      Using GLX_SGIX_pbuffer
      12905 frames in 5.0 seconds = 2581.000 FPS
      3230 frames in 5.0 seconds = 646.000 FPS
      517 frames in 5.0 seconds = 103.400 FPS
      518 frames in 5.0 seconds = 103.600 FPS
      6489 frames in 5.0 seconds = 1297.800 FPS
    This is glxgears running fullscreen. In Lucid (10.04) I can't see the gears, they are spinning so fast, but in Precise (12.04) they are really sluggish. Has anyone else noticed a problem like this? Cheers, Jack.

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question: SuperUser reader Bart wants to know why he’s not saving any time with application minimization: I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up; it’s slow, unresponsive and even sometimes totally freezes for a minute or two. It’s not a hardware problem, as it’s been like that forever on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful PCs / Macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something, for, let’s say, Photoshop, so it won’t slow down after a long period of inactivity? So what is the deal? Why does he find himself waiting to maximize a minimized app?
    The Answer: SuperUser contributor Allquixotic explains why:
    Summary: The immediate problem is that the programs you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.
    Explanation: The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve the responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop. RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.
    Remedies: Here is an explanation of the available remedies, and their general effectiveness:
    Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc., depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more.
    Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped out when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations.
    Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.
    Use a newer system architecture: Depending on the age of your system, you may be using an out-of-date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations. You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010.
    Links: Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.
    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML canvas element was still kinda fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, translation was done by manipulating the DOM position of the canvases, while sprite animation consisted of altering the visibility of the already drawn canvases (z-indexes, of course, were the tricky part). It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones. Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript - especially in the less optimized engines). It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback to doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical of the claims that canvas+JS - especially WebGL - will ever be a good alternative to native code). Any thoughts?
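    To make the comparison concrete, here is a rough TypeScript sketch of the two schemes (standard DOM/Canvas APIs; the Sprite shape is invented for illustration): approach A moves pre-drawn per-sprite canvases around with CSS and toggles frame visibility, approach B redraws every sprite into one big canvas each frame.

      interface Sprite { x: number; y: number; frame: number; frames: HTMLCanvasElement[]; }

      // Approach A: per-sprite canvases. Translation is a style change and
      // animation is a visibility toggle, so nothing is re-rasterized per frame;
      // the cost is one DOM node (and its backing store) per sprite frame.
      function updateSpriteDom(s: Sprite): void {
        s.frames.forEach((canvas, i) => {
          canvas.style.position = "absolute";
          canvas.style.transform = `translate(${s.x}px, ${s.y}px)`;
          canvas.style.visibility = i === s.frame ? "visible" : "hidden";
        });
      }

      // Approach B: a single canvas, cleared and redrawn every frame.
      function redrawScene(ctx: CanvasRenderingContext2D, sprites: Sprite[]): void {
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        for (const s of sprites) {
          ctx.drawImage(s.frames[s.frame], s.x, s.y); // one draw call per sprite, every frame
        }
      }

    Approach A leans on the browser's compositor, which is exactly the trade described in the question: memory (many backing stores) in exchange for per-frame CPU time.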

    Read the article

  • OVM Server for SPARC Enhancements

    - by Owen Allen
    Oracle VM Servers for SPARC saw a few improvements in Ops Center 12.2. In addition to brownfield support, we've made a number of enhancements to let you add OVM Servers for SPARC to a Server Pool and enable migration of their guests.
    - When you discover an OVMSS Control Domain and manage it with an Ops Center Agent, its guests are automatically discovered as well. The guest metadata is initially put in the local metadata library in the /guests directory, but you can move it from one library to another to enable migration.
    - Once you've discovered an OVMSS control domain, you can add it to a server pool, even if it's already configured and running logical domains. Even if live migration between OVMSS systems isn't possible due to CPU incompatibilities, you can still put them in a server pool together and enable guest recovery by configuring the CPU architecture of the guest domain as generic.
    - You can mark a guest's storage as shared to indicate that it's available to other managed OVM Server systems with the same back-end name. This lets you use storage not fully managed in an Ops Center library as part of guest migration.
    Put together, these enhancements make it much easier to manage and maintain OVMSS guests.

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I generate some Perlin noise onto a small texture and then stretch it out to the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels). They are rendered to the same texture through Perlin noise. However, this doesn't look very pretty, so I was wondering if it would be possible to double the number of pixels using a shader, and round the edges at the same time? In other words, improve the DPI. I'm using SharpDX with DirectX 11, through its toolkit feature, but any pointer in the right direction (for instance through HLSL) would be a great help. Thanks in advance.

    Read the article

  • Oracle optimizer statistics FAQ

    - by Yusuke.Yamamoto
    The original Japanese text of this entry did not survive this copy (only "?" placeholders remain). The recoverable fragments show a short numbered FAQ about Oracle optimizer statistics: checking statistics through the DBA_TABLES, DBA_INDEXES and DBA_TAB_COLUMNS dictionary views (including the LAST_ANALYZED column), and gathering them with DBMS_STATS, e.g. EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP'); for a single table or EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT'); for a whole schema.

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before. I plan to add detail. Let me know which ones you'd like to see more on:
    - Memory Manager Redesign: predictable sizing and governing of SQL memory consumption. sp_configure 'max server memory' now limits all memory committed by SQL Server; Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool). Improved scalability of complex queries and operations that make >8K allocations. Improved CPU and NUMA locality for memory accesses. A single memory manager handles page allocations of all sizes. Consistent out-of-memory handling and management across different internal components.
    - Optimized Memory Broker for Column Store indexes (Project Apollo).
    - Resource Governor: support larger-scale multi-tenancy by increasing the maximum number of resource pools from 20 to 64 [for 64-bit]. Enable predictable chargeback and isolation by adding a hard cap on CPU usage. Enable vertical isolation of machine resources: resource pools can be affinitized to individual or groups of schedulers or to NUMA nodes. New DMV for resource pool affinity.
    - CLR 4 support, which adds .NET Framework 4 advantages.
    - sp_server_diagnostics: captures diagnostic data and health information about SQL Server to detect potential failures; analyzes internal system state; reliable when nothing else is working.
    - New SQLOS DMVs (in 2008 R2 SP1): SQL Server related configuration (new DMV sys.dm_server_services); OS related resource configuration (new DMVs sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry); XEvents for SQL and OS related Perfmon counters; extended sys.dm_os_sys_info. See previous blog posts here and here.
    - Scale / mission critical: increased scalability (support for Windows 8 max memory and logical processors; Dynamic Memory support in Standard Edition); Hot-Add Memory enabled when virtualized; various Tier 1 performance improvements, including reduced instructions for superlatches.
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Fresh install of Ubuntu 12.10 won't boot on Asus X101CH Eee PC

    - by Najdmie
    I did a fresh install of Ubuntu 12.10 on my Asus X101CH Eee PC, using a live USB which I made with Startup Disk Creator, replacing Ubuntu 12.04. The installation ran smoothly, but when I boot, it goes to a purple screen for a second, then a lot of text like the following shows up in sequence:
      Starting crash report submission daemon [OK]
      Starting CPU interrupts balancing daemon [OK]
      Stopping save kernel messages [OK]
      _
    And the cursor just keeps blinking for hours. I can't log in. Pressing Alt + F2 did not bring me to console mode. I thought it might be a partition problem, so I formatted the whole disk by creating a new partition table with GParted from an Ubuntu 12.04 live USB. I noticed that I can't try Ubuntu using the 12.10 live USB either; it just goes to a blank screen when I hit the 'Try Ubuntu' button. But the same problem arose. I even changed the pen drive for the live USB a couple of times. I happen to know that the Intel Atom N2600 Cedar Trail CPU in my computer is not well supported in Linux; I managed to install its drivers in Ubuntu 12.04, although the computer went blank during the installation.

    Read the article

  • How can I get a 32-bit program to run on 64-bit Ubuntu?

    - by Carol
    Sorry to be asking this, but I have read quite a few posts and articles in a lot of places about the issue I am having, to no avail. I am trying to get a Second Life viewer (Firestorm) to run, and just keep getting the '64-bit error message' it throws. I have installed every 32-bit lib I can find; it still doesn't work. I think I am surely missing some setting somewhere, or running Firestorm from the wrong place, or something, but I have no idea what. FWIW, Firestorm loads but doesn't behave right in the 32-bit version, either. I have actually tried several Linux distros, 32- and 64-bit. Mint 32-bit runs it straight off, and Mint 64-bit throws the '64-bit error'. openSUSE, any version, won't run it at all. Oh, and all the other SL viewers I have tried behave the same way. I am beginning to wonder if my setup just doesn't like Linux. Here is my system info: CPU: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz (2661 MHz); Memory: 4026 MB; OS Version: Linux 3.2.0-29-generic-pae #46-Ubuntu SMP Fri Jul 27 17:25:43 UTC 2012 i686; Graphics Card Vendor: ATI Technologies Inc.; Graphics Card: ATI Radeon HD 5700 Series. I appreciate any help anyone can give me! Thanks so much! Carol :)

    Read the article

  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: "It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility." But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e. one that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500 ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit has been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point out the sound reasons why it cannot be done. Such answers are not useful.

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating size using the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to scale particle size with distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all of that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for every particle. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference, since each frame I have to update all particles' data anyway? About the particle simulation... where should it run? On the CPU or on the vertex processors? Something tells me a mobile CPU would do it faster than its vertex unit (at least today in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through particles should look realistic.)
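    A rough TypeScript sketch of the billboard-quad layout being considered (names and the exact attribute layout are made up for illustration; camera-facing rotation is omitted, so the quads here are axis-aligned, and indexed triangles are used instead of a strip): four interleaved vertices per particle plus six indices per quad.

      interface Particle { x: number; y: number; z: number; size: number;
                           r: number; g: number; b: number; a: number; }

      const FLOATS_PER_VERTEX = 3 + 4 + 2; // position(3) + color(4) + texcoord(2), interleaved

      function buildParticleBuffers(particles: Particle[]): { vertices: Float32Array; indices: Uint16Array } {
        // Uint16 indices cap one batch at 65536 vertices, i.e. 16384 particles.
        const vertices = new Float32Array(particles.length * 4 * FLOATS_PER_VERTEX);
        const indices = new Uint16Array(particles.length * 6);
        // Corner offsets and texture coordinates shared by every quad.
        const corners = [[-1, -1, 0, 1], [1, -1, 1, 1], [1, 1, 1, 0], [-1, 1, 0, 0]];

        particles.forEach((p, i) => {
          const half = p.size / 2;
          for (let c = 0; c < 4; c++) {
            const [ox, oy, u, v] = corners[c];
            let o = (i * 4 + c) * FLOATS_PER_VERTEX;
            vertices[o++] = p.x + ox * half;            // position
            vertices[o++] = p.y + oy * half;
            vertices[o++] = p.z;
            vertices[o++] = p.r; vertices[o++] = p.g;   // color, duplicated on all
            vertices[o++] = p.b; vertices[o++] = p.a;   // four vertices of the quad
            vertices[o++] = u;   vertices[o++] = v;     // texcoord
          }
          const base = i * 4;
          indices.set([base, base + 1, base + 2, base, base + 2, base + 3], i * 6); // two triangles
        });
        return { vertices, indices };
      }

      // Each frame the CPU rebuilds `vertices` and re-uploads it (e.g. to a
      // streaming vertex buffer); `indices` never changes and is uploaded once.

    The duplicated color and the fixed texcoords are exactly the redundancy noted above; on plain GLES 2 it is usually accepted as the price of quads, while hardware with instancing or geometry expansion can push the per-vertex duplication back onto the GPU.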

    Read the article

  • Copying photos from camera stalls - how to track down issue?

    - by Hamish Downer
    When I copy files from my camera (connected via USB) to the SSD in my laptop, a few files get copied and then the copy stalls. I'm not sure why; any ideas or things to investigate are appreciated, as are bug reports to go and look at. I have read this answer - the camera (a Canon 40D, in case that matters) mounts fine using gvfs. I can see the photos in Nautilus or in the terminal (in /run/user/username/gvfs/... ) and I can copy a few photos, but not many. Using the terminal or Nautilus, the process hangs until the camera goes to sleep. Digikam fails to copy any at all, as does Rapid Photo Downloader. Shotwell did manage it in the end, but that is very much a workaround for me. I have disabled thumbnail generation by Nautilus. Load average stays about 1 while this is happening, while CPU usage is half idle, half wait (and a little user/sys for other programs). None of the programs at the top of the CPU list in top are related to copying photos. There is not much in the logs - from /var/log/syslog:
      Dec 2 16:20:52 mishtop dbus[945]: [system] Activating service name='org.freedesktop.UDisks' (using servicehelper)
      Dec 2 16:20:52 mishtop dbus[945]: [system] Successfully activated service 'org.freedesktop.UDisks'
      Dec 2 16:21:24 mishtop kernel: [ 2297.180130] usb 2-2: new high-speed USB device number 4 using ehci_hcd
      Dec 2 16:21:24 mishtop kernel: [ 2297.314272] usb 2-2: New USB device found, idVendor=04a9, idProduct=3146
      Dec 2 16:21:24 mishtop kernel: [ 2297.314278] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
      Dec 2 16:21:24 mishtop kernel: [ 2297.314283] usb 2-2: Product: Canon Digital Camera
      Dec 2 16:21:24 mishtop kernel: [ 2297.314287] usb 2-2: Manufacturer: Canon Inc.
      Dec 2 16:21:24 mishtop mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
      Dec 2 16:21:24 mishtop mtp-probe: bus: 2, device: 4 was not an MTP device
    This problem has only started recently and I've had all the hardware for ages. I have also recently upgraded to 12.10, though I'm not sure if the problem started when I upgraded or after the upgrade. I also note this similar question, but it is currently unanswered and I'm providing more detail.

    Read the article

  • Getting the PC speaker to beep

    - by broiyan
    There has been much written over the years on getting the beep sound from Ubuntu releases. Example: fixing the beep. My needs are slightly different in that I do not want to ensure sound card beeps are functioning. Instead, I want PC speaker beeps, the kind produced by the original built-in speaker, because I believe they will produce less CPU load. I have confirmed that my computer has the PC speaker by unplugging the external speakers and shutting down Ubuntu: at some point in the shutdown and restart process a beep is heard even though the external speakers have no power. I have tried the following: In /etc/modprobe.d/blacklist.conf, turn these lines into comments:
      #blacklist snd_pcsp
      #blacklist pcspkr
    In .bashrc:
      /usr/bin/xset b on
      /usr/bin/xset b 100
    Enable in the gnome terminal: Edit > Profile Prefs > General > Terminal Bell. Ensure no "mute" selections in System > Prefs > Sound, various tabs (uncheck them all). Select "Enable window and button sounds" in System > Prefs > Sound > Sound Effects. In gconf-editor > desktop > gnome > sound, select the three sound check boxes. In gconf-editor > apps > metacity > general, select the audible bell check box. Still I get no PC speaker beeps when I send code 7 to the console via my Java program or use echo -e '\a' on the bash command line. What else should I try?
    Update: Since my goal is to minimize load on the CPU, here is a comparison of elapsed times. Each test is for 100,000 iterations. Each variant was performed three times, so three results are presented for each.
      printwriter.format("%c", 7); // 1.3 seconds, 1.5 seconds, 1.5 seconds
      Toolkit.getDefaultToolkit().beep(); // 0.8 seconds, 0.3 seconds, 0.5 seconds
      try { Runtime.getRuntime().exec("beep"); } catch (IOException e) { } // 10.3 seconds, 16.3 seconds, 11.4 seconds
    These runs were done inside Eclipse, so multiply by some value less than 1 for standalone execution. Unfortunately, Toolkit's beep is silent on my computer and so is code 7. The beep utility works but has the most cost.

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet, many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects which get passed through larger classes which only supply ways to modify and use those objects (functions). Large complex difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: It's easy to start thinking of everything in terms of the database Deep down, for me, a computer handling a web request feels more like a functional operation than an object oriented operation when you think about Request Handlers, Threads, Processes, CPU Cores and CPU operations... I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How I can I really start thinking in an object oriented fashion? How can I share such a mentality with my colleagues?

    Read the article

  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend. I increased the trace file upload size to 20MB. We’ve processed over half a million rows of trace data so far and I’m confident this won’t kill the server. I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file. I only added these two. I’m pretty sure average writes isn’t that important. I’m still thinking about average duration. I’m trying to balance showing you what you need with a clean, simple interface. Plus I have a way to see the averages that I describe further down. TraceTune now keeps the last 10 files that you’ve uploaded and will give you some basic details about each file. I think the last change I made is the most interesting. For each SQL statement, I show the history of that statement. You’ll see each trace file where this statement was found. It will list the averages for CPU, reads, writes and duration. This will quickly show you if you’re improving the performance of that query. In my screen shot above you can see that even though the execution counts are very different, the averages are consistent. If you want to see what queries are consuming the most resources on your server, give TraceTune a try.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 seconds, but mine takes more than 60 seconds every time it boots. I also read in some forums that I need to post dmesg output and a bootchart to identify which process is slowing down the boot. As I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how. My laptop specs: Model: ASUS K45VS; RAM: 8MB; CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8; Graphic Card: nVidia GeForce GT 645M; HDD: 750GB; OS: single-boot Ubuntu 12.04LTS. System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64. System.release: Ubuntu 12.04.4 LTS. System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash. My dmesg (note the huge gap in timestamps):
      [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
      [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
      [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
      [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
      [ 802.034422] audit_printk_skb: 69 callbacks suppressed
      [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
      [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    My Bootchart.png: Looking forward to learning to improve both my boot time and my knowledge. Thanks in advance :)

    Read the article

  • How to run a Jupiter script as superuser from lubuntu-rc.xml?

    - by KamilKrzes
    I'm trying to bind a couple of Jupiter functions to my Asus Eee hotkeys so they work as they do on Windows. The problem is that I have to run those as superuser. The scripts work fine from a terminal, so I put this in my ~/.config/openbox/lubuntu-rc.xml:
      <keybind key="XF86Launch6">
        <action name="Execute">
          <command>sudo /usr/lib/jupiter/scripts/cpu-control</command>
        </action>
      </keybind>
    Aaaaaand... it partially works. Some of the files this script is supposed to change were changed and others were not. Some of the changed ones are locked, so sudo is probably working. I have no idea how to debug this because I don't know where to find a log of it. I'm a lil' bit ashamed, but I don't know exactly how sudo works. I don't want to type my password every time I change the CPU frequency or toggle the touchpad, so I don't want to use gksu or another sudo GUI.

    Read the article

  • OpenGL libraries for Ubuntu running on VirtualBox

    - by vboxuser
    I have Ubuntu 10.04 running on VirtualBox. Guest Additions are installed successfully, but Guest Additions do not provide the OpenGL library. What needs to be installed to run an OpenGL demo? Some Google links suggest mesa-utils, but my understanding is that the Mesa utils do not use the vbox drivers to achieve hardware acceleration. Ubuntu 10.04 is running on VirtualBox and not on bare metal. The host comprises a Zx400 CPU with an nvidia graphics driver, but this is not relevant since I would be using the vbox drivers provided by Guest Additions on VirtualBox. In this scenario, how do I get OpenGL libraries on Ubuntu? (In particular, I am looking for a solution other than Mesa, since Mesa has support only for VMware and not for VirtualBox.) How do I get OpenGL libraries which use the vbox drivers for 3D hardware acceleration? Thank you

    Read the article

  • Huge spike in traffic tracked by Google Analytics from Safari browsers

    - by Petra Barus
    My site urbanindo.com recently experienced a huge spike in traffic as tracked by Google Analytics. The spike sometimes shows up on the same few pages. This is odd because I rarely experienced that much traffic before. Some pages can have hundreds of visitors at the same time, but when I read the webserver log, those pages show up in only one or two entries, not hundreds like GA showed. The only thing the entries have in common is that they use a similar user agent: Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3 and Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3. When I opened Google Analytics > Audience > Technology > Browsers & OS and plotted the chart by browser, I saw that the huge spike came from Safari. These spikes started at the beginning of this August, which happens to be when I started using multiple webservers behind a load balancer (although I'm pretty sure those two are not relevant). Is there something wrong with my GA configuration?

    Read the article

  • Virtualbox install 12.04 guest: "pae not present"

    - by Peter.O
    I get this message while trying to install Ubuntu 12.04 as a guest in VirtualBox 4.1.18, on an Ubuntu 10.04 host: This kernel requires the following feature not present on the CPU: pae. Some host specs: the host's kernel is Linux 2.6.32-41-generic-pae GNU/Linux. lscpu (host): Architecture: i686, CPU op-mode(s): 32-bit, 64-bit. grep --color=always -i PAE /proc/cpuinfo does show pae in its output. The 12.04 ISO used is ubuntu-12.04.0-desktop-i386.iso. As a comparison/check, I downloaded and installed Linux Mint 13 Cinnamon to the same host in exactly the same VM (I just changed the .iso image). It worked fine. Its ISO is linuxmint-13-cinnamon-dvd-32bit.iso. It seems (to me) that I have pae... what is going on here? Update: I had assumed that Linux Mint also required pae (being Ubuntu based), but I've just run grep --color=always -i PAE /proc/cpuinfo in the Mint VM. It showed no output. So it seems the issue may lie with VirtualBox. If that is the case, how can I get VirtualBox into pae mode?

    Read the article
