Search Results

Search found 854 results on 35 pages for 'cores'.


  • HIGH CPU USAGE FROM SYSTEM [on hold]

    - by user195641
    Can anyone explain this? Is it bad for my CPU? Can anyone help me? I don't know what to do! There is a high delay when I open a new task, even though it's an Intel Core Duo Extreme 3.0 GHz. Thanks. There are about 100 items monitored with the agent. They are also monitored on other identical hosts where the Zabbix agent does not consume so much CPU. Agents send collected data to a Zabbix proxy. The agent configuration is default. The host CPU has 8 cores (2.4 GHz). The smallest time value for monitored items is 60 seconds.

    Read the article

  • What combination of soft to select? (your advice/opinion) [on hold]

    - by Flyer
    I'm thinking of upgrading my server software along with the OS. As of now, my VPS is running on Debian 6 with nginx (1.2.4) and apache (2.2.16). My VPS specs are 1 GB RAM and 2 cores of an Intel(R) Xeon(R) CPU E5520 @ 2.27GHz. Now, here is the question. Which combo should I run? (1) nginx + apache 2.4.x + PHP-FPM 5.5.x; (2) nginx + apache 2.4.x + mod_php 5.5.x; (3) apache 2.4.x + mod_php 5.5.x; (4) apache 2.4.x + PHP-FPM 5.5.x; (5) nginx + PHP-FPM 5.5.x; (6) nginx + mod_php 5.5.x. I would really like some advice/opinion from people who are more experienced than me with these things. It's nothing big, around 100-200k pageviews per month. I can also provide some screenshots of munin stats if needed.

    Read the article

  • Citrix on ESX 4 U1 - Slow login times

    - by thomps01
    I'm sure they'd be just as slow, if not slower, using ESX 3, but I'm looking for some assistance. On a physical Citrix server, logins take 1-4 seconds; on the virtual one, 16-23 seconds. I'm looking for performance enhancements I can make to my VMs to try and reduce the login wait times. The hardware is fine (HP BL685, 24 cores, 64 GB RAM) and there's nothing pushing it yet; the network is 10Gb. I'm planning to test the configuration with VMXNET3 tomorrow, but does anyone have a list of best practices I can use when testing?

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking a JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand it doesn't seem to be mature yet, and I'm not sure that the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be a hardware one), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while because most of the operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7 GB of RAM and a single 2-core "modern CPU" with speed around 2.5 GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64-bit, 7.5 GB RAM, 2 cores at 1 GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

    Read the article

  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but is rarely discussed or taught in detail in a formal manner. Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily. Modern development requires the use of parallelism. There are no longer any excuses for us as developers. Learning to write parallel software is challenging. It requires more than reading that one chapter on parallelism in our programming language book of choice…

    Today's systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous-generation systems. Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs. Our CPU frequencies no longer double on a regular basis, but Moore's Law is still holding strong. Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system.

    This changes how we should think about software. In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel. As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools.

    For .NET developers, this is an exciting time for parallel programming. Version 4 of the .NET Framework is adding the Task Parallel Library. This has been back-ported to .NET 3.5sp1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology. For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.
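    As a minimal illustration of the kind of data-parallel loop the Task Parallel Library enables (a sketch assuming .NET 4, or .NET 3.5 SP1 with the Reactive Extensions back-port; the workload inside the loop is purely illustrative, not from the post):

        using System;
        using System.Threading.Tasks;

        class ParallelLoopSketch
        {
            static void Main()
            {
                double[] results = new double[10000000];

                // Parallel.For partitions the index range across the available cores,
                // so throughput scales with the hardware instead of a single CPU.
                Parallel.For(0, results.Length, i =>
                {
                    results[i] = Math.Sqrt(i) * Math.Sin(i);
                });

                Console.WriteLine("Processed {0} items.", results.Length);
            }
        }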

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common-sense standpoint. Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );
        }

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here: by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We're using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how "deep" in the "tree" we currently are, and if we are below some number of levels, stop parallelizing. This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely, only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total number of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better. On average, I get the following timings:

        Framework via Array.Sort: 7.3 seconds
        Serial Quicksort Implementation: 9.3 seconds
        Naive Parallel Implementation: 14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.
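    As a sketch of how the public entry point might seed that depth limit, following the suggestion of starting with Environment.ProcessorCount (the exact signature is an assumption, since the post only shows the modified recursive call):

        public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
        {
            // Assumed wrapper: seed maxDepth with the core count, so only the first
            // few recursion levels spawn parallel tasks and the rest run serially.
            QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
        }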

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost. Engineered System Simplifies Big Data for the Enterprise.

    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling.

    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes: support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster; Oracle NoSQL Database Community Edition 2.0, the latest version, which brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support; the Oracle Enterprise Manager plug-in for Big Data Appliance, which complements Cloudera Manager to enable users to more easily manage a Hadoop cluster; updated distributions of Oracle Linux and the Oracle Java Development Kit; and an updated distribution of open source R, optimized to work with high-performance multi-threaded math libraries.

    Read more: Data sheet: Oracle Big Data Appliance X3-2 | Oracle Big Data Appliance: Datacenter Network Integration | Big Data and Natural Language: Extracting Insight From Text | Thomson Reuters Discusses Oracle's Big Data Platform

    Connectors Integrate Hadoop with Oracle Big Data Ecosystem. Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes: Oracle SQL Connector for Hadoop Distributed File System, for high-performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables and now supported within the Oracle Data Integrator Application Adapter for Hadoop; and transparent access to the Hive Query Language from R and the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.

    Read more: Data sheet: Oracle Big Data Connectors | High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • Gnome 3 freezes on logon on samsung RV 509

    - by Noufal
    I have a Samsung NP-RV509 A0FIN and I tried to install GNU/Linux with GNOME 3.2 on it. I tried Fedora 16, Ubuntu 11.10 and Linux Mint 12 RC, but with no success: all of these freeze upon login into GNOME Shell. I think it is a problem with the graphics driver, so I tried the xorg-edgers PPA on my last installation, i.e. Linux Mint. I also tried various Intel graphics packages listed in the Synaptic package manager, but no success again. My device configuration is as follows (obtained from Windows 7):

        Processor: Intel(R) Pentium(R) CPU P6200 @ 2.13GHz (subscore 5.6, base score 4.6)
        Memory (RAM): 4.00 GB (subscore 7.2)
        Graphics: Intel(R) HD Graphics (subscore 4.6)
        Gaming graphics: 1562 MB total available graphics memory (subscore 5.2)
        Primary hard disk: 12 GB free (50 GB total) (subscore 5.9)
        Operating system: Windows 7 Ultimate

        System: Manufacturer SAMSUNG ELECTRONICS CO., LTD.; model RV409/RV509/RV709; 4.00 GB RAM; 32-bit operating system; 2 processor cores; 64-bit capable: yes
        Storage: 418 GB total; C: 12 GB free (50 GB total); D: CD/DVD; E: 526 MB free (191 GB total); F: 101 GB free (177 GB total)
        Graphics: Intel(R) HD Graphics; 1562 MB total available graphics memory (64 MB dedicated graphics memory, 0 MB dedicated system memory, 1498 MB shared system memory); driver version 8.15.10.2202; primary monitor resolution 1366x768; DirectX 10
        Network: Realtek PCIe GBE Family Controller; Broadcom 802.11n Network Adapter; Microsoft Virtual WiFi Miniport Adapter
        Notes: the gaming graphics score is based on the primary graphics adapter; if this system has linked or multiple graphics adapters, some software applications may see additional performance benefits.

    Any help is appreciated, and thanks in advance.

    Read the article

  • What are the reasons why Clojure is hyped and PicoLisp widely ignored?

    - by Thorsten
    I recently discovered the Lisp family of programming languages, and it's definitely one of the more diverse and widespread families in the programming language world. I like Elisp because that most wonderful tool Emacs is an Elisp interpreter. But I was looking for one more Lisp dialect to learn and thought Clojure would be the obvious choice nowadays - until I discovered the well-hidden gem PicoLisp. That must be the most intelligent programming environment I have ever seen, like taking the best ideas from Lisp and Smalltalk and adding performance and practicability - and the beauty of parsimony. There is even an Emacs mode for it. PicoLisp must be the productivity world champion when it comes to building business applications with a database and web client - and that's a very common task. It seems that throwing more and more hardware cores at your PicoLisp application makes it faster and faster, and the database is very performant anyway. However, reactions to PicoLisp in general mailing lists etc. are almost hostile (envy?), and there is absolutely no hype and very little publicity (i.e. not one book published). Are there real, justified reasons for this (except the vast amount of Java libs accessible from Clojure, I know that one)? Or is the mainstream getting it wrong again (see C vs Lisp, Java vs Smalltalk, Windows vs Linux) and will it come to the conclusion 10 years later that the JVM was good as an in-between solution, but a really fast Lisp interpreter on multicore machines is much better and allows much cleaner concepts? PS 1: Please note: I'm not interested in Scheme or any Common Lisp dialect, although they might be fine languages. It's just PicoLisp vs Clojure. PS 2: Another thing I like about PicoLisp is its similarity to Elisp in certain aspects (both are descendants of MacLisp?) - it's easier to learn two similar languages. There is so much "dynamic binding bashing" on the web, but two of the most appealing Lisp applications use it.

    Read the article

  • Not Playing Nice Together

    - by David Douglass
    One of the things I've noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives. It's not a problem if you keep your cores busy with compute-intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64-bit OS) is virtualization. But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck. A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous. Two TB of solid state storage will set you back around $7,000! A spinning 2 TB drive is only $150. I see a couple of solutions for this. One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive. Another solution is multiple spinning drives. Instead of a single 2 TB drive, get four 500 GB drives. In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive. You'll need to be smart about what drive you place things on so that the load is spread evenly. Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space. All pricing based on Microcenter as of March 14, 2010.

    Read the article

  • Intel graphics driver installer, now the CPU fan is rarely quiet

    - by Space monkey
    I have an Optimus chipset: Intel HD 4000 (i7-3635QM CPU) and a GeForce 640M. I don't care about the NVIDIA card, so I didn't try to install any proprietary drivers for it. So: I was having a choppy, high-CPU experience with gnome-shell on Ubuntu 14.04. It only happened when I tried moving a window around quickly. I used the Intel graphics installer hoping that it would fix the problem. It did fix the problem; now there is no choppiness or high CPU when I move windows around. However, there is a new problem: the fan is rarely quiet, and doing barely anything at all will cause the fan to go into loud mode quickly. That happens despite the CPU usage being at just around 4%. This wasn't the case before installing the Intel drivers; it would normally only do that if, for example, I'm installing packages or doing something that puts some stress on the CPU. I set all CPU cores to "powersave" using cpufreq-set, but nothing changed. Also, on Windows the fans are really quiet when I'm in powersave mode; I believe they completely shut off most of the time. I remember the installer giving me a report at the end as to which packages it installed. Unfortunately, I didn't save the report and I don't know where it would have saved it if it did. Any ideas or similar experiences?

    Read the article

  • Dryad and DryadLINQ from MSR

    - by Daniel Moth
    Microsoft Research (MSR) researches technologies and incubates projects which many times result in technology that looks like a ready-to-use product (but it is important to understand that these are not the same as products built by the various… actual product teams here at Microsoft). A very popular MSR project has been DryadLINQ, which itself builds on Dryad. To learn more, follow the project pages I just linked to; I also recommend this 1-hour Channel 9 video. If you only have 3 minutes, watch this great elevator pitch instead. You can also stay tuned on the official blog, which includes a post that refers to internal adoption e.g. by Bing, a quick DryadLINQ code example, and some history on how DryadLINQ generalizes the MapReduce pattern and makes it accessible to regular programmers (see this post and that post). Essentially, the DryadLINQ framework (building on the Dryad runtime) allows developers to re-use their LINQ skills for creating/generating programs that process large multi-gigabyte/terabyte datasets across 100s-1000s of machines. One way to think about it is that just as Parallel LINQ allows LINQ developers to seamlessly use multiple cores from a single process on a single machine, DryadLINQ allows LINQ developers to seamlessly use multiple machines for their data parallel algorithms. In the former scenario the motivation was speed of execution; in the latter it is speed of execution AND processing large datasets that simply don't fit on a single machine. Whenever I hear about execution of parallel code on multiple machines on the Microsoft platform, I immediately think of Windows HPC Server. Indeed Dryad and DryadLINQ were made available for Windows HPC Server, and I encourage you to watch the PDC session on this topic: Data-Intensive Computing on Windows HPC Server with the DryadLINQ Framework. Watch this space… Comments about this post welcome at the original blog.
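    To make the single-machine analogue concrete, here is a minimal PLINQ sketch (assuming .NET 4; the query itself is illustrative and not taken from the post). DryadLINQ applies the same LINQ programming model across a cluster of machines rather than across the cores of one machine:

        using System;
        using System.Linq;

        class PlinqSketch
        {
            static void Main()
            {
                // AsParallel() lets an ordinary LINQ query fan out across all cores
                // of a single machine; DryadLINQ extends this model to many machines.
                long sumOfEvenSquares = Enumerable.Range(0, 1000000)
                    .AsParallel()
                    .Where(n => n % 2 == 0)
                    .Select(n => (long)n * n)
                    .Sum();

                Console.WriteLine(sumOfEvenSquares);
            }
        }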

    Read the article

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation; what would Linux (Ubuntu) do if I execute mpirun -np 9 XXX?

    Q1. Will all 9 processes run together immediately, or will they run 4 after 4?

    Q2. I suppose that using 9 is not good because of the leftover 1; will it confuse the computer at all, or will the "head" of the computer decide which core among the 4 cores will be used, or will one be picked randomly? Who decides which core to call?

    Q3. If I feel my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to do mpirun -np 8 XXX, or even mpirun -np 12 XXX?

    Q4. Who decides all of this efficiency optimization: Ubuntu, or Linux, or the motherboard, or the CPU?

    Your enlightenment would be really appreciated.

    Read the article

  • Running 64 bit Ubuntu distribution from 32 bit Ubuntu

    - by csg
    Related to the question "How do I run qemu with 64bit processor on a 64bit machine?", I'm trying to run the latest Ubuntu 11.10 64-bit distribution under Ubuntu 11.04 32-bit using qemu on a Core 2 Duo (64-bit CPU) machine, using the following qemu parameters, with no success. The error under qemu is: "This kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." Isn't qemu supposed to emulate a 64-bit machine? I think I'm missing something, but I can't figure it out.

        qemu -cpu (kvm64|core2duo|qemu64) -boot d -cdrom ubuntu-11.10-desktop-amd64.iso
        qemu-system-x86_64 -boot d -cdrom ubuntu-11.10-desktop-amd64.iso

    Here is my uname -m: i686. Here is my /proc/cpuinfo:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26GHz
        stepping        : 6
        cpu MHz         : 800.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 2
        core id         : 1
        cpu cores       : 2
        apicid          : 1
        initial apicid  : 1
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 4522.45
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Read the article

  • Installation failed with a Blank Screen

    - by Bear
    Blank screen, and no fix I've tried works... hardware conflict maybe? First I tried the AMD64 Ubuntu desktop. I got into the boot screen; however, selecting install returns 1-2 seconds of code, then a blank screen (not idle). Then I tried the alternate install. It worked, though some additional software installs failed. On the first boot, I see the BIOS load and then a black/blank screen. No flicker or cursor... the screen turns off. (I am also having issues installing any 64-bit OS: legit Win 7 64 Ultimate, WinXP 64 Pro ISO, also WinXP 32 legit; the WinXP 64 Pro ISO returns a BSOD on install. All Win 7 builds return a CD/DVD driver error. The only OS that installs is the beta build 7000 of Win 7.) Please help! I am using:

        BIOS: Build E7696AMS V1.5
        HDD: Hitachi HDP725050GLA, 500.00 GB (SATA)
        CDD: Optiarc DVD RW AD-72 (SATA)
        MOBO: MSI A75MA-G55 AMD A Series Motherboard - Micro ATX, Socket FM1, AMD A75 Chipset, 1866MHz DDR3 (O.C.), SATA 6.0 Gb/s, 8-CH Audio, Gigabit LAN, SuperSpeed USB 3.0, AMD Dual Graphics Ready
        CPU: AMD A6-Series AD3650WNGXBOX Quad-Core A6-3650 APU - 4MB L2 Cache, 2.6GHz, Socket FM1, Radeon HD 6530D (320 Cores), Dual Graphics Ready, DirectX 11
        RAM: Corsair CMZ16GX3M4A1600C9B Vengeance Desktop Memory Kit - 16GB (4x 4GB), PC3-12800, DDR3-1600MHz, 9-9-9-24 CAS Latency, Intel XMP Ready, Unbuffered

    Read the article

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from this range of IPs: 157.55.98.0/157.55.100.0. I googled around and came to find out that it is the MSN web crawler. So essentially the MS web crawler overloaded our site, causing it not to respond, even though in our robots.txt file we have the following: Crawl-delay: 10. So what I did was just ban the IP range in iptables. But what I am not sure about is how to follow up. I can't find anywhere to contact Bing about this issue, I don't want to keep those IPs blocked because I am sure we will eventually get de-indexed from Bing, and it doesn't really seem like this has happened to anyone else before. Any suggestions?

    Update - my server / web stats: Our web server runs Nginx, Rails 3, and 5 Unicorn workers. We have 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month, and this never comes close to bringing down or slowing down our web server. Taking a look at the logs, we were receiving anywhere from 5 up to 40 requests per second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or Super User. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently HAVE: an HP EliteBook 1540, quad-core, 8 GB memory, 500 GB 7200 RPM HD, eSATA port. Decent machine; it should work just fine. Windows 7 64-bit host OS, which also acts as my day-to-day basic-stuff (email, Word docs, etc...) OS. VMware Desktop, with a Windows 7 64-bit guest OS that has all my .NET dev tools, frameworks, etc. loaded on it; it's configured to use 2 cores and up to 6 GB of memory. I figure that the dev environment will need more than email, Word, etc... So, this seemed like a good option to me, but I find that with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack. Things only got worse (could be my eSATA enclosure). So, for all of that, I have basically two questions in one. Has anyone used this sort of setup, and are there any gotchas around the VMware configuration or anything else I may have missed here that you can point me to? Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert. If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • I have ESXi 5.0 installed and when I am installing Ubuntu Server 12.04 LTS it gives an error saying grub installation failed

    - by Rishee
    I have ESXi 5.0 installed, and when I am installing Ubuntu Server 12.04 LTS (32-bit) it gives an error saying the grub installation failed. Please check the below screenshot of the error. I have other Ubuntu servers running fine on this ESXi server, so I don't think the problem is with ESXi. I have 32 GB of RAM spare on this ESXi host and have given 2 GB of RAM to this 12.04 LTS VM, along with 2 cores of processor. I have tried supplying a different ISO image to this VM, as I thought the first image that I downloaded had errors, but that's definitely not the case, as all 3 different ISO images of Ubuntu Server 12.04 LTS (32-bit) that I downloaded can't all be corrupt! Just to make sure the image does not have a problem, I used it for a test install on a standalone system; it works fine there. This is a production ESXi server which I can't play with; however, I can play with the Ubuntu Server 12.04 LTS (32-bit) VM that we have created on that ESXi host. I need help on this as soon as possible; the go-live date of this server is really close. (This question is already on Super User and Server Fault.)

    Read the article

  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran, in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now gone towards a single workstation with 500ish GPUs in it. However, where to go next with the codebase, so that:

        - Other people can maintain it over the next 10-year cycle
        - He gets faster at tweaking the software
        - It can run on different infrastructures without recompiles

    After some research from me (this is a super interesting area), some options are:

        - Use Python and CUDA from Nvidia
        - Rewrite in a functional language, for example F# or Haskell
        - Go cloud-based and use something like Hadoop and Java
        - Learn C

    What has been your experience with this? What should my friend be looking at to modernize his codebase?

    UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, and especially the tooling, and can't imagine going back to older languages!!). I liked the suggestion of keeping the pure number crunching in Fortran but wrapping it in something newer. Perhaps Python, as that seems to be getting a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging and a guy who has written a Fortran wrapper for CUDA: "Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?".

    Read the article

  • Gemalto Mobile Payment Platform on Oracle T4

    - by user938730
    Gemalto is the world leader in digital security, at the heart of our rapidly evolving digital society. Billions of people worldwide increasingly want the freedom to communicate, travel, shop, bank, entertain and work - anytime, everywhere - in ways that are convenient, enjoyable and secure. Gemalto delivers on their expanding needs for personal mobile services, payment security, identity protection, authenticated online services, cloud computing access, eHealthcare and eGovernment services, modern transportation solutions, and M2M communication. Gemalto's solutions for Mobile Financial Services are deployed at over 70 customers worldwide, transforming the way people shop, pay and manage personal finance. In developing markets, Gemalto Mobile Money solutions are helping to remove the barriers to financial access for the unbanked and under-served, by turning any mobile device into a payment and banking instrument.

    In recent benchmarks by our Oracle ISVe Labs, the Gemalto Mobile Payment Platform demonstrated outstanding performance and scalability using the new T4-based Oracle Sun machines running Solaris 11. Using a clustered environment on a mid-range 2x2.85GHz T4-2 server (16 cores total, 128 GB memory) for the application tier, and an additional dedicated Intel-based (2x3.2GHz Intel Xeon X4200) Oracle database server, the platform processed more than 1,000 transactions per second, limited only by database capacity; higher performance was easily achievable with a stronger database server. Near-linear scalability was observed by increasing the number of application software components in the cluster. These results show an increase of nearly 300% in processing power and capacity on the new T4-based servers relative to the previous generation of Oracle Sun CMT servers, and for a comparable price.

    In the fast-evolving mobile payment market, it is crucial that the underlying technology seamlessly supports service providers as the customer base ramps up, use cases evolve and new services are launched. These benchmark results demonstrate that the Gemalto Mobile Payment Platform is designed to meet the needs of any deployment scale, whether targeting 5 or 100 million subscribers. Oracle Solaris 11 DTrace technology helped to pinpoint performance issues and tune the system accordingly to achieve optimal use of computation resources.

    Read the article

  • cpufreq not available 11.10

    - by code shogan
    On 11.04 I had cpufreq working on my "AMD Turion(tm) 64 X2 Mobile Technology TL-50 stepping 02" processors; however, now on Oneiric cpufreq won't load. The core temperature of my CPU is normally 40 °C, but lately it's cooking away at 75-80+ °C and the fan is always extremely loud, even when CPU usage is at 0.4%. After running dmesg | grep -i cpu I got:

        Brought up 2 CPUs
        Switch to broadcast mode on CPU1
        Switch to broadcast mode on CPU0
        Switched to NOHz mode on CPU #1
        Switched to NOHz mode on CPU #0
        ACPI: acpi_idle registered with cpuidle
        cpufreq-nforce2: No nForce2 chipset.
        cpuidle: using governor ladder
        cpuidle: using governor menu
        powernow-k8: Found 1 AMD Turion(tm) 64 X2 Mobile Technology TL-50 (2 cpu cores) (version 2.20.00)

    I see something about governors and ladder there; does this mean the OS is able to scale my CPUs or not? If so, is there a way I can determine whether it's working? I saw that for other users the wrong module had been loaded, and by disabling it they were able to get cpufreq loaded. How can I tell what scaling module is loaded? Stats: Ubuntu Oneiric 32-bit, Dell Inspiron 1501.

    Read the article

  • Radeon HD 6850 VMware 3D Support?

    - by Matt
    I'm a new Ubuntu user (new to all of linux actually). I've installed Ubuntu 11.10 x64 and have been enjoying it, but I wanted to see how it would perform using VMware for small time gaming since I find dual booting too much of a nuisance to even bother using Ubuntu at all (sorry!). I have an Asus EAH6850 DirectCU Radeon HD 6850 graphics card and I've installed the additional ATI/AMD proprietary FGLRX graphics driver, but when I open a Windows XP 32bit machine I installed through VMware, I get this message: "The GPU driver currently installed on this host may cause issues with VMware products. If you notice any issues please disable the 3D support in the affected virtual machines." I still have 3D capabilities in the VM but they are very very choppy even running the DX tests (the spinning cube). I've seen people on youtube and other forums saying that since the new 3D acceleration in VMware 8 gaming is very possible through VMs (and I've seen them running the DX tests with the spinning cube very smoothly). I'm wondering if my graphics card isn't fully supported or if I have installed it wrong. Also when I check system info (on the host Ubuntu machine) it says "Graphics VESA:BARTS" Should my Radeon HD 6850 be showing up there? The rest of my basic system info i5 2500k 8GB 1600MHz memory Guest is running with access to all 4 cores of processor and 3gb memory assigned.

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of

        $ cat /proc/cpuinfo | grep -i "hz"

    gives me the following pair, repeated for each of the eight logical cores:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000

    If useful, the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: OK, I checked, and yes, the output does vary depending on the load; I reach 2.2 GHz when needed. But my following problem remains: I was checking my CPU clocking because I experienced poor performance when playing 720p video files with VLC or mplayer on Ubuntu while on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows. That made me think it isn't coming from a BIOS option; plus, every option in the BIOS regarding the CPU is turned ON.

    Read the article

  • Producer-consumer pattern with consumer restrictions

    - by Dan
    I have a processing problem that I am thinking is a classic producer-consumer problem with the two added wrinkles that there may be a variable number of producers and there is the restriction that no more than one item per producer may be consumed at any one time. I will generally have 50-100 producers and as many consumers as CPU cores on the server. I want to maximize the throughput of the consumers while ensuring that there are never more than one work item in process from any single producer. This is more complicated than the classic producer-consumer problem which I think assumes a single producer and no restriction on which work items may be in progress at any one time. I think the problem of multiple producers is relatively easily solved by enqueuing all work items on a single work queue protected by a critical section. I think the restriction on simultaneously processing work items from any single producer is harder because I cannot think of any solution that does not require each consumer to notify some kind of work dispatcher that a particular work item has been completed so as to lift the restriction on work items from that producer. In other words, if Consumer2 has just completed WorkItem42 from Producer53, there needs to be some kind of callback or notification from Consumer2 to a work dispatcher to allow the work dispatcher to release the next work item from Producer53 to the next available consumer (whether Consumer2 or otherwise). Am I overlooking something simple here? Is there a known pattern for this problem? I would appreciate any pointers.
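    One possible shape for such a dispatcher, sketched in C# on .NET 4 (the type and member names are illustrative, not from an existing library): the consumer's completion is exactly the callback described above, and it releases that producer's next item. The thread pool plays the role of the fixed set of consumers, so roughly one work item per core runs at a time.

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Hypothetical dispatcher: never lets more than one work item from the
        // same producer be in progress at any one time.
        class PerProducerDispatcher
        {
            private readonly object sync = new object();
            // Pending items in arrival order: (producer id, work to run).
            private readonly LinkedList<Tuple<int, Action>> pending =
                new LinkedList<Tuple<int, Action>>();
            // Producers that currently have an item being consumed.
            private readonly HashSet<int> busy = new HashSet<int>();

            public void Enqueue(int producerId, Action work)
            {
                lock (sync) { pending.AddLast(Tuple.Create(producerId, work)); }
                DispatchEligible();
            }

            private void DispatchEligible()
            {
                while (true)
                {
                    Tuple<int, Action> item = null;
                    lock (sync)
                    {
                        // Pick the oldest pending item whose producer is not busy.
                        for (var node = pending.First; node != null; node = node.Next)
                        {
                            if (!busy.Contains(node.Value.Item1))
                            {
                                item = node.Value;
                                pending.Remove(node);
                                busy.Add(item.Item1);
                                break;
                            }
                        }
                    }
                    if (item == null) return; // nothing eligible right now

                    int producerId = item.Item1;
                    Action work = item.Item2;
                    Task.Factory.StartNew(() =>
                    {
                        try { work(); }
                        finally
                        {
                            // Completion callback: free the producer, then see whether
                            // another of its items (or anyone else's) can be dispatched.
                            lock (sync) { busy.Remove(producerId); }
                            DispatchEligible();
                        }
                    });
                }
            }
        }

    Usage would be as simple as dispatcher.Enqueue(53, () => ProcessWorkItem42()) (where ProcessWorkItem42 stands in for whatever the consumer does); each producer tags its items with its own id, and fairness between producers falls out of scanning the pending list in arrival order.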

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software:

        - Functional programming languages, e.g. Haskell, Scala, etc.
        - The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages.

    My questions:

        - What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored, or are there well-established practices already?
        - Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices?
        - How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multi-core processors? For example, has anyone implemented a desktop application using C++ / Theron or Java / Akka? Was there a boost in performance on a multi-core processor due to the higher parallelism?

    Read the article
