Search Results

Search found 14861 results on 595 pages for 'high speed computing'.

Page 5 of 595

  • Does lshw list the "factory" speed of a memory module or the effective speed and how to find the former?

    - by Panayiotis Karabassis
    I hope I phrased this correctly. lshw gives:

        description: DIMM Synchronous 400 MHz (2.5 ns)
        product: M378B5773CH0-CH9
        vendor: Samsung
        physical id: 0
        slot: DIMM0
        size: 2GiB
        width: 64 bits
        clock: 400MHz (2.5ns)

    And indeed the memory speed is set to 800 MHz in the BIOS, which I think makes sense since it is a double data rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333 MHz, not 800 MHz. And this seems to be confirmed in the BIOS, where if I select Auto for the memory bus speed, 1333 MHz is selected "based on SPD settings". However, in the latter case the computer does not boot, i.e. the kernel panics, complaining that something attempted to kill the Idle process. So I am beginning to suspect that I have been given defective memory, that the technician who installed it saw this, and that he lowered the bus speed. Is this a possibility? (A way to cross-check what the module's SPD reports is sketched after this item.)

    Read the article
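
    A minimal sketch of the cross-check mentioned above, assuming a Linux box with dmidecode and i2c-tools (decode-dimms) installed; it reads the module's own SPD data independently of whatever clock the BIOS has currently chosen:

        # DMI view: part number, configured speed, and the maximum speed the module advertises
        sudo dmidecode --type memory | grep -Ei 'part number|speed'

        # SPD view: load the EEPROM driver, then decode the module's timing table directly
        sudo modprobe eeprom
        sudo decode-dimms | grep -Ei 'part number|maximum module speed'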

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on... these questions keep coming up, and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.

    Elasticity
    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security...). It also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge: when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you make your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider and a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough; the application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from this description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability
    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact, it is so complex that many companies run yearly tests to validate their failover procedures. To a certain degree, some IaaS providers can assist with complex disaster recovery planning and with setting up data centers that can achieve successful failover. However, the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.

    Cloud Providers
    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which lets you scale the database tier seamlessly and automatically). Other cloud providers offer features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customer's shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on-premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone
    Note, however, that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional hosting provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost
    With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to your first, while keeping your operating costs directly proportional to the number of customers you have. Making your application elastic and highly available across all its tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • Relationship between RAM & processor speed

    - by deostroll
    RAM is just used for temporary storage, but since this storage is the CPU's working memory (RAM), it is fast, and programs can easily read/write values into it. I've noticed that the more RAM a machine has, the less time it takes for an application to load/execute. But doesn't this actually depend on the processor speed (the MHz or GHz values)? I am wondering what the science/relationship is between processor speed and RAM.

    Read the article

  • Low USB transfer speed

    - by dhasu
    Hello, I am using Windows Vista and my motherboard is an Intel D102GGC2. The transfer speed through USB is very slow on my PC, just 2-3 MB/s. I also use Ubuntu 9.10, and surprisingly the USB transfer speed there is as expected, i.e. 16-17 MB/s. So every time I have to transfer huge files I have to reboot into Ubuntu. What might be the problem? I checked the Intel website for updated drivers for my motherboard, but they don't support it any more. Thanks.

    Read the article

  • VPS Hosting Transfer Speed Question

    - by user31983
    Hi. I bought VPS hosting (bandwidth 500G) in the US, and I connect to it from China. How can I improve the transfer speed when I connect to my VPS using the WinSCP tool? I find the speed is very slow, just several hundred B/s at times. Also, what does "Bandwidth 500G" mean? Does it mean the traffic is limited? Thank you. (A way to test the raw link speed, separately from WinSCP, is sketched after this item.)

    Read the article
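
    A minimal sketch, assuming iperf is installed on both the VPS (Linux) and the local Windows machine, for measuring the raw TCP throughput of the link; if iperf reports far more than WinSCP achieves, the bottleneck is likely SFTP/SCP overhead or per-connection shaping rather than the line itself. The "500G" figure normally refers to a monthly traffic allowance, not a speed.

        # On the VPS (Linux), start an iperf server:
        iperf -s

        # On the local machine (iperf is also available for Windows), run a 30-second
        # test against the VPS; 203.0.113.10 is a placeholder for the VPS address:
        iperf -c 203.0.113.10 -t 30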

  • How to throttle bandwidth speed from VPS

    - by Burning the Codeigniter
    I was wondering if you can throttle/boost the download speed from my VPS to me, because every now and then I download a database which is quite big, typically 5-7 GB. My side is quite slow (typically I have max speeds of 100 Kb/s), so I was wondering if it's possible to shape the bandwidth from my server to me so that the download becomes much faster. If this is possible, how can I do this? My server is running Ubuntu 11.04 with a 100 Mbps line. (A sketch of server-side traffic shaping with tc is shown after this item.)

    Read the article
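
    For the title question of limiting (shaping) outbound bandwidth on the VPS itself, a minimal sketch using tc from the iproute2 package on Ubuntu; eth0 and the 10mbit figure are assumptions to adjust for the actual interface and the desired cap:

        # Cap all egress traffic on eth0 to roughly 10 Mbit/s using an HTB root qdisc
        sudo tc qdisc add dev eth0 root handle 1: htb default 10
        sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit

        # Inspect the current setup, and remove it again when done
        tc -s qdisc show dev eth0
        sudo tc qdisc del dev eth0 root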

  • Actual High Speed USB flash drive

    - by CSkau
    I'm looking to upgrade my EEE 1000H by possibly replacing the HDD with simple (internal) USB-connected storage. The problem I'm having now is that I can't seem to find any actual high speed USB sticks. They all proclaim high speeds, but usually turn out to be ~30 MB/s - much lower than the 60 MB/s (480 Mbit/s / 8) I understand USB 2.0 is rated at, no? Can anyone enlighten me as to why no USB sticks seem to go past that low bar, or alternatively point me in the direction of some actual high speed USB sticks? Any help is greatly appreciated :) Cheers! (A quick way to benchmark a stick's real sequential speed is sketched after this item.)

    Read the article
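
    A minimal sketch for measuring what a given stick actually delivers, assuming a Linux machine and a stick mounted at /mnt/usb (a placeholder path). Note that 60 MB/s is the raw USB 2.0 signalling rate; protocol overhead alone keeps real-world bulk transfers closer to 35-40 MB/s even before the flash controller becomes the limit.

        # Sequential write test: bypass the page cache so the figure reflects the device
        dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=512 oflag=direct

        # Sequential read test: drop caches first, then read the file back
        sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
        dd if=/mnt/usb/testfile of=/dev/null bs=1M iflag=direct

        rm /mnt/usb/testfile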

  • Grid computing projects similar to NGrid (thread based)

    - by DivdeAndConquer
    Hello there, first time poster. This is a great place for reading about programming problems. I've been looking at some grid computing projects for .Net/Mono and stumbled upon NGrid. NGrid seems really appealing for grid computing because you simply pass threads to it and there is very little modification you have to make to your code. However, I see that NGrid (http://ngrid.sourceforge.net/?page=overview) is still at version 0.7 and hasn't been updated since May 2008. So, I'm wondering if there are any other grid computing projects that use a similar thread-passing architecture and if anyone has had success using NGrid.

    Read the article

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers, which resulted in a strange issue. Even though the assigned speed was 5 Mbps, when tests were done by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2 Mbps. But BitTorrent downloads reached 5 Mbps, and even file downloads from the ISP's own servers were fine. So, at the ISP, our link was attached directly to their edge router. After this, file downloads from high-bandwidth servers, like Google and MS, reached the 5 Mbps limit. Sometimes the speed would fall below 2 Mbps and then suddenly go back up to the 5 Mbps limit (this keeps happening within any single file download). But other downloads, like the Ubuntu apt repositories, still struggle to go above 2 Mbps. The engineers at the ISP have not been able to sort out the issue. After they moved us to their edge router, instead of giving us 8 public IPs they gave us just 4. When we enquired about it, they told us that giving more IPs would result in ARP overload at their edge router, but somehow I was able to convince them to give us the 8 IPs we wanted. Still, the file download issue has remained. What might be the reason for files from different locations downloading at different speeds, and with such heavy fluctuation in speed? I have downloaded files from the same URLs over a connection belonging to another, smaller ISP, and the speeds were fine and reached the full 5 Mbps limit. (A simple way to compare per-source throughput and routes is sketched after this item.)

    Read the article
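
    A minimal sketch, assuming a Linux host with curl and mtr installed, for comparing throughput and the network path per source; the URLs below are placeholders for whichever fast and slow sources are being compared:

        # Measure average download speed (bytes/s) from two different sources
        curl -s -o /dev/null -w 'fast source: %{speed_download} bytes/s\n' https://dl.google.com/somefile
        curl -s -o /dev/null -w 'slow source: %{speed_download} bytes/s\n' http://archive.ubuntu.com/ubuntu/ls-lR.gz

        # Compare the routes: per-hop loss and latency often explain the difference
        mtr --report --report-cycles 30 dl.google.com
        mtr --report --report-cycles 30 archive.ubuntu.com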

  • Understanding: cloud-server, cloud-hosting, cloud-computing, the cloud

    - by Abel
    There's a lot of buzz about these subjects and there seems to be little consensus on the terms. Is that just me not understanding the subject, or is there a clear meaning for each of these terms? Are there more elaborate terms or descriptions that describe what a cloud provider has, is, or offers? EDIT: rewritten question; apparently it was unclear, partially due to the bloat I added.

    Read the article

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software-based service. To give it a bit of context (type of traffic), assume it is similar to Facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine. I currently host a WordPress web site there and I've not had any problems; quite frankly, they've taken good care of me. However, a shared hosting environment is limited in nature. For example, I can't do anything but host a web site there; I can not run a Mercurial server, for instance. Last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran into all sorts of trouble because it was shared-hosted: assembly issues, etc. At the time, the cost and time sank my project. (The lack of direct access was also frustrating.) (To be fair to GoDaddy, this was over 3 years ago.) I've been looking at Rackspace or Amazon as a possible cloud solution, but it seems to be just processing power and bandwidth (and an OS). From what I understand, I'd need to get Apache and MySQL working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared-hosted site but less scalable than a cloud-based server. So, I guess my question is: what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to use the environment effectively. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) a month while I build up the site and get the core functionality online, but at the same time, I'd like the selected environment to grow as needed. Thank you.

    Read the article

  • How secure is cloud computing?

    - by Rhubarb
    By secure, I don't mean the machines themselves and access to them from the network. I mean, and I suppose this could be applied to any kind of hosting service, when you put all your intellectual property onto a hosted provider, what happens to the hard disks as the provider cycles through them? Say I've invested millions into my software, and the information and data that I have is valuable; how can I be sure it isn't read off old disks as they're recycled? Is there some kind of standard to look for that ensures a provider is going to use the strictest form of intellectual property protection? Is SAS 70 applicable here?

    Read the article

  • Why is Python used for high-performance/scientific computing (but Ruby isn't)?

    - by Cyclops
    There's a quote from a PyCon 2011 talk that goes: "At least in our shop (Argonne National Laboratory) we have three accepted languages for scientific computing. In this order they are C/C++, Fortran in all its dialects, and Python. You'll notice the absolute and total lack of Ruby, Perl, Java." It was in the more general context of high-performance computing. Granted, the quote is only from one shop, but another question about languages for HPC also lists Python as one to learn (and not Ruby). Now, I can understand C/C++ and Fortran being used in that problem space (and Perl/Java not being used). But I'm surprised that there would be a major difference in Python and Ruby use for HPC, given that they are fairly similar. (Note - I'm a fan of Python, but have nothing against Ruby.) Is there some specific reason why the one language took off? Is it about the libraries available? Some specific language features? The community? Or maybe just historical contingency, and it could have gone the other way?

    Read the article

  • Cloud computing?

    - by Shawn H
    I'm an analyst and intermediate programmer working for a consulting company. Sometimes we do intensive computing in Excel, which can be frustrating because we have slow computers. My company does not have enough money to buy everyone new computers right now. Is there a cloud computing service that allows me to log in to a high-performance virtual computer over remote desktop? We are not that technical, so preferably the computer would be running Windows so I can run Excel and other applications on it. Thanks.

    Read the article

  • cloud computing in .net 4.0

    - by HotTester
    Since the launch of .NET 4.0, the buzzword has been cloud computing. But very little is said or discussed about it from the perspective of .NET technologies. Further, is it really worth investing in, or do we have sufficient current technologies that can handle what cloud computing offers? Can you please describe it? An example would be quite helpful! Thanks in advance.

    Read the article

  • how Infiniband speed is related to processor speed

    - by user223231
    I have two identical servers and am very curious how to make an InfiniBand interconnection between them. Both servers' basic specs are: CPU: 2x Intel Xeon X5650 (6 cores, 2.66 GHz; roughly 32 GHz in total) and RAM: 24 GB per server (edited). How do I determine what speed of InfiniBand will be enough for a perfect interconnection? SDR, DDR, QDR or FDR? My logic is that 32 GHz = 32 Gb/s, so a 40 Gb/s link is enough. Am I right, or is it not that simple? (A sketch of how to check the negotiated link rate and measure actual throughput is shown after this item.)

    Read the article
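
    CPU clock speed and link speed are not directly comparable figures, so rather than sizing by GHz it is usually easier to measure. A minimal sketch, assuming a Linux install with the OFED tools (ibstat) and the perftest package (ib_send_bw), and using 192.168.0.1 as a placeholder for node A's address:

        # Show the HCA's negotiated link rate (e.g. 10/20/40 Gb/s for 4x SDR/DDR/QDR)
        ibstat

        # Measure actual RDMA bandwidth between the two nodes:
        # on node A, start the server side
        ib_send_bw
        # on node B, run the client side against node A
        ib_send_bw 192.168.0.1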

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge, on the Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O Performance: High, API name: c1.xlarge. A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server began showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz. (A few further diagnostics are sketched after this item.)

        #
        # proper server
        # w command
        #
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/1  82.80.137.29   00:28   14:05  0.01s  0.01s -bash
              pts/2  82.80.137.29   00:38   0.00s  0.02s  0.00s w

        #
        # proper server
        # iostat command
        #
        Linux 3.2.12-3.2.4.amzn1.x86_64  _x86_64_  (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:    tps   Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1    1.63         1.50       55.00     367236   13444008
        xvdfp1    4.41        45.93       70.48   11227226   17228552
        xvdfp2    2.61         2.01       59.81     491890   14620104
        xvdfp3    8.16        14.47       94.23    3536522   23034376
        xvdfp4    0.98         0.79       45.86     192818   11209784

        #
        # problematic server
        # w command
        #
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM           LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/0  82.80.137.29   00:28   15:04  0.02s  0.02s -bash
              pts/1  82.80.137.29   00:38   0.00s  0.05s  0.00s w

        #
        # problematic server
        # iostat command
        #
        Linux 3.2.20-1.29.6.amzn1.x86_64  _x86_64_  (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:    tps   Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1    2.10         1.49       76.54     374660   19253592
        xvdfp1    5.64        40.98       85.92   10308946   21612112
        xvdfp2    3.97         4.32       93.18    1087090   23439488
        xvdfp3   10.87        30.30      115.14    7622474   28961720
        xvdfp4    1.12         0.28       65.54      71034   16487112

    Read the article
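
    A few diagnostics that go beyond top/iostat, sketched on the assumption that the sysstat package is available on the Amazon AMI. Note also that the two iostat headers show different kernels (3.2.12 vs 3.2.20), so the yum upgrade did change the kernel, which is worth ruling out as a factor.

        # Load average counts tasks that are running *or* in uninterruptible sleep (D state);
        # list any D-state processes to see whether the extra load is I/O related
        ps -eo state,pid,cmd | awk '$1=="D"'

        # Per-process CPU and I/O usage sampled over time (pidstat is part of sysstat)
        pidstat -u 5 3
        pidstat -d 5 3

        # Run queue length and blocked-task counts, sampled every 5 seconds
        vmstat 5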

  • HP SmartArray P400 has slow read and write speed

    - by Tadas D.
    I have a desktop in which I installed an HP SmartArray P400 controller with two HP DF0146B8052 hard drives. I made a RAID0 logical volume from them, but I am getting 20 MB/s write speed and ~120-140 MB/s read speed. Also, there is quite low scatter in the benchmark results (I am getting quite a smooth line) and it looks like the controller is "capping" my speeds. I tried resetting the controller's configuration, and I haven't found any settings in HP ACU (Array Configuration Utility) to help me. I am using Windows 7 Ultimate and an M4A78 board. Does anyone have ideas what could be wrong? Also, I am attaching diagnostic results. (A way to check the controller's cache status from the command line is sketched after this item.)

    Read the article
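
    Write speeds in the 20 MB/s range on a P400 are commonly a sign that write caching is disabled, which the controller does when no battery-backed cache module is present or its battery has failed. A minimal sketch for checking this with HP's command-line Array Configuration Utility (hpacucli, also available for Windows), assuming it is installed alongside ACU:

        # Show controller, cache and battery status
        hpacucli ctrl all show status

        # Full detail, including cache size, cache status, and whether the
        # array accelerator (write cache) is enabled on the logical drive
        hpacucli ctrl all show config detail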

  • Can I speed up cygwin's fork?

    - by Andrew Aylett
    I came across a post discussing the speed of forking in Cygwin, giving an expected 'fork rate' in Windows XP of around 30-50 per second (link). I've got a Core 2 Duo (1.79 GHz) which I would expect to get comparable results, but it's only managing around 8 forks per second (and sometimes a lot fewer):

        $ while (true); do date --utc; done | uniq -c
          5 Wed Apr 21 12:38:10 UTC 2010
          6 Wed Apr 21 12:38:11 UTC 2010
          1 Wed Apr 21 12:38:12 UTC 2010
          1 Wed Apr 21 12:38:13 UTC 2010
          8 Wed Apr 21 12:38:14 UTC 2010
          8 Wed Apr 21 12:38:15 UTC 2010
          6 Wed Apr 21 12:38:16 UTC 2010
          1 Wed Apr 21 12:38:18 UTC 2010
          9 Wed Apr 21 12:38:19 UTC 2010

    Can you suggest anything I might be able to do to speed things up? This machine acts a lot slower in Cygwin than others I've used before which actually were a lot slower. (One commonly suggested remedy is sketched after this item.)

    Read the article
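
    A sketch of one commonly suggested remedy: Cygwin's fork emulation is sensitive to DLL base-address collisions, and rebasing the installed DLLs sometimes helps (as does excluding the Cygwin tree from on-access antivirus scanning). Close every Cygwin process first; the path assumes a default C:\cygwin install:

        REM From a Windows cmd.exe prompt, with no Cygwin processes running,
        REM start the minimal ash shell (bash itself would hold DLLs open):
        C:\cygwin\bin\ash.exe

        # Then, at the ash prompt, rebase all installed Cygwin DLLs:
        /bin/rebaseall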

  • High quality / high speed dvd reader for Mac Pro

    - by deadprogrammer
    I have a high-end Mac Pro, but one thing I'm unhappy with is the DVD drive. It's a Hitachi GH41N. Apple calls it a "SuperDrive", but it's anything but. The damn thing makes an amazing amount of noise, and isn't too fast either. What I want is a state-of-the-art, fast, quiet DVD reader, preferably not even a burner. What should I get?

    Read the article

  • Maximum burn speed keeps decreasing from Nero?

    - by Bob King
    I have a 16x DL DVD burner in my work machine (XP SP3). I'm using 8x TDK DVD+R media. The first dozen or so disks burned fine using Nero, but after that every disk started turning into a coaster. I asked Nero to calculate the maximum speed, and it calculated it at 4x. This worked for a few disks, then the same issue returned. I'm currently burning at 1.2x. I've since tried other brands and fully 16x-compatible disks, but I can't get my burn speed recognized as any faster than it currently is. I've tried uninstalling Nero. I've tried burning directly in Windows, and also tested an MP3 CD in iTunes, with no luck. Any suggestions, short of reinstalling Windows, would be great!

    Read the article
