Search Results

Search found 4699 results on 188 pages for 'ram'.

Page 38 of 188

  • How can Rackspace beat DigitalOcean's price point? [on hold]

    - by Matt Jensen
    I recently discovered DigitalOcean and have found it to be a relatively nice experience for small staging servers, and the thought occurred to me: why am I paying ~$267 for a server on Rackspace (40 GB RAM, 160 GB drive, 2 vCPUs, 400 Mb/s) when DigitalOcean offers a server for $40 (40 GB RAM, 60 GB drive [storage is not a concern of mine], 2 vCPUs, ? Mb/s)? Does Rackspace offer some obvious advantage in transfer speed/bandwidth? My applications are small startups that for the immediate future will have only about 200-300 concurrent users.

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which let me use my five 1 TB drives for two RAID volumes: a 1 TB RAID 0 striped across all five drives, and a RAID 5 across the rest of the free space on all drives (around 2.85 TB usable). The RAID 0 I use for the OS, applications, and games, while the RAID 5 I use as more permanent storage (photos, etc.). Now, I do realize that running the OS and applications on a RAID 0 across five drives is very dangerous, which is what brings up the following question: is there a reliable freeware realtime backup application that can back up a set of folders from one drive to another (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, CrashPlan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Realtime backups: as soon as a file is modified in the source folder, it is added to a backup queue that is processed at the lowest priority while the CPU is idle. The queue should persist across computer restarts (i.e. the source and destination folders should always hold the same set of files, except for those still waiting in the queue).
    - Incremental backups: if only 10 bytes changed in a 1 GB file, the app should copy only those 10 bytes.
    - The ability to back up locked and open files (some apps, like Yadis, can't back up critical files such as browser favorites).
    - The ability to run as a service (no user needs to log in for the app to start).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
    - Preset source folders (such as browser favorites, game saves, application settings, etc.).

    The idea is to use the RAID 0 array as "semi-persistent, RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps, and games and copying the settings, saves, and favorites back over from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6 Gb/s SATA III SSDs), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer between the RAM and the HD. I'm hoping an application that satisfies these requirements already exists... otherwise I'll have to write one myself, which I would prefer not to do.
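
    None of the tools above do all of this, but the realtime-queue core of the requirement is small enough to prototype. Below is a minimal sketch in Python using the third-party watchdog package (an assumption, as are the folder paths): it watches a source tree and mirrors changed files from a background thread. It deliberately skips the hard parts of the list: incremental copies, locked files, a queue that survives reboots, and running as a service.

        # Minimal realtime folder-to-folder mirror (sketch; assumes
        # "pip install watchdog" and hypothetical SRC/DST paths).
        import os
        import queue
        import shutil
        import threading

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer

        SRC = r"C:\data\source"    # hypothetical source folder
        DST = r"D:\backup\source"  # hypothetical destination folder

        work = queue.Queue()

        class QueueChanges(FileSystemEventHandler):
            def on_modified(self, event):
                if not event.is_directory:
                    work.put(event.src_path)
            on_created = on_modified  # treat newly created files like modified ones

        def copy_worker():
            while True:
                src = work.get()
                dst = os.path.join(DST, os.path.relpath(src, SRC))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                try:
                    shutil.copy2(src, dst)  # whole-file copy: no incremental/delta logic
                except OSError:
                    work.put(src)           # e.g. file locked: crude requeue, no backoff
                work.task_done()

        threading.Thread(target=copy_worker, daemon=True).start()

        observer = Observer()
        observer.schedule(QueueChanges(), SRC, recursive=True)
        observer.start()
        observer.join()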

    Read the article

  • Celery - minimize memory consumption

    - by Andrew
    We have ~300 celeryd processes running under Ubuntu 10.04 64-bit; in idle, every process takes ~19 MB RES and ~174 MB VIRT, so that's around 6 GB of RAM in idle across all processes. In the active state a process takes up to 100 MB RES and ~300 MB VIRT. Every process uses minidom (the XML files are < 500 kB with a simple structure) and urllib. The question is: how can we decrease RAM consumption, at least for the idle workers? Perhaps some Celery or Python options would help? And how do we determine which part takes most of the memory?
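
    Two levers that may help, offered as hedged suggestions rather than a known fix: Celery of that era could recycle a worker process after it has handled N tasks (returning whatever memory it accumulated), and the C ElementTree parser is much lighter than minidom for small, simple documents. A sketch, with option names taken from celery 2.x-era documentation (verify them against the installed version):

        # --- celeryconfig.py (sketch) -----------------------------------------
        CELERYD_CONCURRENCY = 16           # fewer, busier workers instead of ~300 idle ones
        CELERYD_MAX_TASKS_PER_CHILD = 100  # recycle each worker process after 100 tasks,
                                           # releasing whatever memory it accumulated

        # --- in the task module: swap minidom for cElementTree ----------------
        # (On Python 3, plain xml.etree.ElementTree already uses the C parser.)
        import xml.etree.cElementTree as ET

        def parse_feed(path):
            """Parse one of the <500 kB XML files and return its root element."""
            return ET.parse(path).getroot()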

    Read the article

  • Is a 2006 Mac Mini's processor upgradeable to a Core 2 Duo?

    - by chiurox
    I have a Mac Mini with a 1.5 GHz Core Solo, 512 MB RAM, 60 GB HDD, etc. I know it's very old, but since it's lying around here I wanted to bump it up for general usage and some experimental iPhone development. Also, Snow Leopard can't be installed because it doesn't have enough RAM. I browsed around, but I'm not sure whether this Mac Mini's motherboard accepts a Core 2 Duo (at least a 2.0 GHz one). If anyone could tell me which generation of Core 2 Duo it accepts, I'd be grateful.

    Read the article

  • Cassandra on heterogeneous servers

    - by happy-coding
    I am currently running four Cassandra nodes in an Apache Cassandra cluster, each with the following hardware: AMD Athlon 64 X2 6000+, 8 GB RAM, 750 GB hard disk. The cluster shows not-so-good write performance and really bad read performance, sometimes with timeouts. I was wondering whether it makes sense to add two nodes with different hardware (8 CPUs and more RAM) to improve this, or does a Cassandra cluster work best with the same hardware in every node? Thanks & best regards

    Read the article

  • I have been told to accept one error with Memtest86+

    - by DustByte
    Bought a new computer back in August with 4x4 GB of RAM. Had problems with the RAM; they sent me four new sticks, which also generated errors. I singled out four sticks (from the eight I now had) that didn't generate any errors. Then, by coincidence, I discovered a new RAM error last week (this time no BSOD). I contacted the company, and according to them there have been issues with a bad stock from last summer, so I got two tested 8 GB sticks sent to me. I've been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours, but so far only this one error. I contacted the company where I bought the computer. They wrote back:

        I wouldn't worry about that one fail. We have had similar situations here whereby it passes numerous times but then fails once. We think it's an issue with Memtest; after all, memory is faulty or it isn't, so you can't really have it pass a few times, fail the next time around and then pass again! Please trust me on this and continue with the memory we sent you, and if your problems continue we'll look at getting it replaced again.

    I gather from other forum posts that many people do not accept a single error. What could this single error signify: faulty RAM or a glitch in the Memtest program (or something else)?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error can occur and be acceptable, but repeated errors at the same address would indicate a malfunction. Memtest has now run for 45 hours and I still have only one error. For everyone's information, I will keep the test running. In less than two days I am going away for a month, and I will most likely leave Memtest running. As I do not have a UPS, there is a risk that a power outage will ruin the experiment. The computer is a desktop, so I cannot bring it with me (which, curiously, would have exposed it to more cosmic rays, as I will be flying ;)).

    Read the article

  • Will the following combination of components function as a desktop computer? [closed]

    - by Gideon Potgieter
    Could someone with more PC-building experience than me tell me whether these components will work together as a self-built PC?

    - Processor: Intel Core i5-3570K
    - Video card: Asus Radeon HD 7870
    - Motherboard: Gigabyte GA-Z77-D3H
    - RAM: Corsair CMZ16GX3M2A1600C10 Vengeance 16 GB 1600 MHz CL10 DDR3 (x2)
    - Storage: Western Digital WD1002FAEX (x2)
    - Display: Samsung S24B300HL
    - Sound: Logitech X140
    - Chassis: Thermaltake V4 Black Edition VM30001W2Z
    - Power supply: Seagate OEM 500W Builder PSU
    - Optical drive: Asus DRW-24B1ST

    Thanks in advance! (By the way, I know 32 GB of RAM is unnecessary, but I want to buy it to keep as a reserve.)

    Read the article

  • Sleep uses more power than I expect

    - by Niklas
    When my new Dell laptop with Windows 7 Home Premium and 4 GB RAM sleeps (not when it hibernates), it drains the battery overnight. My old Dell laptop with XP Pro (2 GB RAM) could sleep for days without running out of battery. Is it normal for Windows 7's sleep to be this power-hungry, or should I troubleshoot my new machine? Edit: I know how to set the different sleep/hibernate settings; that is not what I'm asking.

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a funny little experiment in the game/sim Train Simulator 2013. I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic; it takes into account things like speed/acceleration, G-forces, comfort levels, possible wheel slip and much more, and most of these per carriage). This resulted in only 14 FPS as reported by the game, though it felt more like 8 FPS.

    I have a Logitech G15 keyboard with an LCD that lets me monitor CPU/RAM and video card load. The strange thing is that all CPU cores were busy, but total load never exceeded about 60%. The video card was at only 30% load (possibly an important note: its memory was full, which is not unusual for this game). The RAM had plenty of room and didn't grow or shrink much. I have the feeling the game would run smoother if it used more of my hardware's power. Why isn't it doing so?

    I saw the same thing in another game, The Elder Scrolls III: Morrowind, when using more than 100 mods (all of which use scripting), a few high-res texture mods, and a full graphics-improvement program. That engine is very old (2002), so I thought a lack of multithreading optimisation might be the cause. Possible causes I considered:

    - The operating system doesn't let the games use all the resources.
    - The games don't make use of multithreading appropriately.

    To eliminate the former, I ran a CPU stress tool, and it got 100% of the CPU as I let it run, so the OS is not the problem. I also gave the game's thread "higher" priority.

    My actual question: in both games I did things the engine was not really built to do or support. Can those games' framerates be limited by their own engines' inability to cope? What is the real reason and, more importantly, can I do anything about it? And could something actually be wrong with my hardware? It's all reasonably new (a couple of months old) and I (almost) never experience any other trouble; modern and much more demanding games work absolutely fine. Specs:

    - CPU: AMD Phenom II X4 965 @ 3.4 GHz
    - RAM: 8 GB DDR3
    - Video: MSI GTX 560 (nVidia chip) with 1 GB of GDDR5 memory
    - OS: Windows 7 Ultimate 64-bit

    Nothing is overclocked.
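
    A toy illustration of why "all cores busy" can still mean 60% total load (a sketch, unrelated to the games themselves): when one thread is the serial bottleneck, total CPU utilization is capped near 100/N percent per saturated core, however many cores exist.

        # While this runs, a system monitor shows roughly 100/N % total CPU for
        # the single-process case and ~100% for the all-cores case, even though
        # both cases are "working flat out". Game engines with one dominant
        # simulation thread hit the same ceiling.
        import multiprocessing as mp
        import time

        def spin(seconds):
            deadline = time.time() + seconds
            while time.time() < deadline:
                pass  # burn CPU

        if __name__ == "__main__":
            for nprocs in (1, mp.cpu_count()):
                workers = [mp.Process(target=spin, args=(3.0,)) for _ in range(nprocs)]
                start = time.time()
                for w in workers:
                    w.start()
                for w in workers:
                    w.join()
                print("%d process(es) spun for %.1f s" % (nprocs, time.time() - start))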

    Read the article

  • Has anyone tried boosting Windows performance by putting the swap file on a flash drive?

    - by Clay Nichols
    Windows Vista introduced ReadyBoost, which lets you use a flash drive as a third type of memory (after RAM and the HD). It occurred to me that I could boost performance on an old PC here with Windows XP (32-bit, maxed out at 4 GB RAM) by putting its swap file (page file) on a flash drive. Now, before anyone comments: apparently flash drives (10-30 MB/s transfer rates) are slower than HDDs (100+ MB/s); I'm asking about that as a separate question on this forum.

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising.

    What is the fastest way to find out which process(es) are the cause of the load? "top" and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command that I can type as it happens, something that will figure out any of: the system is trying to swap 8 GB of RAM to disk because of process X, or process X seeks all over the disk, or process X uses 400% CPU. So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp - disk thrashing
        87 chrome - uses 2 GB of RAM
        137 nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers to analyze, but a tool that tells me exactly which process is causing the current load. Assume the user in front of the keyboard barely knows how to spell "process", and is quickly overwhelmed by terms like "resident size", "virtual memory" or "process life cycle".

    My argument goes like this: a user notices a problem, and there can be thousands of reasons... well, almost :-). The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what those numbers mean. What I'm looking for is a meta tool: 99% of the data is irrelevant to the problem, so the tool should look for processes that hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". That would be a relatively short list, and it would be much simpler for a newcomer to find the culprit in it than in the output of, say, htop, which gives me about 5000 numbers but requires me to fold multithreaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course this is a misinterpretation of the data that is easy to make).
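
    No stock tool prints exactly that, but the "meta tool" described above is small enough to sketch. Here is a rough cut in Python using the third-party psutil package (an assumption, as are the thresholds): sample every process over a two-second window and print one plain-English line per process that hogs CPU, resident memory, or disk I/O.

        # Sketch of the "meta tool": list only the processes hogging a resource.
        # Assumes "pip install psutil"; thresholds are arbitrary starting points.
        import time
        import psutil

        CPU_PCT, RSS_MB, IO_MBS = 50.0, 1024, 20.0  # crude "hog" thresholds

        procs = list(psutil.process_iter())
        io_start = {}
        for p in procs:
            try:
                p.cpu_percent(None)            # first call primes per-process sampling
                io_start[p.pid] = p.io_counters()
            except (psutil.NoSuchProcess, psutil.AccessDenied, AttributeError):
                pass                           # io_counters() is missing on some OSes

        time.sleep(2.0)                        # the measurement window

        for p in procs:
            try:
                cpu = p.cpu_percent(None)      # % of one core over the window (>100 possible)
                rss = p.memory_info().rss / 2**20
                io = 0.0
                if p.pid in io_start:
                    now = p.io_counters()
                    io = (now.read_bytes + now.write_bytes
                          - io_start[p.pid].read_bytes
                          - io_start[p.pid].write_bytes) / 2.0 / 2**20
                tags = []
                if cpu > CPU_PCT:
                    tags.append("uses %.0f%% CPU" % cpu)
                if rss > RSS_MB:
                    tags.append("holds %.1f GB of RAM" % (rss / 1024))
                if io > IO_MBS:
                    tags.append("moves %.0f MB/s to/from disk" % io)
                if tags:
                    print(p.pid, p.name(), "-", ", ".join(tags))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass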

    Read the article

  • What does CPU Time consist of?

    - by Sid
    What exactly does CPU time consist of? For instance, is the time taken to access a page in RAM (at which point the CPU is most likely idling) part of the CPU time? I'm not talking about fetching the page from disk here, just fetching it from RAM. Thanks
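
    The conventional accounting is user time plus system time, with time spent blocked (on disk, on sleep) excluded; a stall on a RAM access is still charged as CPU time, because the process is never descheduled for an in-memory access, unlike a page fault that has to go to disk. A small probe that makes the distinction visible, sketched in Python:

        # CPU time vs. wall time: a busy loop accrues both, a blocked sleep
        # accrues only wall time.
        import os
        import time

        def snapshot():
            t = os.times()  # (user, system, children_user, children_system, elapsed)
            return t[0] + t[1], time.time()

        cpu0, wall0 = snapshot()
        deadline = time.time() + 1.0
        while time.time() < deadline:
            pass                        # busy loop: ~1 s of wall time AND CPU time
        cpu1, wall1 = snapshot()
        print("spin : cpu=%.2fs wall=%.2fs" % (cpu1 - cpu0, wall1 - wall0))

        time.sleep(1.0)                 # blocked: ~1 s of wall time, ~0 s of CPU time
        cpu2, wall2 = snapshot()
        print("sleep: cpu=%.2fs wall=%.2fs" % (cpu2 - cpu1, wall2 - wall1))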

    Read the article

  • On Ubuntu 10.04, what is the recommended RoR stack?

    - by Kurucu
    I can't find clear answers or methods for this. As seen elsewhere, Passenger and RoR under Apache gobble up RAM on my VPS. I've tried a multitude of stacks and implementations, currently resting on a suboptimal Apache/CGI/Rails configuration, which has traded my RAM usage for CPU time and slow responses to requests. Can anyone recommend an efficient, and preferably simple to administer, way of setting up Rails apps on Ubuntu 10.04 Server?

    Read the article

  • Virtual Memory and SSD

    - by Zombian
    While studying for the A+ exam I was reading about SSDs, and I thought to myself that if you had a motherboard with a low RAM limit, you could use a dedicated SSD purely for virtual memory. I looked up some information online, and it said this was poor practice, but it didn't explain why. Why shouldn't SSDs be used for virtual memory, and what are your thoughts on a dedicated virtual-memory drive? Thank you!

    Read the article

  • How many Tomcat instances can one application server handle?

    - by NetworkUser
    My network engineer states that I am part of a cluster where two Apache application servers (512 GB RAM each) each run 142 instances of Tomcat (of which my company has 6, each with 2 GB of RAM). This seems like a lot, and my latency issues track the hour of the day: at 7 AM CST the software functions fine; by 10 AM CST the system has slowed significantly, and the slowness continues until 6 PM CST. My question is: how many Tomcat instances can one application server handle?

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 ran on a low-end, single-Xeon Dell server with RAID 5 7200 RPM disks and 2 GB RAM (SBS 2003). I have not taken any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried:

    - Had a Small instance; now running a c1.medium (High-CPU Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
    - Changed the page file from a fixed 256 MB to automatic.
    - Set up a striped mirror in Disk Management from two attached 1 GB EBS volumes, then moved the database and transaction log there, leaving everything else on the boot EBS volume. No noticeable change.
    - Looked at memory: ~1000 MB of physical memory free (1.7 GB total).
    - Changed the SQL instance to use a minimum of 1024 MB of RAM; restarted the server, no change in memory usage. SQL is still using only ~28 MB of RAM(!).

    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB, which seems large in comparison; has it not been committed? Is it a cause of performance degradation? I recall something about recovery models and log sizes somewhere in my travels, but I'm not positive. Another thing: the old server ran SQL Express 2005. I'm not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and it had no effect. Anyway, what else can I try here? It seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?

    Read the article

  • Server purchase advice: AMD Opteron

    - by maruti
    I am choosing between:

    - Dell PE2970 with a six-core AMD Opteron 2431 (2.4 GHz) + 64 GB 667 MHz RAM (the 2435 is not available from Dell now)
    - Dell R905 with an AMD Opteron 8435 (2.6 GHz) + 64 GB 800 MHz RAM (this CPU is meant for 4-8-socket systems, but I have chosen only a 2P configuration)

    Both are very close in price and I am leaning towards the R905. Please advise.

    Read the article

  • Debian Squeeze and available memory (1GB absent)

    - by user66279
    Hi, I have a dedicated server with 12 GB RAM running Debian Lenny x64:

        dmesg | grep Memory
        [ 0.004000] Memory: 11917152k/12259740k available (2279k kernel code, 333820k reserved, 1022k data, 216k init)

    For a few days now I have also had another dedicated server (nearly the same hardware), but with Debian Squeeze x64 (installed via debootstrap, kernel 2.6.32-5-xen-amd64):

        dmesg | grep Memory
        [ 1.551510] Memory: 6864620k/8151916k available (3146k kernel code, 1057736k absent, 229560k reserved, 1901k data, 600k init)

    What does "absent" memory mean, and how can I get the 1 GB of RAM back?

    Read the article

  • Output of free -m on a Linux server

    - by cat pants
    I can see from this page, http://www.linuxatemyram.com/, that the correct amount of free RAM is on the "-/+ buffers/cache" line; the extra RAM in use is for disk caching. However, I noticed that the total amount of memory used as listed on the "-/+ buffers/cache" line is significantly less than the sum of the "RES" column over the processes shown in top. And AFAIK, the "RES" column is how much physical memory a process is using. How do you explain this discrepancy?
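
    The usual explanation is shared pages: RES counts pages shared between processes (shared libraries, shared memory) once per process, so summing the column counts the same physical pages many times. A sketch with the third-party psutil package (an assumption) that makes this visible on Linux, where PSS splits each shared page among its sharers:

        # sum(RSS) over-counts shared pages; sum(PSS) (Linux-only) divides each
        # shared page by its number of sharers and lands near the kernel's view.
        # Assumes "pip install psutil"; reading other users' PSS may need root.
        import psutil

        rss_total = pss_total = 0
        for p in psutil.process_iter():
            try:
                rss_total += p.memory_info().rss
                pss_total += p.memory_full_info().pss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass

        used = psutil.virtual_memory().used  # roughly the "-/+ buffers/cache" used figure
        print("sum(RSS): %5d MB (shared pages counted once per process)" % (rss_total / 2**20))
        print("sum(PSS): %5d MB (shared pages split among sharers)" % (pss_total / 2**20))
        print("used    : %5d MB" % (used / 2**20))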

    Read the article

  • How does the Linux kernel manage less than 1 GB of physical memory?

    - by TheLoneJoker
    I'm learning Linux kernel internals, and while reading Understanding the Linux Kernel, quite a few memory-related questions struck me. One of them is how the Linux kernel handles the memory mapping if, say, only 512 MB of physical memory is installed on my system. As I read it, the kernel maps 0 (or 16) MB-896 MB of physical RAM at the linear address 0xC0000000 and can address it directly. So, in the case described above, where I have only 512 MB: how can the kernel map 896 MB from only 512 MB? And what about user-mode processes in this situation; where do user-mode processes live in physical RAM? Every article explains only the situation where you have 4 GB of memory installed, the kernel maps 1 GB into kernel space, and user processes use the remaining RAM. I would appreciate any help in improving my understanding. Thanks..!
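
    For the 896 MB figure itself, the arithmetic is the classic 3G/1G split, and the key point for the 512 MB case is that 896 MB is a ceiling, not a requirement: the kernel permanently direct-maps however much low memory exists (here, all 512 MB), and user pages can live anywhere in physical RAM, mapped per process at addresses below PAGE_OFFSET. A sketch of the numbers:

        # Where 896 MB comes from on 32-bit x86: the kernel owns the top 1 GB of
        # each 4 GB virtual address space (PAGE_OFFSET = 0xC0000000) and reserves
        # roughly 128 MB of it for vmalloc/ioremap, leaving 896 MB for the direct
        # mapping of physical RAM. With 512 MB installed, the whole 512 MB is
        # direct-mapped and the rest of the window simply goes unused.
        PAGE_OFFSET = 0xC0000000            # start of the kernel's virtual space
        VMALLOC_SPACE = 128 * 2**20         # reserved for vmalloc etc. (approximate)

        kernel_window = 2**32 - PAGE_OFFSET             # 1 GB of kernel virtual space
        direct_map = kernel_window - VMALLOC_SPACE      # 896 MB of "low memory"
        print("kernel window: %d MB" % (kernel_window // 2**20))
        print("direct map   : %d MB" % (direct_map // 2**20))

        phys = 0x01234567                   # any physical address below 896 MB...
        virt = PAGE_OFFSET + phys           # ...has this fixed kernel virtual address
        print("phys 0x%08x <-> virt 0x%08x" % (phys, virt))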

    Read the article

  • Image archive vs. image strip

    - by DevA
    Hi, I've noticed that plenty of games and applications (very commonly in mobile builds) pack numerous images into an image strip. I figure the advantages are making the program tidier (filesystem-wise) and reducing (un)installation time. During the application's runtime, the entire image strip is allocated and copied from the filesystem to RAM. Alternatively, images can be stored in an image archive and unpacked at runtime into a number of image structures in RAM. The way I see it, the image-strip approach is less efficient, because of worse caching performance and because, even if the optimal rectangle-packing algorithm is used, there will be empty spaces between the stored images in the strip, wasting RAM. What are the advantages of using an image strip over an image archive file?
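
    The trade-off is easy to state concretely. Below is a sketch using the Pillow imaging package, with hypothetical file names and an assumed uniform sprite size: a strip costs one decode and keeps the whole sheet (padding included) resident, while an archive costs one decode per sprite, but only for the sprites actually loaded.

        # Image strip vs. image archive (sketch; assumes "pip install pillow",
        # hypothetical files, and uniform 32x32 sprites).
        from zipfile import ZipFile
        from PIL import Image

        # --- strip: one file, one decode; sprites are fixed-size crops --------
        strip = Image.open("sprites_strip.png")     # the entire strip becomes resident
        strip_w, strip_h = strip.size
        W = H = 32                                  # assumed per-sprite dimensions
        sprites = [strip.crop((i * W, 0, (i + 1) * W, H))
                   for i in range(strip_w // W)]
        # Packing waste stays in RAM for as long as the strip is loaded:
        waste_px = strip_w * strip_h - len(sprites) * W * H

        # --- archive: one file on disk; each sprite decoded on demand ---------
        with ZipFile("sprites.zip") as zf:
            def load(name):
                with zf.open(name) as f:
                    return Image.open(f).copy()     # .copy() forces the decode now
            first = load(zf.namelist()[0])          # only this sprite occupies RAM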

    Read the article

  • Memory available to a 64-bit Fedora guest under a 32-bit XP host running VirtualBox

    - by Chris Card
    I have successfully installed a 64-bit Fedora 11 guest OS using VirtualBox on a host machine (AMD64) running 32-bit Windows XP. At the moment the host machine has 2 GB of RAM installed, and I've allocated 1 GB to the guest, which all works well. The host machine can hold a maximum of 4 GB of RAM, so I was wondering whether it's worth buying an extra 2 GB for it. I know that 32-bit Windows XP can't use all 4 GB, but can the guest OS use any of the RAM that the host OS can't?

    Read the article
