Search Results

Search found 10215 results on 409 pages for 'ram usage'.

Page 99/409

  • Macros in Excel 2010 hang

    - by Ahmad
    I have a spreadsheet with several macros. Generally, when previously using Excel 2007, a user clicks a button and everything works as expected (calculations, some email sending & file I/O). Typically, the expected run-time is about 90 seconds. The spreadsheet is an xlsm file created with Excel 2007. With Excel 2010, however, the same user process results in a non-responsive Excel and forces us to kill Excel from the task manager. Some notes that I have gathered so far in trying to debug this issue: when monitoring CPU usage, it seems that Excel does start the macro; CPU usage increases as expected to about 47% for a few seconds, then Excel.exe drops to 0% usage and I have a non-responsive Excel (even after 1 hour). If I set debug breakpoints across modules and different functions and step through the code (after clicking the button), the process works as expected, albeit much slower. To add, there were no exceptions. I am at a complete loss as to what the issue may be. I initially thought it may be the add-in that is being used, but that was debunked by the second point. This seems to be a very odd situation. I can provide more information if required, but I'm at wits' end about what the root cause could be. I need help diagnosing and resolving this issue.

    Read the article

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine with MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. That would have required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish (slow). So I restarted it after midnight, when my local transfer speed is over 4 MB/s. My impression is that, independently of the speed (broadband) of the copy & paste transfer, the remote computer becomes sluggish while copying over RDP. At the same time, downloading from the internet doesn't make the remote host sluggish. AFAIU, this is because the clipboard of the remote computer, and so its memory, becomes overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (pasting of a file)? What are the possible ways to control it? Update: After reading that the slow transfer speed is caused by the encryption used for copy & pasting over RDP, and since I am really interested in overall efficiency - both the time (or rapidness) of getting the file and the possibility to work without waiting - I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy&paste big files over RDP?" For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder with the unzipped files? More exactly, I wanted to ask: what are the possible ways to improve the overall experience - the speed of transfer (i.e. availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before completion of the copy & pasting)?
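
    A possible alternative to the RDP clipboard, assuming local drive redirection is enabled in the RDP client (the client's drives then appear on the remote host as \\tsclient\<drive letter>), is a restartable robocopy from the redirected drive. The share name is standard RDP behaviour, but the paths below are only illustrative:

        rem run on the remote host; copies from the redirected local C: drive (example paths)
        robocopy \\tsclient\C\transfer D:\incoming bigfile.zip /Z /R:3 /W:5

    Because this is an ordinary file copy rather than an RDP clipboard operation, it should avoid tying up the remote clipboard, although the bandwidth question of course remains.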

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like tar c . | pv | nc blah blah blah, and it works great: the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4 GB of physical RAM and run 64-bit Ubuntu 9.04 Server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
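
    One way to cap the producer side, since pv is already in the pipeline, is its rate-limit option; a minimal sketch, where the 30 MB/s figure is only an example to be tuned to the destination's sustained write speed (host and port are placeholders):

        # throttle the pipe to roughly the receiver's write throughput
        tar -c . | pv -L 30m | nc desthost 12345

    Buffering tools such as mbuffer on either end are another common way to absorb bursts without letting the source queue gigabytes in RAM.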

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know:

        iostat says my disks are 0.03% utilised
        mysqltuner.pl says I am OK (output below)
        top says my processor is 99% idle, load average: 0.20, 0.31, 0.33
        memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
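
    If you want to try the tuner's two variable suggestions before digging further, they can be applied at runtime without a restart; a hedged sketch with illustrative sizes (on MySQL 5.1 the variable is table_open_cache, table_cache being the older name the tuner prints; add the same values to my.cnf to persist them):

        mysql -u root -p -e "SET GLOBAL table_open_cache = 1024;"
        mysql -u root -p -e "SET GLOBAL query_cache_size = 64 * 1024 * 1024;"

    Given the low disk, CPU, and memory figures, though, a 30x response-time regression right after a release points more at the new application code than at MySQL tuning.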

    Read the article

  • using i7 "gamer" cpu in a HPC cluster

    - by user1219721
    I'm running the WRF weather model. That's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF doesn't depend so much on core count as on memory bandwidth. That's why a Core i7 3820 or 3930K performs better than high-grade Xeons (E5-2600 or E7). It seems that universities use the Xeon E5-2670 for WRF; it costs about $1500. The SPEC2006 fp_rates WRF benchmark shows the $580 i7 3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle up to 2400 MHz RAM, giving a great performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but still less than 20 € a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still 700 € per CPU cheaper than the Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I do it properly with Xeons? (This is not a critical application. I can handle downtime. I think I don't need ECC?)

    Read the article

  • Serious 64-bit laptop

    - by Daniel Gehriger
    For the past couple of years, I have been using an IBM ThinkPad T60p for daily work (software development, desktop & embedded). I am extremely satisfied with this machine, due to its robustness. It also has a few features I depend on: a high-resolution display (15.0" TFT FlexView at 1600x1200, UXGA), an excellent keyboard, and decent graphics and CPU performance. Some of the software I develop benefits from larger amounts of RAM, and 3 GB (Windows 7 32-bit) or 4 GB (Windows 7 64-bit on the T60p) are no longer sufficient. My customers run desktop computers with 20 GB and more, and I need at least 8 GB to be able to run reasonable test cases. So I'm shopping around for a new laptop, but I'm struggling to find anything that matches my requirements: must run Windows 7 64-bit Pro or higher; must support at least 8 GB of RAM (more is better); high screen resolution (while I prefer 4:3, I can live with widescreen, but I really hope to find something with a vertical resolution similar to what I have now); portable, so smaller than 16" but at least 14" (I realize that FlexView isn't available anymore, but I'd like to avoid a glossy screen if possible); decent (not more) graphics performance, ideally hybrid (I'm doing a lot of CAD, never games); good keyboard; reasonable CPU (I'm still fine with my current Core 2 Duo, so that shouldn't be too complicated). The T60p fits all those requirements except the 8 GB of RAM. Can you help me find a current notebook that would match most of them? I don't mind changing brand. Thanks!

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single, double, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching. Question 1: does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided up among up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here. Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than one box can handle, it's time to add another server), or something else? Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • i7-980X at 70% speed

    - by Buxley
    Hi, we bought a nice computer to use to solve optimization problems: an Intel i7-980X @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum in the beginning of the solve. However, after a while (about 8 hrs) all cores jump between 65 and 85%. When I solve the same models on an i7 930, all cores are at a near 100% level even after longer solution times. We first thought that the hard disk was the bottleneck, since Gurobi writes out node files after the memory limit is reached. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only used RAM, and still saw the same processor behaviour. Any ideas about the bottleneck? As I said earlier, it works at 100% for the first hours or so. Thanks very much for any answers! Our plan was to overclock it, but we can't even get it to work at normal speed yet!
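
    For reference, the memory limit described above is usually expressed through Gurobi's NodefileStart parameter (the amount of node-storage memory, in GB, used before node files are written to disk); a hedged command-line sketch, where model.mps is only a placeholder model file:

        gurobi_cl NodefileStart=7 Threads=12 model.mps

    Comparing runs with and without the node-file threshold raised is one way to confirm whether disk I/O is really out of the picture on the new machine.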

    Read the article

  • Can applications use all of the memory in Windows 8?

    - by Barleyman
    Windows 7 (and Windows Vista) has a built-in limit of not being able to use the last 25% of RAM. You will get a low-memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you still have "merely" 2 GB left... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior; the system will cram the page file to the last byte while refusing to touch that last quarter of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free-RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit?

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding query_cache_size, no matter how much I increase the value (I tried up to 512 MB). On the other hand it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76 GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on how to interpret the recommendations and optimize the database config.
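
    For what it's worth, the 139% warning is simple worst-case arithmetic: 24.2 GB of global buffers plus 92.2 MB per thread times 1200 max connections is roughly 132 GB, well above the 76 GB installed. A hedged sketch of checking the per-thread contributors, assuming you connect as an admin user:

        mysql -e "SHOW VARIABLES WHERE Variable_name IN
                  ('max_connections','sort_buffer_size','read_buffer_size',
                   'read_rnd_buffer_size','join_buffer_size','thread_stack');"

    Reducing max_connections or the per-thread buffers shrinks that theoretical maximum, whereas raising query_cache_size further only increases it (and tends to worsen the pruning the tuner is already flagging).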

    Read the article

  • Laptop freezing every few seconds, including screen + sound

    - by zenstealth
    Just a few days ago, my Windows 7 HP dv4170us (1.76 GHz CPU, 1 GB RAM) laptop started to freeze every other second: everything on screen, as well as sound (such as a song playing in iTunes), just freezes until I bash it violently (without actually breaking the laptop) or wait for a couple more seconds. I think it started one night when I noticed that a USB mouse of mine stopped working and it displayed random "Device was not recognized" errors. I just unplugged the mouse and ignored it. Skip forward to the next day: it started freezing, and as of today I can't get my computer to stop freezing. I tried to back up my files onto an external HDD, but it almost corrupted the drive. I ran 4 complete virus scans using MSSE and MalwareBytes (both quick and full scans), and they all came up clean. In the Task Manager, CPU usage is at a constant max, and so is the RAM (with just a few apps running, I only have about 30 MB of free RAM left). Also, the outside of my laptop, right above where the CPU is located, is very, very hot. I suspect that something is wrong internally inside the computer, but I'm not sure. It also does the same thing when booted into Ubuntu. Does anyone know what could be wrong with it?

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has a nearly 4-million-row table and runs on a 4 GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I get returns of mysql running at about 11% capacity, so no serious load. The main query against mysql runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360 MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the mysql variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of the show variables (though I'm not sure it is important):

        +---------------------------------+------------------------------------------------------------+
        | Variable_name                   | Value                                                        |
        +---------------------------------+------------------------------------------------------------+
        | auto_increment_increment        | 1                                                            |
        | auto_increment_offset           | 1                                                            |
        | automatic_sp_privileges         | ON                                                           |
        | back_log                        | 50                                                           |
        | basedir                         | /usr/                                                        |
        | bdb_cache_size                  | 8384512                                                      |
        | bdb_home                        | /var/lib/mysql/                                              |
        | bdb_log_buffer_size             | 262144                                                       |
        | bdb_logdir                      |                                                              |
        | bdb_max_lock                    | 10000                                                        |
        | bdb_shared_data                 | OFF                                                          |
        | bdb_tmpdir                      | /tmp/                                                        |
        | binlog_cache_size               | 32768                                                        |
        | bulk_insert_buffer_size         | 8388608                                                      |
        | character_set_client            | latin1                                                       |
        | character_set_connection        | latin1                                                       |
        | character_set_database          | latin1                                                       |
        | character_set_filesystem        | binary                                                       |
        | character_set_results           | latin1                                                       |
        | character_set_server            | latin1                                                       |
        | character_set_system            | utf8                                                         |
        | character_sets_dir              | /usr/share/mysql/charsets/                                   |
        | collation_connection            | latin1_swedish_ci                                            |
        | collation_database              | latin1_swedish_ci                                            |
        | collation_server                | latin1_swedish_ci                                            |
        | completion_type                 | 0                                                            |
        | concurrent_insert               | 1                                                            |
        | connect_timeout                 | 10                                                           |
        | datadir                         | /var/lib/mysql/                                              |
        | date_format                     | %Y-%m-%d                                                     |
        | datetime_format                 | %Y-%m-%d %H:%i:%s                                            |
        | default_week_format             | 0                                                            |
        | delay_key_write                 | ON                                                           |
        | delayed_insert_limit            | 100                                                          |
        | delayed_insert_timeout          | 300                                                          |
        | delayed_queue_size              | 1000                                                         |
        | div_precision_increment         | 4                                                            |
        | keep_files_on_create            | OFF                                                          |
        | engine_condition_pushdown       | OFF                                                          |
        | expire_logs_days                | 0                                                            |
        | flush                           | OFF                                                          |
        | flush_time                      | 0                                                            |
        | ft_boolean_syntax               | + -                                                          |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
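
    One rough way to size RAM for a mostly-read workload is to see how much of the table's data and indexes would need to fit in memory on the smaller box; a hedged sketch, with a hypothetical schema name standing in for the real database:

        mysql -e "SELECT table_name,
                         ROUND((data_length + index_length)/1024/1024) AS total_mb,
                         ROUND(index_length/1024/1024)                 AS index_mb
                  FROM information_schema.tables
                  WHERE table_schema = 'mydb';"

    If the indexes alone are far larger than the 360 MB the small VPS offers, RAM is very likely the difference; if they would comfortably fit, shared-host CPU and disk contention become the more plausible suspects.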

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, with mixing different models of servers from the same vendor in one SQL 2005 cluster. Suppose I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate the memory caps on each instance, so that they won't page or step on one another. I get the fact that the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for? More info 10/28: doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (unhappy situation). Many ugly out-of-memory messages in the error log; crashing, burning... It's an extreme case, but good to know. It seems, then, that it would only be really safe to set this on startup of the instance, as in having a startup script that says "I am on node 1, so my RAM settings are X" or "I am on node 2, so they are Y", like this: http://sqlblog.com/blogs/aaron_bertrand... Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
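
    For completeness, the knob such a startup script would turn is sp_configure's max server memory, set to a per-node value decided by the script; a hedged sketch with an illustrative 4096 MB cap for the smaller node (the instance name and size are assumptions, not a recommendation):

        rem run from the instance's startup job on the lesser node
        sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"

    Because the change takes effect without a restart, it is also the same mechanism the CLR monitor idea would use, with the caveat from the experiment above: shrink the cap only when the instance is not already under memory pressure.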

    Read the article

  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message: gpg: using subkey XXXX instead of primary key YYYY. Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted to my subkey instead of my primary key. For me, this doesn't appear to be a problem; gpg (1.4.x, Mac OS X) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "PGP & GPG" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but at first I assumed that was about public vs. private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key and subkey had different "usage" notes: the primary shows "usage: SCA" and the subkey shows "usage: E". "E" seems likely to mean "Encryption", but I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools and techniques for some years now, so why would this only be a problem for me?
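
    Background that may help frame answers: gpg picking the encryption-capable subkey ("usage: E") is its normal behaviour, while signing and certification stay on the primary key ("usage: SCA" = Sign, Certify, Authenticate). If the collaborator's tooling really does require one specific key, gpg can be forced to use exactly that key by appending ! to its ID; a short sketch with placeholder IDs:

        # show which capabilities each (sub)key carries
        gpg --list-keys --with-colons YYYY
        # encrypt to exactly key YYYY instead of letting gpg choose a subkey
        gpg --encrypt --recipient 'YYYY!' file.txt

    Whether encrypting to a signing-only primary key is actually sensible is a separate question, which is presumably why gpg avoids it by default.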

    Read the article

  • Should I completely turn off swap for a Linux web server?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable state due to huge lags, and I probably will not be able to log in and kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
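
    A middle ground worth mentioning, rather than removing swap entirely, is to keep a small swap partition but tell the kernel to avoid it until memory pressure is real; a minimal sketch:

        # make the kernel strongly prefer reclaiming cache over swapping out processes
        sysctl vm.swappiness=10
        echo 'vm.swappiness = 10' >> /etc/sysctl.conf

    Combined with sensible per-service memory limits, this keeps the "sshd pushed to swap" scenario unlikely while still leaving the kernel an escape hatch.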

    Read the article

  • What can cause the system to freeze in a way where even the reset button takes a long time to react?

    - by ThiefMaster
    What can be the reason for system freezes that are so "hard" that even the hardware reset button takes about 3 seconds until it actually resets the system (and then it actually powers down and up again instead of doing a "clean" hard reset like when pressing it on a normally running system)? Since it initially happened mainly while playing videos from YouTube, I suspected the graphics card; however, I replaced it recently and that did not change anything. It still happens from time to time (and sometimes more often, like a few times in the last few hours). The system is running Windows 7, but I don't think this matters, since I don't think any software, not even the OS, can actually affect the reset button's behaviour. The PC is not overheated and the freezes happen randomly. There is also no malware on the system. The CPU is an Intel Core i7-920 on a Gigabyte EX58-UD5 mainboard. What could be the cause of this problem? Faulty RAM? I have not run a full memtest86 check yet, but I wonder if there is a more likely issue than faulty RAM - checking 12 GB of RAM does take some time, after all! There are no entries in the event log, but that's what I expected, since the system freezes so hard that I doubt it has time to write anything to any log.

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer? Old computer: Intel Core 2 Duo E8400 (Wolfdale) @ 3.0 GHz (stock), 4 GB OCZ DDR2 800 RAM, nVidia GeForce 8600 GT. New computer: Intel Core i7 920 @ ~3.2 GHz, 6 GB OCZ DDR3 1066 RAM, EVGA X58 SLI LE motherboard, nVidia GeForce GTX 275. Vista x64 Home Premium on both. "Runs slower" is defined as: poorer FPS performance in the same games and applications; takes longer to start up; general desktop usage (checking email, opening files, running exes) is noticeably slower. At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz; beyond that I have no idea. I've been reading about RAM timings and voltages and the like, but I really have no idea about that stuff. I'm a psychologist with a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question or not appropriate for SO. I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Paged pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit, on a C2D E6320 with 2 GB RAM. When I am playing StarCraft 2, I encounter an error saying my system is running low on paged pool memory. StarCraft's graphics settings suggested high settings for me, so I do not think the problem is my graphics card but my RAM. I then searched for a way to rectify the problem; apparently it has something to do with my virtual memory. I proceeded to try the suggested solution, which is to edit the registry and set the paged pool memory to 384 MB. However, having done so, I still could not achieve it. I've seen screenshots of Windows XP with 2 GB having 384 MB of paged pool memory. My default settings put it at 195 MB, whereas when I try to increase the pool limit, it can only go to a max of 229 MB. I tried increasing my RAM capacity to 3 GB, but the pool limit still remains. I'd like to know how to increase my paged pool memory. I've tried searching for a solution, but to no avail, other than the one I've mentioned above (which didn't solve my problem completely).
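
    The registry value usually cited for this is PagedPoolSize under Session Manager\Memory Management; setting it to 0xFFFFFFFF asks 32-bit Windows to give the paged pool the largest size it can, at the cost of kernel address space used for other purposes (which is why the ceiling stays well below what 64-bit systems allow). A hedged sketch; back up the key first, and a reboot is required:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v PagedPoolSize /t REG_DWORD /d 0xffffffff /f

    Whether this lifts the 229 MB you are seeing depends on how much kernel address space the rest of the configuration already consumes.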

    Read the article

  • Verizon overbilling me?

    - by bek
    I have a Verizon air card for my internet, managed with VZAccess Manager. I have had it for about a year, keeping within my usage allowance the whole time until 3 months ago, when my bill came in and my 5,000 allowance had run up to 18,000. It has continued to do so through the last 3 months. My usage reset itself yesterday for the new month. Yesterday I got on Google for about 5 minutes and chatted online for about an hour. Mind you, I have never done anything any different than I do every day: I do not download music or watch videos on my computer, nothing like that. Well, since last night at 8 pm my usage says 1109.525. ALREADY! For being on the internet for an hour with no downloads. What could be causing this? Please throw me some ideas. Verizon is checking on the problem, but that usually doesn't get me the answer I want. Could someone be hacking my card and using the internet through it? I have been told that that is not possible; however, I think with the internet anything is possible.

    Read the article

  • Growing a small hosting company [closed]

    - by user2353007
    We currently have a few servers: 1 WHM VPS (2 GB), 1 MS SQL VPS (2 GB), and 1 IIS VPS (2 GB). The VPS servers are doing fine as far as uptime and response times, but we would like to add the following features: 1) monitoring with load statistics, and 2) failover. For the monitoring side I have looked at Zabbix, Zenoss, Nagios, and a couple of cloud solutions like monitor.us and Watchdog from Zerigo. Our current hosting company suggested we get a dedicated server or VPS and install load-balancing software (not sure I like that idea). I've looked into Rackspace and Amazon load balancers, which seem like the most feasible solutions for load balancing. Does anybody have any input on the monitoring and load-balancing products I'm looking into? Monitoring should cover uptime as well as give reports on memory usage, disk usage, processor usage, and which processes/websites/users are responsible for the load. It would be ideal if the load balancer worked with any IP; I'm not sure whether either the Rackspace or Amazon load balancers would allow load balancing with servers outside their datacenter. Thank you.

    Read the article

  • VTD-XML Parsing Performance (speed critical factor). Requesting Feedback/Comments

    - by andreas
    Hello, I am about to use VTD-XML (found at http://vtd-xml.sourceforge.net/), but I am interested in getting real-world usage feedback from anyone who has used the library and has any comments. At that URL there are benchmarks, but if someone has used VTD-XML and has comments FOR it, I would like to hear them. Speed is a critical factor in the application, and comments from developers after real-world usage are what I am looking for. Regards,

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type:

        java -jar saxon9he.jar

    I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions:

        >java -jar saxon9he.jar
        No source file name
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                    Use xml-stylesheet PI, not -xsl argument
          -c:filename           Use compiled stylesheet from file
          -config:filename      Use configuration file
          -cr:classname         Use collection URI resolver class
          -dtd:on|off           Validate using DTD
          -expand:on|off        Expand defaults defined in schema/DTD
          -explain[:filename]   Display compiled expression tree
          -ext:on|off           Allow|Disallow external Java functions
          -im:modename          Initial mode
          -ief:class;class;...  List of integrated extension functions
          -it:template          Initial template
          -l:on|off             Line numbering for source document
          -m:classname          Use message receiver class
          -now:dateTime         Set currentDateTime
          -o:filename           Output file or directory
          -opt:0..10            Set optimization level (0=none, 10=max)
          -or:classname         Use OutputURIResolver class
          -outval:recover|fatal Handling of validation errors on result document
          -p:on|off             Recognize URI query parameters
          -r:classname          Use URIResolver class
          -repeat:N             Repeat N times for performance measurement
          -s:filename           Initial source document
          -sa                   Use schema-aware processing
          -strip:all|none|ignorable   Strip whitespace text nodes
          -t                    Display version and timing information
          -T[:classname]        Use TraceListener class
          -TJ                   Trace calls to external Java functions
          -tree:tiny|linked     Select tree model
          -traceout:file|#null  Destination for fn:trace() output
          -u                    Names are URLs not filenames
          -val:strict|lax       Validate using schema
          -versionmsg:on|off    Warn when using XSLT 1.0 stylesheet
          -warnings:silent|recover|fatal   Handling of recoverable errors
          -x:classname          Use specified SAX parser for source file
          -xi:on|off            Expand XInclude on all documents
          -xmlversion:1.0|1.1   Version of XML to be handled
          -xsd:file;file..      Additional schema documents to be loaded
          -xsdversion:1.0|1.1   Version of XML Schema to be used
          -xsiloc:on|off        Take note of xsi:schemaLocation
          -xsl:filename         Stylesheet file
          -y:classname          Use specified SAX parser for stylesheet
          --feature:value       Set configuration feature (see FeatureKeys)
          -?                    Display this message
          param=value           Set stylesheet string parameter
          +param=filename       Set stylesheet document parameter
          ?param=expression     Set stylesheet parameter using XPath
          !option=value         Set serialization option

        >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq"
        Unknown option -q:..\w3xQueryTut.xq
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                    Use xml-stylesheet PI, not -xsl argument
          // etc...

        >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq"
        Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query
        Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query
        // etc...
        Could not find the main class: net.sf.saxon.Query. Program will exit.

    I'm probably making some stupid mistake. Do you know what it could be?
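
    A likely explanation of the symptoms above: with -jar, Java runs the main class named in the jar's manifest, which for Saxon is the XSLT entry point (net.sf.saxon.Transform), so -q is reported as unknown; and the plain java net.sf.saxon.Query attempt failed only because the jar was not on the classpath. A sketch of the usual XQuery invocation, assuming saxon9he.jar sits in the current directory:

        java -cp saxon9he.jar net.sf.saxon.Query -q:..\w3xQueryTut.xq

    On Linux or macOS the classpath separator and path syntax differ, but the same -cp plus net.sf.saxon.Query pattern applies.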

    Read the article
