Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Linux memory fragmentation

    - by Raghu
    Hi all, is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support -- are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways (not just CLI commands per se; any program or theoretical background would do) to look at it.
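
    For what it's worth, /proc/buddyinfo is easier to reason about once you total the free blocks per order. A minimal sketch (field layout assumed from a typical 2.6-era kernel, 4 KiB pages): lots of free memory concentrated in the low orders is the classic sign of external fragmentation, which is exactly what starves huge-page allocations.

        # Summarize /proc/buddyinfo: free blocks per order, per zone.
        # Many free blocks only at low orders suggests external fragmentation.
        with open('/proc/buddyinfo') as f:
            for line in f:
                parts = line.split()
                node, zone = parts[1].rstrip(','), parts[3]
                counts = [int(c) for c in parts[4:]]        # index = order
                free_kib = sum(c * (2 ** o) * 4 for o, c in enumerate(counts))
                print(f"node {node} zone {zone:8} orders {counts} free ~{free_kib} KiB")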

    Read the article

  • Intel P6100 CPU and Mobile Intel® HM55 Express Chipset

    - by Christopher Painter
    I have an Asus K52F-BBR5 notebook that uses an Intel P6100 (2 GHz, 15x multiplier) and the HM55 Express Chipset. I'm looking to replace its 3 GB of RAM with 8 GB. The Crucial database seems to indicate that both PC3-8500 CAS 7 and PC3-10666 CAS 9 will work. I'm not up to date on the latest DDR3 nomenclature and I'm wondering which would provide better performance; the price difference is negligible. Drawing on past experiences from many, many years ago, I could make an argument for either based on sync/async bus speed arguments and CAS latency differences, but the truth is I don't know enough about the HM55 chipset to know which would be the correct choice. Does anyone know the answer, or can anyone point me to information that would help me make the choice? I'm pretty sure the performance difference will also be somewhat negligible, but I'd still like to make the optimal choice.
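
    The first-word-latency side of that argument can be put in numbers. A back-of-the-envelope comparison, assuming the standard JEDEC clocks for each grade (PC3-8500 = DDR3-1066 at a 533 MHz command clock, PC3-10666 = DDR3-1333 at 667 MHz):

        # CAS latency in nanoseconds = CAS cycles / command clock.
        modules = {
            "PC3-8500  CAS 7": (7, 533e6),   # DDR3-1066
            "PC3-10666 CAS 9": (9, 667e6),   # DDR3-1333
        }
        for name, (cas, clock) in modules.items():
            print(f"{name}: {cas / clock * 1e9:.1f} ns first-word latency")
        # ~13.1 ns vs ~13.5 ns -- essentially identical latency, while the
        # DDR3-1333 part offers ~25% more raw bandwidth, *if* the P6100/HM55
        # memory controller actually clocks it that high rather than
        # downclocking it to its supported speed.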

    Read the article

  • Oracle on NFS vmdk beats native NFS!?

    - by fletch00
    Hi, my colleagues are pursuing this with NetApp and Oracle, but I thought I'd post here on the off chance someone else has seen this. We have a Red Hat 5 VM (fully up2date) running Oracle 11i with data disks mounted via the VM's Linux kernel NFS client using Oracle's recommended mount options, and the performance is very inconsistent (queries that should take < 2 seconds sometimes take 60 seconds). The funny thing is, we can run the same queries perfectly consistently in < 2 seconds on a VMDK residing on the SAME NetApp NFS datastore! Makes me wish Oracle and NetApp collaborated as closely as VMware and NetApp did on the Virtual Storage Console, which we used to set the NFS options perfectly and keep them in compliance... We have tried a few Linux NFS options others have posted and have not seen improvement so far. We are now creating VMDKs for the VM to replace the Linux NFS mounts and work around the issue, as our developers need consistent performance ASAP.
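
    One thing worth ruling out early in a case like this is a mount-option mismatch between what you intend and what the kernel actually negotiated. A small sketch along these lines diffs the live options against an expected set (the EXPECTED list below is only an assumption based on commonly cited Oracle-on-Linux NFS options; substitute the list from your own Oracle/NetApp documentation):

        # Compare live NFS mount options against an expected set.
        # EXPECTED is illustrative -- use the options from your vendor docs.
        EXPECTED = {"rw", "bg", "hard", "nointr", "tcp", "vers=3",
                    "timeo=600", "rsize=32768", "wsize=32768", "actimeo=0"}
        with open('/proc/mounts') as f:
            for line in f:
                dev, mnt, fstype, opts = line.split()[:4]
                if fstype.startswith('nfs'):
                    have = set(opts.split(','))
                    print(f"{mnt}: missing {sorted(EXPECTED - have)}")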

    Read the article

  • How to use router QoS?

    - by Nathaniel
    N00b question. How exactly does one use router quality-of-service settings? I've read up on it a bit, but I'm still not exactly sure how to use it. So, my real questions are these: Generally, how does QoS work? How would one use it, say, to guarantee smooth performance in a latency-sensitive application (cough online gaming cough)? Performance for that sort of stuff bombs out on our connection when somebody is uploading files. I apologize if this is kind of sprawling. Suggestions to clean it up / edits welcome.
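
    At heart, most QoS schemes boil down to two steps: classify packets into queues, then let a scheduler drain the high-priority queue first, so a bulk upload can no longer add queueing delay to game packets. A toy strict-priority scheduler (purely illustrative; not any actual router's algorithm) shows the idea:

        from collections import deque

        # Toy strict-priority scheduler: class 0 (e.g. game traffic) always
        # drains before class 1 (e.g. bulk uploads), so uploads can't starve
        # latency-sensitive packets of link time.
        queues = [deque(), deque()]

        def enqueue(packet, prio):
            queues[prio].append(packet)

        def dequeue():
            for q in queues:             # lowest index = highest priority
                if q:
                    return q.popleft()
            return None

        enqueue("bulk-1", 1); enqueue("bulk-2", 1); enqueue("game-1", 0)
        print(dequeue())                 # -> game-1, despite arriving last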

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
      - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
      - If each data set has an explicit size or size limit, the ability to grow and shrink that limit (offline if need be).
      - Avoids out-of-kernel modules.
      - R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
      - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required, but nice-to-haves:
      - Transparent compression, per-file or per-subtree. Even better if, like NetApp's, it analyzes the data first for compressibility and only compresses compressible data.
      - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
      - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
      - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
      - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
      - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
      - Write IOPS do not scale with the number of devices in a raidz vdev.
      - Cannot add disks to raidz vdevs.
      - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition per disk.
    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute onto new spindles RAID-type logical volumes (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
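
    To make the trade-offs in the first requirement concrete, here is a rough calculator (illustrative only; it ignores filesystem and metadata overhead) for usable capacity and failure tolerance of each layout across the three disks:

        # Usable capacity and disk-failure tolerance for n identical disks.
        # Simplified model: no filesystem/metadata overhead accounted for.
        def layout(n, size_tb, level):
            if level == "RAID0":
                return n * size_tb, 0
            if level == "RAID1":
                return size_tb, n - 1
            if level == "RAID5":
                return (n - 1) * size_tb, 1
            raise ValueError(level)

        for lvl in ("RAID0", "RAID1", "RAID5"):
            usable, tol = layout(3, 1.0, lvl)
            print(f"3x1TB {lvl}: {usable:.1f} TB usable, survives {tol} failure(s)")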

    Read the article

  • How to add NVIDIA drivers after a previous failure with Linux Mint?

    - by LessThanMe
    Before today, I had perfectly good drivers from NVIDIA for my Linux Mint (15) box. I decided to update them because my performance in TF2 was less than stellar, and then things went south. I used Synaptic to install nvidia-331 and then rebooted, but when I selected Mint in GRUB I waited... and waited... and waited. Nothing happened, but the display stayed on (a completely black video signal was being output). So I went into recovery mode from GRUB, went to root access, apt-get remove --purge nvidia*'d my way out of that mess, and installed nvidia-common. Now my performance in graphics-intensive stuff (read: games, Blender) sucks, so I've been through the same thing a few times trying to re-install nvidia-current. I just want to get it back to how it was. Thanks for any help! NVIDIA GTX 560

    Read the article

  • High-load MySQL on a Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory running apache2, memcached and nginx. Memory load is always at maximum, with only about 500 MB free, and most of the memory is used by MySQL. Apache is configured for only 70 clients, and the other services have small memory footprints. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why they are InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone = 'Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert   = ALWAYS
        wait_timeout        = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache      = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections    = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size  = 210M
        #on query cache
        query_cache_type  = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin   = /var/log/mysql/mysql-bin.log
        server-id           = 1
        log-bin             = /var/lib/mysql/mysql-bin
        #replicate-do-db    = gate
        log-bin-index       = /var/lib/mysql/mysql-bin.index
        log-error           = /var/lib/mysql/mysql-bin.err
        relay-log           = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index     = /var/lib/mysql/relay-bin.index
        binlog_do_db        = 24avia
        expire_logs_days    = 10
        max_binlog_size     = 100M
        read_buffer_size    = 4024288
        innodb_buffer_pool_size        = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency      = 8
        table_definition_cache         = 2000
        group_concat_max_len           = 16M
        #binlog_do_db     = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer      = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please, help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory: MySQL stops every day, and as you can see, 24 GB of cache memory is free. Thanks to Michael Hampton for the correction. The load average on the server is 3.5. Maybe the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they marked everything green. mysqltuner output:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    Tuning primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
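
    The arithmetic behind those memory warnings is worth doing by hand: worst-case usage is roughly the global buffers plus the per-connection buffers times max_connections. A rough check with the values from this config (the usual approximation, not an exact accounting of what mysqld allocates):

        # Rough worst-case MySQL memory estimate from the config above.
        global_mb = 2024 + 210 + 5000   # key_buffer + query_cache + innodb_buffer_pool
        per_thread_mb = 64 + 5 + 8 + 4  # sort + join + thread_stack + read_buffer (~4 MB)
        max_connections = 200
        worst = global_mb + per_thread_mb * max_connections
        print(f"~{worst / 1024:.1f} GB worst case")   # ~22.9 GB on a 32 GB box
        # On top of that, every in-memory temp table can grow up to
        # tmp_table_size (3000M here) -- a few concurrent GROUP BY queries
        # can blow well past the 24 GB budget.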

    Read the article

  • Changing an MSSQL clustered index field from containing "random" GUIDs to sequential GUIDs - how will this affect existing installations?

    - by Eyvind
    We have an MSSQL database in which all the primary keys are GUIDs (uniqueidentifiers). The GUIDs are produced on the client (in C#), and we are considering changing the client to generate sequential (comb) GUIDs instead of just using Guid.NewGuid(), to improve db performance. If we do this, how will this affect installations that already have data with "random" GUIDs as clustered PKs? Can anything be done (short of changing all the PK values) to rebuild the indexes to avoid further fragmentation and bad insert performance? Please give explicit and detailed answers if you can; I am a C# developer at heart and not all too familiar with all the intricacies of SQL Server. Thanks!
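
    For illustration, a comb GUID keeps the random part but overwrites the last six bytes with a big-endian timestamp, because SQL Server's uniqueidentifier ordering compares those bytes first, so later values sort later and inserts land at the end of the clustered index. A minimal sketch of the idea (modeled on the well-known Jimmy Nilsson/NHibernate comb technique, here in Python rather than the client's C#; verify the byte ordering against your SQL Server version):

        import os, struct, time, uuid

        def comb_guid() -> uuid.UUID:
            """Random GUID whose last 6 bytes are a big-endian millisecond
            timestamp, so later GUIDs sort later under SQL Server's
            uniqueidentifier ordering."""
            raw = bytearray(os.urandom(16))
            millis = int(time.time() * 1000)
            raw[10:16] = struct.pack(">Q", millis)[2:]  # low 6 bytes of timestamp
            return uuid.UUID(bytes=bytes(raw))

        print(comb_guid())

    Existing rows keep their random GUIDs, so the index stays fragmented until you rebuild it once; after that, new comb inserts append instead of splitting pages.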

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have a PostgreSQL server running on a Windows Server 2008 R2 x64 box, and I have scheduled a backup every day from the RAID 1 DB disk to a dedicated standalone disk. The disks are SAS 15k on a Dell PERC 6i. I am using the built-in Windows Server Backup for this purpose. The problem is, whenever the backup process kicks in, database performance is hogged; I would say almost a 10x performance reduction. From Resource Monitor, the disk queue is in the double-digit range when backing up, and less than 1 during the day. Disk activity is around 30-50 MB/s during backup, so I guess the hardware is acting normally, though wbengine.exe takes up most of it. I think reducing the I/O priority of the backup process would be the answer, but I couldn't find a way to do it. Tuning the process's CPU priority does not seem to help.

    Read the article

  • NetApp fragmentation

    - by mdpc
    We all know that once a disk (or storage system, for that matter) gets introduced into use, performance degrades due to fragmentation of files. This seems to be why disk defragmenters are in fairly wide use on Windows boxes, and they do increase performance substantially. As an aside, I haven't heard of many defragmenters in the Unix/Linux area. Despite the claimed WAFL protections on the NetApp, file fragmentation will still occur, especially with all the sparsely created VMs. My question is: does anybody do any sort of defragmentation on such a storage system? Do you notice any measurable degradation/improvement from doing or not doing anything to address this situation? Does anybody do anything about it? If so, what? Thanks

    Read the article

  • Cheapest server per gigabit throughput [closed]

    - by nethgirb
    I'm looking for a set of servers for performance testing a network, and secondarily testing some applications on the servers. Their most important task is simply to pump out data: from an application like memcached or just dumped from a large file in memory into a TCP flow (i.e., disk performance doesn't matter). This should happen over one or more 1 gigabit Ethernet ports, and the machines should run Linux (ideally), or perhaps Mac OS X or some other *nix. Other than that, there are few constraints (e.g., even something ARM-based could be fine). So here's the question: What's the cheapest server per gigabit? Price and power are both considerations.
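
    For the "pump out data" part, you barely need an application at all; a few lines of Python can saturate a GigE port from memory. A throwaway sketch (the host/port/duration arguments are whatever you point it at; on the receiving end something like nc -l PORT > /dev/null will sink the stream):

        import socket, sys, time

        # Throwaway TCP source: stream a 1 MiB in-memory buffer to host:port
        # and report the achieved rate. Usage: python blast.py HOST PORT SECONDS
        host, port, secs = sys.argv[1], int(sys.argv[2]), float(sys.argv[3])
        buf = b"\x00" * (1 << 20)
        sent, deadline = 0, time.time() + secs
        with socket.create_connection((host, port)) as s:
            while time.time() < deadline:
                s.sendall(buf)
                sent += len(buf)
        print(f"{sent * 8 / secs / 1e9:.2f} Gbit/s")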

    Read the article

  • Should my servers boot from VHD?

    - by tony roth
    I've been testing native VHD boot on several servers. It seems to be pretty transparent in terms of deployment, and in my seat-of-the-pants testing I have not noticed any difference in performance. The main reason I want to boot from VHD is the transportability between different hardware and to Hyper-V servers. The following roles will be installed: dfsr, dhcp, iis, application server, dc <- haven't tested this yet but see no reason why it won't work. With the above low-impact (in terms of performance) roles, do you think booting from VHD is appropriate? thx

    Read the article

  • What is the best file system and allocation size for a USB flash drive?

    - by e-t172
    I'm considering using my 4 GB Kingston DataTraveler USB stick to store my Firefox and Thunderbird profiles for my laptop and desktop PCs. I want to maximize performance when using Firefox. The question is: what is the best file system and allocation size for the fastest Firefox profile operation on a USB flash drive? I'm using Windows 7 on both machines and I don't care about compatibility or the drive's lifetime; I just want to maximize performance. I could even use ext2 with the Ext2 IFS driver if that means it'll be faster. I'm assuming (perhaps I'm wrong) that putting a Firefox profile on a USB stick would be a "lots of small files" usage. In that case, it seems that NTFS would perform best, but I'm not sure. Besides, I found nothing regarding the best allocation size to use. Considering that the default allocation size is designed for hard drives (which have different characteristics), I'm assuming that the default is not the best.

    Read the article

  • Why is the size of an antivirus greater than that of an anti-malware tool? [on hold]

    - by Mistu4u
    Recently my computer was attacked by different kinds of worms and slowed down, so I tried to remove them by installing Avast Free Antivirus. The worms were copying themselves rapidly. After installing Avast, I observed that it only blocked new copies of the worms from being created; it could not delete the already-created worms, and it could not even find a good number of them. Then I downloaded Malwarebytes Anti-Malware and, to my surprise, found its service was way better than Avast's. It detected and deleted almost 2065 worms and malware items from my computer, and now my computer is doing fine. As far as I know, anti-malware functionality is also included in antivirus software, but even so its performance is poor. Now my question is: if antivirus performance is meant to be poorer than that of anti-malware tools, then why is the size of Avast 179 MB while the size of Malwarebytes is 9.81 MB?

    Read the article

  • Differencing disk opinions

    - by troth
    I've read about the performance issues with differencing disks, but I still think there is a solid place for them, and that's the OS boot partition. If I'm going to have 20 VMs on a CSV-based volume, I don't want to waste the 20+ GB per guest just for the OS boot. If I get a good base disk with all of the most-used applications installed and have the pagefile located somewhere else, I don't think the deltas would be that great, so it should not create a performance issue. Also, with SAN-based CSV volumes, does it make any sense to have the pagefile go to a separate CSV volume? Any opinions on this? thanks

    Read the article

  • Video memory bus width vs. video memory bandwidth

    - by Mixxiphoid
    My current video card (9600GT) is dying and I'm searching for a new one. Between acquiring my current card and now, I got a lot more knowledge about hardware, and I want to use that to pick my new card. So I decided not to just buy some popular card blindly, but to search for a card able to handle my hardware requirements. I searched the specs on the NVIDIA site for the GT 640 and was confused by the memory section, and some questions arose. My current card's memory bus width is 256-bit and it has 1 GB of memory. I checked Google about the importance of bus width, and all the links basically said the same thing: "the higher the number, the more traffic can potentially be transferred simultaneously". This was already clear to me, yet there are currently a lot of new cards considered better than my current one that have a lower bus width. To go into more detail, I copied the memory info from the NVIDIA site:

        GT 640 Memory Specs          DDR3 version    GDDR5 version
        Memory Clock                 1.8 Gbps        5.0 Gbps
        Standard Memory Config       2048 MB         1024 MB
        Memory Interface             DDR3            GDDR5
        Memory Interface Width       128-bit         64-bit
        Memory Bandwidth (GB/sec)    28.5            40.0

    What puzzled me is that memory bandwidth seems to be the most important part, yet the lower bus width has the higher "performance". Is this because the GDDR5 memory interface allows a higher memory clock speed (5 Gbps)? If I am to buy a new video card, should I check the bus width? Memory clock? Bandwidth? Amount of memory? My current card has 1 GB of memory, so I was searching for a 2 GB card, but now I'm not so sure any more whether that is really "better". My main question: to me it seems that memory performance is made up of the combination of bus width and frequency. Is this true? If yes, why are there so many sites telling me I need to get a card with a high bus width? If no, then what IS important when it comes to memory performance on a video card? NOTE: the memory bandwidth is (almost) never displayed on vendor sites. How can I determine which card is better without knowing the bandwidth?
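
    The two numbers are related by a simple formula: peak bandwidth = (interface width in bytes) x (effective data rate per pin). Plugging in the figures from the table above shows why the narrower card wins (a quick check, assuming the quoted Gbps is the effective per-pin data rate):

        # Peak memory bandwidth = bus width (bytes) * effective data rate per pin.
        cards = {
            "GT 640 DDR3 ": (128, 1.8),   # bus width (bits), data rate (Gbps/pin)
            "GT 640 GDDR5": (64, 5.0),
        }
        for name, (bits, gbps) in cards.items():
            print(f"{name}: {bits / 8 * gbps:.1f} GB/s")
        # 128-bit * 1.8 Gbps -> 28.8 GB/s; 64-bit * 5.0 Gbps -> 40.0 GB/s:
        # the faster GDDR5 clock more than makes up for the narrower bus.

    So it is the product that matters, not the bus width alone; when the bandwidth figure is missing from a vendor page, you can reconstruct it from width and memory clock this way.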

    Read the article

  • W3 Total Cache or WP Super Cache?

    - by javipas
    I'm just preparing the setup of a new VPS where I will migrate a WordPress blog with good traffic (currently around 40k pageviews a day), and I was thinking about the caching strategy. I've found different ideas and recommendations, but from previous experience I will set up a Nginx+PHP-FPM+MySQL (LEMP) stack on a Linode VPS. I've also read about setting up Nginx as a reverse proxy in front of Apache, and even using Varnish too, but I don't know if all of this would benefit the speed/performance of the blog (which is the only thing that will be installed on the VPS). The question now is... would you recommend W3 Total Cache or WP Super Cache? I've used W3 on some blogs, but I haven't noticed great benefits and don't need all its options, so I think I could give the veteran WP Super Cache a try. Besides, some users have complained about W3's complex configuration and lack of performance (even consuming more CPU) in some cases.

    Read the article

  • 64-bit on Core i5 with 2GB DDR3 RAM?

    - by Jacques
    Core i5 2.3 GHz processor, 512 MB ATI HD 4570, 2 GB 1333 MHz RAM, 64-bit Windows 7 Home Premium. Should I "down-grade" to 32-bit? Does running 64-bit with only 2 GB RAM make the laptop weaker than running 32-bit with 2 GB RAM, or is the performance pretty much the same? Is there any performance benefit to running 64-bit with only 2 GB RAM? Is there an impact on battery life between 64-bit and 32-bit? Should I maybe just add another 2 GB RAM? Thanks.

    Read the article

  • Storing large amounts of small files into bigger files on Windows

    - by asmo
    Let's say I have 50 GiB of files that weigh around 500 KiB each. My guess is that having, for example, 5 large files of 10 GiB each with the same content archived in them would be better for hard drive performance. Am I correct? Will there be a noticeable gain on an NTFS filesystem? Finally, which tool could I use to group the files together while retaining the ability to modify the contents of the archive with zero or minor performance loss? For example, I like TrueCrypt archiving because after mounting an archive file, it creates a drive which I can use seamlessly as if it were a normal drive. The only thing with TrueCrypt is that I don't need encryption/compression, only archiving.

    Read the article

  • Is there a network "tee"-alike with one leg returning to /dev/null?

    - by Steff Davies
    I've just built a new PostgreSQL server for my employers, which is happily replicating using WALs. I'm now left with the problem of verifying its performance. One nice way which came up in conversation is to break replication with the slave caught up and then direct all production traffic to both servers, discarding the responses from the new server and returning those from the current one to the clients. Once we're sure performance is OK, we re-sync the slave and can fail over with confidence. Bliss. This would require a TCP proxy capable of opening two outgoing connections for each incoming one, and discarding the data returned from one of them, which is a tricky thing to google for, it seems. Do the assembled brains know of such a thing, before I dive into libevent and write one?
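
    Absent a ready-made tool, the core of such a proxy is small. A threaded sketch of the idea (illustrative only: the PRIMARY/SHADOW/LISTEN addresses are placeholders, there is no error handling, and the shadow's responses are read and discarded so its socket buffers don't fill and stall it):

        import socket, threading

        # Duplicate each inbound connection to a primary and a shadow backend.
        # Primary's responses go back to the client; shadow's are discarded.
        PRIMARY, SHADOW, LISTEN = ("10.0.0.1", 5432), ("10.0.0.2", 5432), ("", 15432)

        def pump(src, dsts):
            while data := src.recv(65536):
                for d in dsts:
                    d.sendall(data)

        def drain(sock):                   # read and discard shadow replies
            while sock.recv(65536):
                pass

        def handle(client):
            prim = socket.create_connection(PRIMARY)
            shad = socket.create_connection(SHADOW)
            threading.Thread(target=pump, args=(client, [prim, shad]), daemon=True).start()
            threading.Thread(target=drain, args=(shad,), daemon=True).start()
            pump(prim, [client])           # primary -> client in this thread

        srv = socket.socket()
        srv.bind(LISTEN); srv.listen()
        while True:
            threading.Thread(target=handle, args=(srv.accept()[0],), daemon=True).start()

    One caveat for a stateful protocol like PostgreSQL's: this only works while both servers accept the identical byte stream. The moment their states diverge (different auth, different prepared-statement state), the shadow leg breaks, so it measures load rather than guaranteeing identical behavior.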

    Read the article

  • GRUB Error after Deleting Linux Partition

    - by Nironan12
    I was dual-booting Windows 7 and Windows Vista, each taking up half of my hard drive. In Windows 7 I used EASEUS Partition Manager to shrink my Windows 7 volume by 8 GB. On the unallocated space, I installed Linux Mint 8 RC1. After a little bit of playing around with it, I booted into Windows 7, used EPM again and deleted the 8 GB Linux partition, then extended Windows 7 onto the 8 GB. After restarting my computer, all I get is a black screen and this:

        GRUB loading.
        error: no such partition
        grub rescue>

    I do not have a Windows 7 disk, nor does my computer come with Startup Repair. What do I do?

    Read the article

  • A list of 'best practices' for extending the life between charges of a notebook battery.

    - by Tim Visher
    Hello everyone, I'd like to compile a list of best practices for getting the most out of a single charge of a typical notebook battery (be it Li-Ion or Li-Poly). Sources would be great as well. I've heard, for instance, that the best things to do to improve battery performance (not the total lifetime of the battery, just single-charge performance) are, in descending order of effectiveness:
      - Turn your display brightness all the way down.
      - Turn off WiFi.
      - Turn off Bluetooth.
      - Spin down disks when they're not in use.
      - etc…
    I'd like to get sources together for these and other tips for extending life-between-charges for any battery on any notebook (as these really are all about demand management rather than lifetime extension). Thanks!

    Read the article

  • Provisioning storage in a SAN environment

    - by wildchild
    Hi, can somebody help me understand how LUN provisioning is done on a CLARiiON based on client requirements? Say a client needs one 50 GB LUN on the host (for an application) and two 350 GB LUNs (for the DB and related stuff) on the same host, and I have 4 RAID groups, with 200 GB free in one, 50 GB in the second, and 400 GB in each of the remaining two, all built from 5 disks. And I'm using RAID 5 only. How do I provision with performance in mind (so that there is no performance hit)? I also need to understand the concept of the meta head. Please help me by elaborating on the above scenario. I am learning to provision storage in a non-production environment (before going ahead and working on a client's production server) and I have been learning a lot from SF... I request the seniors here to help me understand it better. Thanks for reading.

    Read the article
