Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.

Page 113/258 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • High Sqlservr.exe Memory Usage

    - by user18576
    I have a problem with sqlservr.exe (SQL Server 2008): it uses too much memory. Windows Task Manager shows sqlservr.exe using about 8 GB of RAM, and I don't know how to fix it. I got the following metrics for the server using Perfmon:

        SQLServer:Buffer Manager       Buffer cache hit ratio        13
        SQLServer:Buffer Manager       Page lookups/sec              46026128096
        SQLServer:Buffer Manager       Free pages                    129295
        SQLServer:Buffer Manager       Total pages                   997309
        SQLServer:Buffer Manager       Target pages                  1053560
        SQLServer:Buffer Manager       Database pages                484117
        SQLServer:Buffer Manager       Reserved pages                0
        SQLServer:Buffer Manager       Stolen pages                  383897
        SQLServer:Buffer Manager       Lazy writes/sec               384369
        SQLServer:Buffer Manager       Readahead pages/sec           69315446
        SQLServer:Buffer Manager       Page reads/sec                71280353
        SQLServer:Buffer Manager       Page writes/sec               12408371
        SQLServer:Buffer Manager       Checkpoint pages/sec          7053801
        SQLServer:Buffer Manager       Page life expectancy          735262
        SQLServer:General Statistics   Active Temp Tables            161
        SQLServer:General Statistics   Temp Tables Creation Rate     3131845
        SQLServer:General Statistics   Logins/sec                    2336011
        SQLServer:General Statistics   Logouts/sec                   2335984
        SQLServer:General Statistics   User Connections              27
        SQLServer:General Statistics   Transactions                  0
        SQLServer:Access Methods       Full Scans/sec                34422821
        SQLServer:Access Methods       Range Scans/sec               2027247756
        SQLServer:Access Methods       Workfiles Created/sec         49771600
        SQLServer:Access Methods       Worktables Created/sec        28205828
        SQLServer:Access Methods       Index Searches/sec            4890715219
        SQLServer:Access Methods       FreeSpace Scans/sec           21178928
        SQLServer:Access Methods       FreeSpace Page Fetches/sec    21226653
        SQLServer:Access Methods       Pages Allocated/sec           41483279
        SQLServer:Access Methods       Extents Allocated/sec         4743504
        SQLServer:Access Methods       Extent Deallocations/sec      4806606
        SQLServer:Access Methods       Page Deallocations/sec        41419137
        SQLServer:Access Methods       Page Splits/sec               23834799
        SQLServer:Memory Manager       SQL Cache Memory (KB)         29160
        SQLServer:Memory Manager       Target Server Memory (KB)     8428480
        SQLServer:Memory Manager       Total Server Memory (KB)      7978472

    Could somebody help me, please? I really want to know the cause of the above.
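
    Target Server Memory and Total Server Memory being nearly equal (~8 GB) suggests SQL Server has simply grown its buffer pool to its configured limit, which is its default behaviour rather than a leak. A minimal sketch of the usual remedy, assuming you want to leave headroom for the OS (the 6144 MB figure is purely illustrative, not a recommendation):

        -- cap the buffer pool so the OS keeps some headroom (value is illustrative)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;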

    Read the article

  • Building new computer, turns on, but no POST

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer, and egads. I purchased a new motherboard, power supply, processor, video card, and memory:

        ASUS M4A79XTD EVO AM3 AMD 790X ATX AMD Motherboard
        OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI Ready 80 PLUS Certified Modular Active PFC Power Supply
        AMD Phenom II X4 965 Black Edition Deneb 3.4GHz 4 x 512KB L2 Cache 6MB L3 Cache Socket AM3 125W Quad-Core Processor
        XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 HDCP Ready CrossFireX Support Video Card
        G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel Kit Desktop Memory Model F3-12800CL9D-4GBNQ

    (I originally had links for you, but I lack the rep, sorry!) I've got it all in the tower: I put in the power supply, installed the processor on the motherboard, installed the heatsink, put in the RAM, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable." As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is on. I originally had only the larger power connector plugged into the motherboard (following a YouTube video as well as the mobo instructions), but after doing some research, I plugged in the other ATX power connector too. Trying to power on the computer still results in nothing: no beeps on startup, no POST. Does anyone have any ideas? Your ideas and help are greatly appreciated.

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres support using the following configure:

        ./configure \
          --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu \
          --program-prefix= --prefix=/usr/ --exec-prefix=/usr/ --bindir=/usr/bin/ --sbindir=/usr/sbin/ \
          --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include/ --libdir=/usr/lib64 \
          --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man \
          --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 \
          --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --with-pic --disable-rpath \
          --with-pear --with-pic --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr \
          --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm \
          --with-gettext --without-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib \
          --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets \
          --enable-sysvsem --enable-sysvshm --enable-sysvmsg --with-kerberos --enable-ucd-snmp-hack \
          --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata \
          --with-mime-magic=/usr/share/file/magic --with-apxs2=/usr/sbin/apxs --with-mysql=/usr/include/mysql \
          --without-gd --with-dom=/usr/include/libxml2/libxml --disable-dba --without-unixODBC \
          --disable-pdo --enable-xmlreader --enable-xmlwriter --without-sqlite --without-sqlite3 \
          --disable-phar --enable-fileinfo --enable-json --without-pspell --disable-wddx \
          --with-curl=/usr/include/curl --enable-posix --with-mcrypt --enable-mbstring --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8. I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    And when I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf:
        Cannot load /usr/lib64/httpd/modules/libphp5.so into server:
        /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know this is a problem between Postgres and PHP, because lo_import_with_oid is a function in the Postgres source that allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have any insight into what is causing my problem?
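
    lo_import_with_oid first appeared in the libpq shipped with PostgreSQL 8.4, so this symptom usually means Apache is resolving an older system libpq rather than the 8.4 one PHP was built against. A quick check, as a sketch (the /mnt/mv/pgsql path comes from the configure line above; adjust to your layout):

        # which libpq does the PHP module actually pull in?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq
        # does the 8.4 library export the missing symbol?
        nm -D /mnt/mv/pgsql/lib/libpq.so | grep lo_import_with_oid

    If ldd shows an old /usr/lib64/libpq.so, pointing the runtime linker at the 8.4 copy (e.g. via an /etc/ld.so.conf.d/ entry plus ldconfig) should clear the error.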

    Read the article

  • Process and memory issue on Linux server

    - by zapping
    Need some assistance in analyzing the Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB RAM. When the website on it runs, top displays this:

        PID   USER      PR  NI  VIRT  RES  SHR  S %CPU %MEM   TIME+  COMMAND
        23459 username1  16   0  151m  27m 8388 S 11.3  0.7  0:11.71 php5
        23730 username1  16   0  151m  28m 8388 S 11.3  0.7  0:03.87 php5
        23458 username1  16   0  151m  28m 8388 S  3.0  0.7  0:19.20 php5
        16202 mysql      15   0  459m  38m 4624 S  0.7  1.0 62:33.81 mysqld
        24141 nobody     15   0  311m 5832 2304 S  0.3  0.1  0:00.03 httpd

    Why does the COMMAND column say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows httpd, never php5. The site uses a MySQL DB. The problem is that the server load goes up to about 5.x when the website is accessed by about 16 users. The output of free -m shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    A lot of memory seems to be in cache, and free memory is low. Even when the website is not accessed — leaving the server pretty much idle for about two days — free memory showed just 190. When the site is accessed, free memory dips to about 90 MB, then climbs to about 150 MB; it always hovers around 200 MB. Is this somehow related to the server load reaching 5.x? Will adding more RAM resolve the load issue?
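
    The "-/+ buffers/cache" row is the one to read: only 756 MB is genuinely in use, and the kernel is simply lending the other ~2.7 GB to the disk cache, which it hands back on demand. As a sketch, you can demonstrate that the cached memory is reclaimable (harmless, though it temporarily costs you disk-cache hits):

        # as root
        sync                                   # flush dirty pages first
        echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes

    Low "free" is therefore normal; a load of 5.x on 8 cores points at CPU or I/O time spent in the php5 handlers, not at a RAM shortage.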

    Read the article

  • How To Remove Bottleneck with Squid Caching Proxy

    - by Volomike
    I'm more of a LAMP web developer trying to help the sysop. When I joined the project, I inherited some old PHP spaghetti code. Some of that code goes out to a third-party website (let's call it thirdparty.com) and pulls down content with an HTTP GET request. Unfortunately, the way the code is designed, it needs to do this several times a minute. When we looked at the bottlenecks on the server with 'netstat -a', we saw connections to thirdparty.com constantly running, when the content would be perfectly fine gathered once a day. What I need to know is whether the Squid caching proxy is the solution we need. My guess is that it would let us, in effect, stand in for thirdparty.com on the network: if the web server needs to query thirdparty.com, it hits Squid instead, and Squid then determines whether to serve the content from cache or go to thirdparty.com for fresh content. Is this the solution we need? And second, is it easy to configure it to cache only thirdparty.com requests?
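
    That is exactly Squid's job; the pieces that matter are an ACL restricting what gets cached and a refresh_pattern stretching the lifetime of thirdparty.com responses. A minimal squid.conf sketch, assuming the hostname and the once-a-day freshness described above (values illustrative):

        # cache only thirdparty.com, nothing else
        acl thirdparty dstdomain .thirdparty.com
        cache allow thirdparty
        cache deny all

        # treat thirdparty.com content as fresh for up to a day (1440 minutes),
        # even if the origin sends unhelpful Expires/Cache-Control headers
        refresh_pattern -i thirdparty\.com 1440 100% 1440 override-expire ignore-reload

    The PHP side then just needs its HTTP requests pointed at the proxy (e.g. a CURLOPT_PROXY setting) rather than straight at thirdparty.com.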

    Read the article

  • One nginx rule for lots of subdomains

    - by komase
    I have lots of subdomains on one server. Every subdomain has its own Drupal Boost rules, like in the code below:

        server {
            server_name subdomain1.website.com;
            location / {
                root /var/www/html/subdomain/subdomain1.website.com;
                index index.php;
                set $boost "";
                set $boost_query "_";
                if ( $request_method = GET ) { set $boost G; }
                if ($http_cookie !~ "DRUPAL_UID") { set $boost "${boost}D"; }
                if ($query_string = "") { set $boost "${boost}Q"; }
                if ( -f $document_root/cache/normal/$host$request_uri$boost_query.html ) { set $boost "${boost}F"; }
                if ($boost = GDQF) { rewrite ^.*$ /cache/normal/$host/$request_uri$boost_query.html break; }
                if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/subdomain1.website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I have been adding each subdomain's rules manually, and nginx.conf has become too big. So I need one nginx rule that does this:

        subdomain1.website.com -> /var/www/html/subdomain/subdomain1.website.com
        subdomain2.website.com -> /var/www/html/subdomain/subdomain2.website.com
        subdomain3.website.com -> /var/www/html/subdomain/subdomain3.website.com
        ...and so on

    (so that I never again need to add rules for new *.website.com subdomains).
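
    nginx can do this with a regex server_name that captures the subdomain into a variable, which then parameterises root and SCRIPT_FILENAME. A sketch of the idea (the Boost if-chain above carries over unchanged; named captures need a reasonably recent nginx, roughly 0.8.25+):

        server {
            server_name ~^(?<sub>[^.]+)\.website\.com$;
            root /var/www/html/subdomain/$sub.website.com;
            location / {
                index index.php;
                # ... existing Boost rules, unchanged ...
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    Setting root at server level and using $document_root in SCRIPT_FILENAME avoids repeating the path twice per subdomain.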

    Read the article

  • How often is CRL refreshed, and how to force it to be?

    - by lockstock
    I have a web service running under IIS 7 that requires an X.509 client certificate. I know that the server it runs on needs access to DigiCert.com in order to fetch the CRL (Certificate Revocation List). We need to change our proxy, so I am attempting to investigate the impact of doing so. I have removed the global proxy settings using the command netsh winhttp proxy refresh, and also deleted the CRL cache using the command certutil -urlcache crl delete. However, after doing this, all calls to the web service still succeed, which suggests I am missing something here. So: if the CRL cache is cleared and the server has no way of refreshing the CRL, why do web service requests not return HTTP 403? I have been unable to find adequate information from Googling or from my colleagues. The reason I want it to fail is that I won't be confident the new proxy settings work until I can see it broken first, if that makes sense. I would also like to be able to force the CRL to be refreshed, in order to verify that the new proxy settings work.
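
    One likely explanation is that Windows keeps a second, in-memory CRL cache inside the CryptoAPI chain engine, and a fetched CRL also remains valid until its NextUpdate time, so deleting the disk cache alone won't force a revocation failure. Some hedged commands to experiment with (the service typically needs a restart to see the flush):

        rem flush the on-disk URL cache
        certutil -urlcache crl delete
        rem invalidate the chain engine's cached CRLs so the next check refetches
        certutil -setreg chain\ChainCacheResyncFiletime @now
        rem verify end-to-end fetching of a certificate's CRL/AIA URLs
        certutil -verify -urlfetch client.cer

    Also note that IIS client-certificate checking can be configured to tolerate an unreachable CRL, in which case requests will keep succeeding even with an empty cache.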

    Read the article

  • Using Exchange to host email for a domain name that wasn't our primary domain name

    - by drpcken
    Exchange 2007 on a Server 2003 Active Directory. My primary domain's (MyMainDomain.com) controller also hosts DNS and DHCP. I have a secondary domain name (MySecondDomain.net) that my Exchange server accepts email for. It wasn't a physical domain, just one accepted by Exchange and set as the Active Directory users' main SMTP and outgoing address; its MX records point to MyMainDomain.com's public Exchange address. I've now moved the MySecondDomain.net mailboxes to a hosted Exchange 2010 environment. The MX records point to this new Exchange system, and when I send an email from OUTSIDE the MyMainDomain.com environment (say, Gmail), it works and is delivered to the hosted Exchange setup for MySecondDomain.net. However, when I send an email from a user on MyMainDomain.com, it goes to the old Exchange 2007 server I host internally. I have removed MySecondDomain.net from the accepted domains, removed the DNS zone for MySecondDomain.net, and cleared the DNS cache. I was convinced it was my internal DNS server, but I've cleared the DNS cache. Is there something I'm missing somewhere in Exchange 2007? Or is it my domain controller/DNS? Sorry if this is confusing. Thank you!
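
    When Exchange still treats a domain as authoritative, or a user still carries an @MySecondDomain.net proxy address, it resolves the recipient internally and never consults MX records at all. Some hedged Exchange Management Shell checks on the 2007 box:

        # is the domain still listed anywhere as accepted/authoritative?
        Get-AcceptedDomain
        # do any mailboxes still hold an @MySecondDomain.net address?
        Get-Mailbox -ResultSize Unlimited | Where-Object { $_.EmailAddresses -match "MySecondDomain.net" }
        # is there a send connector with an address space for the domain?
        Get-SendConnector | Format-List Name,AddressSpaces

    If the domain must stay accepted for some reason, switching it from Authoritative to InternalRelay (Set-AcceptedDomain -DomainType InternalRelay) lets unresolved recipients route outward to the hosted system.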

    Read the article

  • Why does cpuinfo report that my frequency is slower?

    - by Avery Chan
    My machine is running on an AMD Sempron(tm) X2 190 processor. According to the marketing copy, it should be running at around 2.5 GHz. Why is the CPU speed being reported as something lower? (There is a spec description link, but it's in Chinese.)

        $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : AuthenticAMD
        cpu family      : 16
        model           : 6
        model name      : AMD Sempron(tm) X2 190 Processor
        stepping        : 3
        microcode       : 0x10000c8
        cpu MHz         : 800.000
        cache size      : 512 KB
        physical id     : 0
        siblings        : 2
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 5
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save
        bogomips        : 5022.89
        TLB size        : 1024 4K pages
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 48 bits physical, 48 bits virtual
        power management: ts ttp tm stc 100mhzsteps hwpstate

        processor       : 1
        (identical to processor 0 except: core id : 1, apicid : 1, initial apicid : 1, bogomips : 5022.82)
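
    The 800 MHz reading is almost certainly cpufreq power management at work (note the 100mhzsteps and hwpstate flags): at idle, a governor like ondemand parks the cores at their lowest P-state, and /proc/cpuinfo reports that momentary frequency, not the chip's maximum. A sketch for checking (sysfs paths are standard; tool package names vary by distro):

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # e.g. "ondemand"
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq   # should read ~2500000
        # watch the frequency climb under load:
        for i in 1 2; do ( yes > /dev/null & ); done
        grep MHz /proc/cpuinfo
        killall yes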

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster connection via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimizes the size of the indexes and tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will update a row in these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference is between cache and shared memory in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array), etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
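
    On the first question: shared_buffers is memory PostgreSQL actually allocates as its own shared page cache, while effective_cache_size allocates nothing at all — it is only a planner hint describing how much of the OS file cache plus shared_buffers the database can expect to find warm. A hedged postgresql.conf sketch for a large-RAM machine (values illustrative, not tuned for this workload):

        shared_buffers = 8GB            # real shared memory; commonly ~25% of RAM
        effective_cache_size = 48GB     # planner hint only; roughly shared_buffers + OS cache
        work_mem = 256MB                # per-sort/hash memory; helps unnest-heavy queries

    If CPU still sits at 20-40%, the threads are likely stalling on I/O or lock contention rather than computing; watching iostat and pg_stat_activity during a run would separate the two.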

    Read the article

  • Pentium 4 Willamette vs. Faster Celeron Northwood [closed]

    - by Synetech inc.
    Which of the following two processors is preferable?

        Intel® Pentium® 4 Processor, 1.70 GHz, 256K cache, 400 MHz FSB (Willamette)
        Intel® Celeron® Processor, 2.40 GHz, 128K cache, 400 MHz FSB (Northwood)

    Details: A few months ago my motherboard died, so I bought a used computer that had a 2.4 GHz Celeron. My old system had a 1.7 GHz Pentium 4, so now I'm trying to decide which CPU to use. Obviously a P4 is preferable to a Celeron, but this Celeron is (significantly?) faster-clocked than the P4. I'm wondering if the faster Celeron might be better for certain tasks (i.e., stronger but dumber is better at some things than smarter but weaker). I tried Googling for reviews and comparisons with graphs to get a clear picture of which is better overall, but found nothing that helped. (I did manage to find one page that indicates — apparently by poll, not benchmark — that the Celeron is better.) So which CPU should I use? Does anyone know of some graphs I can use to compare the two? The system is a general-purpose machine for word processing, Internet, and casual games (not Crysis, but not Solitaire either). It will be running Windows XP. The board is a socket 478 with a 400 MHz FSB.

    Read the article

  • Different nginx rules based on referrer

    - by juana
    I'm using WordPress with WP Super Cache. I want visitors who come from Google (including all country-specific referrers like google.co.in, google.co.uk, etc.) to see uncached content. These are my nginx rules, which are not working the way I want:

        server {
            server_name website.com;
            location / {
                root /var/www/html/website.com;
                index index.php;
                if ($http_referer ~* (www.google.com|www.google.co) ) {
                    rewrite . /index.php break;
                }
                if (-f $request_filename) { break; }
                set $supercache_file '';
                set $supercache_uri $request_uri;
                if ($request_method = POST) { set $supercache_uri ''; }
                if ($query_string) { set $supercache_uri ''; }
                if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) { set $supercache_uri ''; }
                if ($supercache_uri ~ ^(.+)$) { set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html; }
                if (-f $document_root$supercache_file) { rewrite ^(.*)$ $supercache_file break; }
                if (!-e $request_filename) { rewrite . /index.php last; }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    What should I do to achieve my goal?
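
    One problem with the rewrite ... break approach is that break keeps the rewritten URI inside location /, so /index.php is served as a static file and never reaches the PHP handler. A gentler pattern is to let Google-referred requests run the normal chain but blank $supercache_uri, so the cached file is simply never chosen. A sketch within the existing rule set (regex kept deliberately loose, matching any google.* referrer):

        # place this after "set $supercache_uri $request_uri;"
        if ($http_referer ~* "google\.") {
            set $supercache_uri '';
        }

    With $supercache_uri empty, the later -f test fails and the request falls through to index.php, i.e., fresh uncached WordPress output.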

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2 GB of "cached" memory. free:

                     total       used       free     shared    buffers     cached
        Mem:          3945       3893         51          0         28       1216
        -/+ buffers/cache:       2648       1296
        Swap:         3895        857       3038

    I have learned that cached memory is essentially borrowed free memory: the kernel uses it while nothing else needs it. Is the cached value an indicator of the need for more RAM? cat /proc/meminfo, one day after flushing the cache:

        MemTotal:        4040048 kB
        MemFree:           32844 kB
        Buffers:           18956 kB
        Cached:          1249092 kB
        SwapCached:       161576 kB
        Active:          3611328 kB
        Inactive:         189104 kB
        SwapTotal:       3989496 kB
        SwapFree:        2894200 kB
        Dirty:             20520 kB
        Writeback:             0 kB
        AnonPages:       2523496 kB
        Mapped:           217744 kB
        Slab:              70940 kB
        SReclaimable:      36756 kB
        SUnreclaim:        34184 kB
        PageTables:        99648 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        CommitLimit:     6009520 kB
        Committed_AS:    6401716 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:       18852 kB
        VmallocChunk: 34359719439 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB

    top:

        top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
        Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 4040048k total, 3953108k used, 86940k free, 16348k buffers
        Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
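
    Here the cached value isn't the worry — the warning signs are elsewhere: ~857 MB-1 GB of swap in use and 161 MB of SwapCached, meaning anonymous memory (the ~2.5 GB of AnonPages) has been pushed out at some point. A hedged way to tell whether that is historical or ongoing:

        vmstat 5 10     # watch the si/so columns; sustained non-zero swap-in/out
                        # under normal load means the working set exceeds RAM

    Occasional old pages parked in swap are harmless; steady si/so traffic is the actual "buy more RAM" signal.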

    Read the article

  • I get a Segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands, and Apache, on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
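
    Segfaults from many unrelated programs (the maintainer scripts here run under /bin/sh) usually point at corrupted package files, a corrupted filesystem, or failing RAM rather than at util-linux itself. A hedged elimination order:

        sudo apt-get clean                          # discard possibly-corrupt .debs and re-download
        sudo apt-get install --reinstall util-linux
        sudo dpkg --configure -a                    # finish the 20 half-installed packages
        # if shells/scripts still segfault, test the hardware:
        #   boot memtest86+ from the GRUB menu, and fsck the root filesystem from a live CD

    If only specific binaries crash, debsums (from the package of the same name) can identify installed files whose checksums no longer match their packages.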

    Read the article

  • Apache 403 after configuring Varnish

    - by w0rldart
    I just don't know where else to look or what else to do. I keep getting a 403 error on all my vhosts after setting up Varnish 3.0. Apache log:

        [error] [client 127.0.0.1] client denied by server configuration: /etc/apache2/htdocs

    Headers:

        http://domain.com/
        GET / HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Cookie: __utma=106762181.277908140.1348005089.1354040972.1354058508.6; __utmz=106762181.1348005089.1.1.utmcsr=OTHERDOMAIN.com|utmccn=(referral)|utmcmd=referral|utmcct=/galerias/cocinas
        Cache-Control: max-age=0

        HTTP/1.1 403 Forbidden
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Type: text/html; charset=iso-8859-1
        X-Cacheable: YES
        Content-Length: 223
        Accept-Ranges: bytes
        Date: Sat, 01 Dec 2012 20:35:14 GMT
        X-Varnish: 1030961813 1030961811
        Age: 26
        Via: 1.1 varnish
        Connection: keep-alive
        X-Cache: HIT

    /etc/default/varnish:

        DAEMON_OPTS="-a ip.ip.ip.ip:80 \
                     -T localhost:6082 \
                     -f /etc/varnish/main.domain.vcl \
                     -S /etc/varnish/secret \
                     -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
                     #-s malloc,256m"

    My VCL file: http://pastebin.com/axJ57kD8

    So, any ideas what I could be missing? Update — just so you know, the ports:

        NameVirtualHost *:8000
        Listen 8000

    and

        <VirtualHost 205.13.12.12:8000>
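
    The error path /etc/apache2/htdocs is Apache's compiled-in default docroot, which means the request never matched the vhost at all. A plausible culprit, given the Update: NameVirtualHost *:8000 only governs vhosts declared with the identical address form, so <VirtualHost 205.13.12.12:8000> sits outside name-based matching — and Varnish connects to the backend via 127.0.0.1, which an IP-bound vhost will never match. A sketch of the consistent form (paths illustrative):

        NameVirtualHost *:8000
        Listen 8000

        <VirtualHost *:8000>
            ServerName domain.com
            DocumentRoot /var/www/domain.com
            ...
        </VirtualHost>

    The backend declaration in the VCL should then point at 127.0.0.1 port 8000 with the Host header preserved, which the captured headers above show is already happening.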

    Read the article

  • Poor Write Performance in VM inside Proxmox PVE 2.0

    - by sorsenne
    I am running PVE 2.0 on decent hardware (2 SATA HDDs in RAID 1, 12 GB RAM, i7 CPU), but the I/O performance is very poor inside the VM (Ubuntu 11.10 Server). The very same VM was copied to another server running plain Ubuntu Server with KVM and had better I/O performance. This is how the HDD appears in the guest:

        ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        ata1.00: ATA-8: ST3000DM001-9YN166, CC49, max UDMA/133
        ata1.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        ata1.00: configured for UDMA/133
        scsi 0:0:0:0: Direct-Access ATA ST3000DM001-9YN1 CC49 PQ: 0 ANSI: 5
        sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 0:0:0:0: [sda] 4096-byte physical blocks
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

    I tested with dd:

        $ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
        128+0 records in
        128+0 records out
        134217728 bytes (134 MB) copied, 19.2222 s, 7.0 MB/s

    On the host, the same test averages 156 MB/s. PS: I am using VirtIO and see no errors in dmesg.
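
    With conv=fdatasync every write must be acknowledged as durable, so the VM's disk cache mode dominates the result; Proxmox's conservative default caching forces each flush through the RAID 1 spindles. As an experiment only — writeback trades safety on power loss for speed — here is a hedged sketch (the VM ID and disk name are placeholders):

        # on the PVE host, switch the existing virtio disk to writeback caching
        qm set 100 --virtio0 local:vm-100-disk-1,cache=writeback
        # then stop/start the VM and repeat the dd test inside the guest

    If writeback closes most of the gap, the durable fixes are a RAID controller with a battery-backed write cache, or accepting writethrough-level performance.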

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy, and apt asked me to run apt-get -f install to fix it, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is
        disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK
        WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the
        /etc/udev/kernel-upgrade file. There is always a safer way to upgrade, do not try this unless
        you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I basically just ran the VBoxLinuxAdditions-amd64.run script), so I don't know how to disable them. Any ideas? Thanks!
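
    The insserv lines about vboxadd-x11 are only warnings; the actual blocker is the one udev states itself: the new udev refuses to install alongside the old lenny kernel. The documented way out is to install the sid kernel first, then finish the upgrade (the metapackage name below is an era-appropriate assumption; check what apt offers):

        sudo apt-get install linux-image-2.6-amd64   # pull in the newer kernel
        sudo reboot                                  # boot into it
        sudo apt-get -f install                      # udev's preinst check now passes

    The Guest Additions kernel modules will then need rebuilding against the new kernel (re-run VBoxLinuxAdditions-amd64.run).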

    Read the article

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured nginx with uWSGI and Django on CentOS 6 x64 (3.06 GHz i3 540, 4 GB), which I expected to handle 2500 rq/s easily, but when I run an ab test (ab -n 1000 -c 100), performance stops at 92-100 rq/s. nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (run via the Emperor):

        /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes (applied with sysctl -p):

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Increase system file descriptor limit
        fs.file-max = 65535

    Idle server info:

        top - 13:34:58 up 102 days, 18:35, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3983068k total, 2125088k used, 1857980k free, 262528k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 606996k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2075       1814          0        256        592
        -/+ buffers/cache:       1226       2663
        Swap:         2055          0       2055

    During the test:

        top - 13:45:21 up 102 days, 18:46, 1 user, load average: 3.73, 1.51, 0.58
        Tasks: 122 total, 8 running, 114 sleeping, 0 stopped, 0 zombie
        Cpu(s): 93.5%us, 5.2%sy, 0.0%ni, 0.2%id, 0.0%wa, 0.1%hi, 1.1%si, 0.0%st
        Mem: 3983068k total, 2127564k used, 1855504k free, 262580k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 608760k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2125       1763          0        256        595
        -/+ buffers/cache:       1274       2615
        Swap:         2055          0       2055

        iotop
        30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
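
    The during-test numbers (93.5% user CPU, ~0% idle and iowait) say the box is CPU-bound in Python, so the per-request application work is the first suspect, not nginx, uWSGI, or the kernel settings. A quick way to confirm, as a sketch (names hypothetical): benchmark a do-nothing view and compare.

        # views.py
        from django.http import HttpResponse

        def ping(request):
            # no templates, no ORM queries: measures the stack, not the app
            return HttpResponse("ok")

        # urls.py: add a route for the view, then run:
        #   ab -n 1000 -c 100 http://host/ping

    If /ping reaches thousands of rq/s, the 100 rq/s ceiling is application work (templates, ORM queries, missing caching); if it doesn't, profile the layers below Django.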

    Read the article

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who hosts several WordPress 2.9.2 blogs. They are getting hit with a defacement-style hack using the Elemento_pcx exploit somehow. It drops these files in the root folder of each blog:

        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x 1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx"; the page shows a white fist on a black background with the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy. I'll change it a little to show you roughly what the password looks like: wviking10. Do you think the attacker is using an engine to crack the password? If so, why aren't our server logs flooded with wp-admin requests as it runs through a password list? The wp-content folder has no changes inside it but is chmod 777 because wp-cache required it. The wp-content/cache folder is chmod 777 too.
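
    WordPress 2.9.2 has known remote vulnerabilities, and a world-writable wp-content means any compromised script on the box can write there, so brute-forcing wp-admin isn't even necessary. Beyond upgrading WordPress itself, a hedged permissions cleanup (the www-data group name is an assumption; use whatever your web server runs as):

        # baseline: directories 755, files 644
        find wp-content -type d -exec chmod 755 {} \;
        find wp-content -type f -exec chmod 644 {} \;
        # wp-cache only needs to write to its cache directory, not all of wp-content
        chown -R userx:www-data wp-content/cache
        chmod -R 775 wp-content/cache

    Checking the Apache access logs for POST requests around Apr 16 04:25 (the dropped files' mtime) should also reveal which script the attacker actually used as the entry point.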

    Read the article

  • How to eliminate the downtime when a dynamic IP address changes?

    - by xenon
    We currently have a number of client computers linked to a database server (MS SQL 2008) for replication. The database server recognises the computers by their Windows hostnames. We are using dynamic IP addresses at this time because we change the computers' hardware quite frequently, so the MAC addresses may change; unless static IP offers a good way for us to manage frequently changing MAC addresses, we are sticking with dynamic IP. The problem with dynamic IP addresses, however, is that when a client fetches a new IP from DHCP — i.e., when its IP address changes — there is downtime while the hostname record is updated to the new IP address, the client's DNS cache of the hostname reloads, and the server's DNS cache reloads to see the new IP for the hostname. All of these have different timings, and the delay can be really bad at times. Restarting the computer doesn't always work either. The clients are on Windows 7. How can I eliminate the downtime required when the IP changes, in the case of dynamic IP addresses?
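
    Two levers shorten each window: have the client re-register its DNS record immediately after a change, and lower the record's TTL so caches expire quickly. A hedged sketch for the client side:

        ipconfig /registerdns    # on the client: push the new A record to the DNS server right away
        ipconfig /flushdns       # on the SQL server (or any peer) holding the stale entry

    On the DNS server, reducing the TTL on dynamically registered records (and enabling scavenging of stale ones) keeps both caches from serving the old address for long. Alternatively, DHCP reservations keyed to the current NIC sidestep the problem entirely, at the cost of one reservation edit per hardware swap.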

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD drive behind a Smart Array P410 controller

    - by Lairsdragon
    Recently I deployed some HP servers with SSDs behind a Smart Array P410 controller. While not officially supported by HP, the servers have worked well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the P410 supports passing SMART commands through to a single drive in the array, I was not able to get the interesting values from the drive. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present when the drive is directly attached to a SATA controller. smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000   Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000   Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000   Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000   Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000   Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000   Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010   Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000   Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000   Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000   Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000   Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000   Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD (the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature: 27 C
        Drive Trip Temperature: 68 C

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?

    Read the article

  • Why is this APC installation failing so badly?

    - by Matt
    I have multiple instances of APC running on my server with similar configurations (albeit with different cache sizes). However, one of the instances is performing extremely poorly (100% cache fragmentation, high miss rate), and I have no idea why. The runtime settings I'm using are as follows (pretty much out of the box):

        apc.cache_by_default       1
        apc.canonicalize           1
        apc.coredump_unmap         0
        apc.enable_cli             0
        apc.enabled                1
        apc.file_md5               0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl                 3600
        apc.include_once_override  0
        apc.lazy_classes           0
        apc.lazy_functions         0
        apc.max_file_size          1M
        apc.mmap_file_mask
        apc.num_files_hint         1000
        apc.preload_path
        apc.report_autofilter      0
        apc.rfc1867                0
        apc.rfc1867_freq           0
        apc.rfc1867_name           APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix         upload_
        apc.rfc1867_ttl            3600
        apc.shm_segments           1
        apc.shm_size               10M
        apc.slam_defense           1
        apc.stat                   1
        apc.stat_ctime             0
        apc.ttl                    0
        apc.use_request_time       1
        apc.user_entries_hint      4096
        apc.user_ttl               0
        apc.write_lock             1

    APC is version 3.1.6; PHP is 5.3.3-1ubuntu9.5. I've restarted Apache multiple times, so this isn't a freak instance. The instance with problems is simply running WordPress with a few plugins installed. All the other instances (~4) on the server run perfectly fine, with almost 100% hit rates and 0% fragmentation; for example, one of those instances holds a website built on the Symfony framework. Any help would be much appreciated; I haven't had much experience with APC and was hoping for an out-of-the-box speed boost ;).
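
    The suspicious pair is apc.shm_size 10M together with apc.ttl 0: WordPress plus plugins easily exceeds 10 MB of compiled opcodes, and with a TTL of 0 APC won't expire old entries, so the full segment churns and fragments. A hedged ini sketch (values illustrative):

        ; /etc/php.d/apc.ini
        apc.shm_size = 64M    ; room for WordPress + plugins in one segment
        apc.ttl      = 7200   ; let stale entries be evicted instead of fragmenting

    After restarting Apache, the fragmentation and hit-rate graphs on the bundled apc.php status page should show whether the resize was enough.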

    Read the article

  • List symlinks in specific relative directories

    - by Clinton Blackmore
    I have a server that shares out user home folders over the network. Each user has a Cache folder. Sometimes a symlink is used to redirect this folder to the hard drive of whichever machine they are using (and sometimes that doesn't work, leaving a broken symlink, but that's a matter for another day). I'm trying to find out which users have symlinks and which don't. Within the shared folder, you reach a given Cache folder by substituting values like so: $GRADE/$USERNAME/Library/Caches. Right now, to see which users have symlinks and which do not, I've come up with:

        cd /path/to/shared/home/folders
        sudo find . -name "Caches" -exec ls -ld {} \;

    and get results like this:

        lrwxr-xr-x@  1 name0 ES_Students  27 Jan 18 11:05 ./CES_Grade_03/name0/Library/Caches -> /tmp/name0/Library/Caches
        drwx------  11 name1 ES_Students 374 Dec  8 15:44 ./CES_Grade_03/name1/Library/Caches
        lrwxr-xr-x@  1 name2 ES_Students  27 Feb 23 14:27 ./CES_Grade_03/name2/Library/Caches -> /tmp/name2/Library/Caches
        drwx------  17 name3 ES_Students 578 Jan 25 11:13 ./CES_Grade_03/name3/Library/Caches
        drwx------  12 name4 ES_Students 408 Mar 22 13:09 ./CES_Grade_03/name4/Library/Caches

    But it nags at me that there must be a better way. Yes, it is good enough, and a one-off task, but I want to know how to do it right! Surely I should be able to do something like:

        cd /path/to/shared/home/folders
        sudo ls -ld **/**/Library/Caches

    I'm afraid I don't know the proper syntax, or whether bash has a recursive folder-matching wildcard at all, and my google-fu failed me. So how do I properly formulate the search?
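
    Both instincts work. The glob version needs bash 4's globstar option (on a Mac-style server, the stock bash 3.2 predates it, so a newer bash may be required); the find version just needs -path, plus -type to split links from real directories:

        # glob approach (bash >= 4): ** matches any depth
        shopt -s globstar
        sudo ls -ld **/Library/Caches

        # find approach: users whose Caches is a symlink...
        sudo find . -maxdepth 4 -path '*/Library/Caches' -type l -exec ls -ld {} \;
        # ...and users with a plain directory instead
        sudo find . -maxdepth 4 -path '*/Library/Caches' -type d -exec ls -ld {} \;

    -maxdepth 4 (an assumption based on the $GRADE/$USERNAME/Library/Caches layout) also stops find from descending into the Cache folders themselves.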

    Read the article

  • Squid proxy in CentOS often disconnected with error: tunnelConnectTimeout(): tunnelState->servers is NULL

    - by Ela
    I am having very frequent internet disconnection problems with the Squid proxy service. My server config:

        OS: CentOS release 6.3 (Final)
        model name : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        cpu MHz    : 1600.000
        Local systems' IP range: 192.168.2.x
        Server IP: 192.168.2.11

    This server is also configured with LAMP for development and the Samba SMB file service manager; no SVN currently. So the most likely culprit I see is the Squid proxy, since that is where connections stop, and I'm fairly sure because when I restart the server the net starts working again — so something is wrong with this Squid service alone. The server is connected to 14 local Windows machines and basically serves as a central development node. I can resolve the problem by fully restarting the server, or sometimes by restarting the Squid proxy, which is totally killing our development. I have attached my cache log file here for the error info. Sample error log:

        2013/07/01 13:25:38| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:50| clientProcessRequest: Invalid Request
        2013/07/01 13:26:05| tunnelConnectTimeout(): tunnelState->servers is NULL

    Some help would make our lives easier. Thanks in advance.

    Read the article

  • Connection reset by peer: mod_fcgid: error reading data from FastCGI server Issues

    - by user145857
    Help is greatly needed for our server. We are experiencing random "Connection reset by peer: mod_fcgid: error reading data from FastCGI server" errors, which cause a 500 Internal Server Error. If the page is then reloaded, it loads normally, as it should. We are running MPM Worker with mod_fcgid to handle PHP. We had the APC cache enabled but disabled it recently to see if that would fix the problem; the random mod_fcgid errors are still continuing, and no other opcode cache is active now. Our settings are below:

        <IfModule worker.c>
            MinSpareThreads      25
            MaxSpareThreads     150
            ThreadsPerChild      25
            ThreadLimit         100
            ServerLimit         700
            MaxClients          700
            MaxRequestsPerChild   0
        </IfModule>

        <IfModule mod_fcgid.c>
            FcgidMaxRequestLen         1073741824
            FcgidMaxRequestsPerProcess 2000
            FcgidMaxProcessesPerClass  100
            FcgidMinProcessesPerClass  0
            FcgidConnectTimeout        300
            FcgidIOTimeout             900
            FcgidFixPathinfo           1
            FcgidIdleTimeout           300
            FcgidIdleScanInterval      120
            FcgidBusyTimeout           300
            FcgidBusyScanInterval      120
            FcgidErrorScanInterval     12
            FcgidZombieScanInterval    12
            FcgidProcessLifeTime       3600
        </IfModule>

    The server is a 64-core, 2.1 GHz, 94 GB RAM machine, so it has some power. Some of the fcgid timeout settings are high because we run large reports that take up to 15 minutes. Any help is greatly appreciated! Just to clarify: the random fcgid errors occur when a user clicks a page on our site and the 500 error page loads instantly. This is random and occurs on less than 1% of requests, but it is still an issue.
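
    A classic cause of exactly this symptom is a mismatch between FcgidMaxRequestsPerProcess and PHP's own request limit: php-cgi exits after PHP_FCGI_MAX_REQUESTS requests (default 500), and if mod_fcgid still expects up to 2000 from that process, whatever request is in flight when PHP quits gets "connection reset by peer". A hedged fix in the FCGI wrapper script (the wrapper path is whatever your FcgidWrapper directive points at):

        #!/bin/sh
        # keep PHP alive at least as long as mod_fcgid expects
        export PHP_FCGI_MAX_REQUESTS=2000
        exec /usr/bin/php-cgi

    Since the errors appear instantly rather than after a long wait, the timeout directives are unlikely to be at fault; the instant-500-then-fine-on-reload pattern fits this process-exit race well.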

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >