Search Results

Search found 5374 results on 215 pages for 'dotnetnuke hosting'.

  • AMD 24 core server memory bandwidth

    - by ntherning
    I need some help to determine whether the memory bandwidth I'm seeing under Linux on my server is normal or not. Here's the server spec: HP ProLiant DL165 G7 2x AMD Opteron 6164 HE 12-Core 40 GB RAM (10 x 4GB DDR1333) Debian 6.0 Using mbw on this server I get the following numbers: foo1:~# mbw -n 3 1024 Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory. Using 262144 bytes as blocks for memcpy block copy test. Getting down to business... Doing 3 runs per test. 0 Method: MEMCPY Elapsed: 0.58047 MiB: 1024.00000 Copy: 1764.082 MiB/s 1 Method: MEMCPY Elapsed: 0.58012 MiB: 1024.00000 Copy: 1765.152 MiB/s 2 Method: MEMCPY Elapsed: 0.58010 MiB: 1024.00000 Copy: 1765.201 MiB/s AVG Method: MEMCPY Elapsed: 0.58023 MiB: 1024.00000 Copy: 1764.811 MiB/s 0 Method: DUMB Elapsed: 0.36174 MiB: 1024.00000 Copy: 2830.778 MiB/s 1 Method: DUMB Elapsed: 0.35869 MiB: 1024.00000 Copy: 2854.817 MiB/s 2 Method: DUMB Elapsed: 0.35848 MiB: 1024.00000 Copy: 2856.481 MiB/s AVG Method: DUMB Elapsed: 0.35964 MiB: 1024.00000 Copy: 2847.310 MiB/s 0 Method: MCBLOCK Elapsed: 0.23546 MiB: 1024.00000 Copy: 4348.860 MiB/s 1 Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.230 MiB/s 2 Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.359 MiB/s AVG Method: MCBLOCK Elapsed: 0.23545 MiB: 1024.00000 Copy: 4349.149 MiB/s On one of my other servers (based on Intel Xeon E3-1270): foo2:~# mbw -n 3 1024 Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory. Using 262144 bytes as blocks for memcpy block copy test. Getting down to business... Doing 3 runs per test. 0 Method: MEMCPY Elapsed: 0.18960 MiB: 1024.00000 Copy: 5400.901 MiB/s 1 Method: MEMCPY Elapsed: 0.18922 MiB: 1024.00000 Copy: 5411.690 MiB/s 2 Method: MEMCPY Elapsed: 0.18944 MiB: 1024.00000 Copy: 5405.491 MiB/s AVG Method: MEMCPY Elapsed: 0.18942 MiB: 1024.00000 Copy: 5406.024 MiB/s 0 Method: DUMB Elapsed: 0.14838 MiB: 1024.00000 Copy: 6901.200 MiB/s 1 Method: DUMB Elapsed: 0.14818 MiB: 1024.00000 Copy: 6910.561 MiB/s 2 Method: DUMB Elapsed: 0.14820 MiB: 1024.00000 Copy: 6909.628 MiB/s AVG Method: DUMB Elapsed: 0.14825 MiB: 1024.00000 Copy: 6907.127 MiB/s 0 Method: MCBLOCK Elapsed: 0.04362 MiB: 1024.00000 Copy: 23477.623 MiB/s 1 Method: MCBLOCK Elapsed: 0.04262 MiB: 1024.00000 Copy: 24025.151 MiB/s 2 Method: MCBLOCK Elapsed: 0.04258 MiB: 1024.00000 Copy: 24048.849 MiB/s AVG Method: MCBLOCK Elapsed: 0.04294 MiB: 1024.00000 Copy: 23847.599 MiB/s For reference here's what I get on my Intel based laptop: laptop:~$ mbw -n 3 1024 Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory. Using 262144 bytes as blocks for memcpy block copy test. Getting down to business... Doing 3 runs per test. 
0 Method: MEMCPY Elapsed: 0.40566 MiB: 1024.00000 Copy: 2524.269 MiB/s 1 Method: MEMCPY Elapsed: 0.38458 MiB: 1024.00000 Copy: 2662.638 MiB/s 2 Method: MEMCPY Elapsed: 0.38876 MiB: 1024.00000 Copy: 2634.043 MiB/s AVG Method: MEMCPY Elapsed: 0.39300 MiB: 1024.00000 Copy: 2605.600 MiB/s 0 Method: DUMB Elapsed: 0.30707 MiB: 1024.00000 Copy: 3334.745 MiB/s 1 Method: DUMB Elapsed: 0.30425 MiB: 1024.00000 Copy: 3365.653 MiB/s 2 Method: DUMB Elapsed: 0.30342 MiB: 1024.00000 Copy: 3374.849 MiB/s AVG Method: DUMB Elapsed: 0.30491 MiB: 1024.00000 Copy: 3358.328 MiB/s 0 Method: MCBLOCK Elapsed: 0.07875 MiB: 1024.00000 Copy: 13003.670 MiB/s 1 Method: MCBLOCK Elapsed: 0.08374 MiB: 1024.00000 Copy: 12228.034 MiB/s 2 Method: MCBLOCK Elapsed: 0.07635 MiB: 1024.00000 Copy: 13411.216 MiB/s AVG Method: MCBLOCK Elapsed: 0.07961 MiB: 1024.00000 Copy: 12862.006 MiB/s So according to mbw my laptop is 3 times faster than the server!!! Please help me explain this. I've also tried to mount a ram disk and use dd to benchmark it and I get similar differences so I don't think mbw is to blame. I've checked the BIOS settings and the memory seem to be running at full speed. According to the hosting company the modules are all OK. Could this have something to do with NUMA? It seems like Node Interleaving is disabled on this server. Will enabling it (thus turning off NUMA) make a difference? foo1:~# numactl --hardware available: 4 nodes (0-3) node 0 cpus: 0 1 2 3 4 5 node 0 size: 8190 MB node 0 free: 7898 MB node 1 cpus: 6 7 8 9 10 11 node 1 size: 12288 MB node 1 free: 12073 MB node 2 cpus: 18 19 20 21 22 23 node 2 size: 12288 MB node 2 free: 12034 MB node 3 cpus: 12 13 14 15 16 17 node 3 size: 8192 MB node 3 free: 8032 MB node distances: node 0 1 2 3 0: 10 20 20 20 1: 20 10 20 20 2: 20 20 10 20 3: 20 20 20 10
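
    A quick way to test whether NUMA placement explains the gap is to pin mbw to a single node and compare local against remote memory, and to confirm the DIMM clock the firmware actually negotiated. A minimal sketch, assuming numactl and dmidecode are installed (node numbers come from the numactl --hardware output above):

        # bandwidth with CPU and memory on the same node
        numactl --cpunodebind=0 --membind=0 mbw -n 3 1024
        # bandwidth with CPU on node 0 but memory forced onto a remote node
        numactl --cpunodebind=0 --membind=2 mbw -n 3 1024
        # DIMM speed as negotiated by the BIOS
        dmidecode -t memory | grep -i speed

    If the local-node run is dramatically faster than the remote-node run, memory placement (and the Node Interleaving setting) is the thing to experiment with; if both runs are equally slow, the BIOS memory configuration itself is the more likely culprit.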

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9 It's working well, however Mono occasionally segfaults and uses the entire CPU which causes the website to stop working and display 'Service Temporarily Unavailable' Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and I saw the following error messages from the kernel: Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4<6>mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4<6>mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4<6>mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4<6>mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4<6>mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4<6>mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4 Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4 At the end of Apache's error.log are the following errors: [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 System.ArgumentNullException: null key Parameter name: key at System.Collections.Hashtable.get_Item (System.Object key) [0x00000] at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000] at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000] at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000] at System.AppDomain.CreateDomain (System.String friendlyName, 
System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000] at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000] at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool) at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000] at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000] at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000] [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1 Along with the above errors, Apache's error.log is packed with hundreds (if not thousands) of the following error: Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request. At the moment, I'm thinking there might be something wrong with configuration here (it's basically running on out-of-the-box config)
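
    Not a fix for the underlying segfault, but until that is tracked down, a small watchdog can restart the wedged Mono backend automatically instead of waiting for someone to kill it by hand. A rough cron-able sketch (the CPU threshold is an arbitrary assumption; it simply does what the question already does manually, i.e. kill mono and let Apache/mod_mono respawn it):

        #!/bin/sh
        # crude watchdog: if any mono process is pegging a CPU, kill it and
        # let Apache/mod_mono respawn the backend
        THRESHOLD=90
        BUSY=$(ps -C mono -o pid=,%cpu= | awk -v t=$THRESHOLD '$2 > t {print $1}')
        if [ -n "$BUSY" ]; then
            logger "mono watchdog: killing runaway mono pid(s): $BUSY"
            kill $BUSY
            /etc/init.d/apache2 reload
        fi

    Run from cron every minute, this keeps the site limping along while the segfault itself is investigated; upgrading Mono and mod_mono well beyond 1.9 would be the first real thing to try.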

  • after enabling mod ssl apache stops listening on port 80

    - by zensys
    I have an ubuntu 12.04 server with zend server CE installed. I now wanted to enable https but after the first steps according to the documentation, 'a2enmod ssl' and 'apache service restart', apache does not listen on 443 but neither on 80, according to netstat -tap | grep http(s)! This is what I see in my error log, but I can't make much of it: [Fri May 25 19:52:39 2012] [notice] caught SIGTERM, shutting down [Fri May 25 19:52:41 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Fri May 25 19:52:41 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:52:41 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:52:41 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:52:41 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:11 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:53:11 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:53:11 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:53:11 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:12 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.8-ZS5.5.0 configured -- resuming normal operations and here is my httpd.conf: # Name based virtual hosting <virtualhost *:80> ServerName www-redirect KeepAlive Off RewriteEngine On RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$ RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] </virtualhost> Alias /shared/js "/home/web/library/js" Alias /shared/image "/home/web/library/image" <IfModule mod_expires.c> <FilesMatch "\.(jpe?g|png|gif|js|css|doc|rtf|xls|pdf)$"> ExpiresActive On ExpiresDefault "access plus 1 week" </FilesMatch> </IfModule> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> <Location /> RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] </Location> netstat -tap gives: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:mysql *:* LISTEN 765/mysqld tcp 0 0 *:pop3 *:* LISTEN 744/dovecot tcp 0 0 *:imap2 *:* LISTEN 744/dovecot tcp 0 0 *:http *:* LISTEN 19861/apache2 tcp 0 0 *:smtp *:* LISTEN 30365/master tcp 0 0 *:4444 *:* LISTEN 634/sshd tcp 0 0 *:kamanda *:* LISTEN 1167/lighttpd tcp 0 0 *:imaps *:* LISTEN 744/dovecot tcp 0 0 *:amandaidx *:* LISTEN 1167/lighttpd tcp 0 0 localhost.loc:amidxtape *:* LISTEN 19861/apache2 tcp 0 0 *:pop3s *:* LISTEN 744/dovecot tcp 0 384 mail.mysite.:4444 231.214.14.37.dyn:41909 ESTABLISHED 19039/sshd: web [pr tcp 0 0 localhost.localdo:mysql localhost.localdo:48252 ESTABLISHED 765/mysqld tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54686 TIME_WAIT - tcp 0 0 mail.mysite.:4444 231.214.14.37.dyn:42419 ESTABLISHED 19372/sshd: web [pr tcp 0 0 localhost.localdo:48252 localhost.localdo:mysql ESTABLISHED 
19884/auth tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54685 TIME_WAIT - tcp6 0 0 [::]:pop3 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:imap2 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:smtp [::]:* LISTEN 30365/master tcp6 0 0 [::]:4444 [::]:* LISTEN 634/sshd tcp6 0 0 [::]:imaps [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:pop3s [::]:* LISTEN 744/dovecot Does anyone know what I am doing wrong? Perhaps I need to take some additional steps to make Apache listen on 443, but why it would stop listening on 80 altogether I can't understand.
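
    On Ubuntu, a2enmod ssl only loads the module; the 443 listener comes from ports.conf and a *:443 virtual host (the stock one is the default-ssl site), and a broken config can keep Apache from binding anything at all after the restart. A minimal checklist, assuming the standard Debian/Ubuntu layout:

        sudo a2enmod ssl
        sudo a2ensite default-ssl                 # or your own *:443 vhost
        grep -n Listen /etc/apache2/ports.conf    # expect Listen 80, plus Listen 443 inside <IfModule mod_ssl.c>
        sudo apache2ctl configtest                # catch config errors before the old listener is torn down
        sudo service apache2 restart
        sudo netstat -tlnp | grep apache2         # should now show both :80 and :443

    If configtest complains, fix that first; a restart that dies half-way through is the usual reason the old port 80 listener disappears without 443 ever appearing. Note that the netstat output quoted above does still show apache2 on *:http, so re-running netstat -tlnp without a grep filter is also worth doing.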

  • Troubleshooting High Load on Plesk LAMP Dedicated Server

    - by Callmeed
    I have 2 nearly identical dedicated servers with the same provider. They also run a nearly identical software stack: RedHat 5 64-bit, Plesk, PHP, Apache, & MySQL. We use them for hosting custom sites we build. The problem is, while our 1st server has a load average (in top) of around 0.3, the 2nd server consistently has a load average of around 4.0 or higher. Basic functions in Plesk are delayed and there is a bit of latency when executing shell commands. Anyone have ideas why it would be so high? And why it would differ from our other server so much? Here is my current top output (sorted by %MEM) ... Any help is much appreciated ... top - 21:48:04 up 100 days, 4:28, 1 user, load average: 3.74, 4.20, 4.23 Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie Cpu(s): 0.8%us, 0.4%sy, 0.0%ni, 91.3%id, 7.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 12290884k total, 11886452k used, 404432k free, 2920212k buffers Swap: 2096472k total, 244k used, 2096228k free, 6560692k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22536 apache 15 0 860m 547m 6484 S 0.0 4.6 0:10.96 httpd 26467 apache 15 0 859m 546m 6408 S 0.0 4.5 0:07.67 httpd 3620 apache 15 0 859m 545m 5552 S 0.0 4.5 0:06.15 httpd 1895 apache 15 0 858m 544m 6356 S 0.0 4.5 0:08.25 httpd 16933 apache 15 0 858m 544m 5488 S 0.0 4.5 0:01.57 httpd 6431 apache 15 0 856m 542m 6076 S 10.6 4.5 0:05.32 httpd 14417 apache 15 0 856m 542m 5568 S 0.0 4.5 0:03.88 httpd 15403 apache 15 0 855m 541m 5616 S 0.0 4.5 0:03.73 httpd 19165 apache 15 0 853m 539m 6252 S 0.0 4.5 0:12.40 httpd 15898 apache 15 0 852m 539m 5376 S 0.0 4.5 0:02.68 httpd 14401 apache 15 0 851m 538m 5460 S 0.0 4.5 0:02.97 httpd 15393 apache 15 0 851m 538m 5404 S 0.0 4.5 0:03.12 httpd 15427 apache 15 0 851m 538m 5496 S 0.0 4.5 0:02.44 httpd 14412 apache 15 0 851m 538m 5324 S 0.0 4.5 0:02.15 httpd 18330 apache 15 0 851m 537m 5136 S 0.0 4.5 0:01.30 httpd 18303 apache 15 0 848m 535m 5140 S 0.0 4.5 0:00.47 httpd 21190 apache 15 0 845m 533m 3988 S 0.0 4.4 0:00.33 httpd 15923 root 18 0 822m 521m 9928 S 0.0 4.3 10:04.81 httpd 22021 apache 15 0 828m 520m 4964 S 0.0 4.3 0:00.16 httpd 22146 apache 15 0 823m 515m 3016 S 0.0 4.3 0:00.02 httpd 22345 apache 15 0 822m 514m 2408 S 0.0 4.3 0:00.00 httpd 14721 apache 15 0 733m 510m 488 S 0.0 4.3 0:00.00 httpd 5094 root 15 0 1452m 122m 15m S 1.0 1.0 852:24.24 java 4636 mysql 15 0 532m 57m 6440 S 1.0 0.5 488:05.84 mysqld 4799 popuser 15 0 166m 53m 2368 S 0.0 0.4 0:36.64 spamd 16761 popuser 15 0 159m 46m 2312 S 0.0 0.4 0:00.38 spamd 4797 root 15 0 158m 45m 2448 S 0.0 0.4 0:01.27 spamd 5074 root 34 19 255m 20m 2144 S 0.0 0.2 1:37.53 yum-updatesd 9917 named 15 0 366m 9804 1980 S 0.0 0.1 0:10.26 named 4332 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.06 sw-engine-cgi 4341 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.07 sw-engine-cgi 4350 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.09 sw-engine-cgi 4352 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.11 sw-engine-cgi 4376 ntp 15 0 23388 5020 3896 S 0.0 0.0 0:00.58 ntpd 4331 sw-cp-se 15 0 61336 4572 1480 S 0.0 0.0 5:53.22 sw-cp-serverd 4213 haldaemo 15 0 31252 4460 1684 S 0.0 0.0 0:01.52 hald 4778 postgres 18 0 117m 4164 3484 S 0.0 0.0 0:00.11 postmaster 18555 root 16 0 98.3m 3716 2852 S 0.0 0.0 0:00.01 sshd 4488 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4489 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4492 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4493 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4490 sso 18 0 119m 3040 220 S 0.0 0.0 0:00.00 sw-engine-cgi
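
    With 7.5% iowait and httpd children holding 500 MB+ resident in that top output, the two things worth measuring first are disk wait and per-process Apache memory, since either one can produce a load average of 4 on an otherwise idle-looking box. A quick sketch (iostat comes from the sysstat package, which may need installing):

        # is the load I/O-bound? watch the wa and b (blocked) columns
        vmstat 5 5
        # per-device utilisation and await times
        iostat -x 5 3
        # average resident size of the httpd children, to sanity-check MaxClients
        ps -C httpd -o rss= | awk '{s+=$1; n++} END {printf "%d procs, avg %.0f MB each\n", n, s/n/1024}'

    If iowait dominates, compare the disk layout and MySQL activity against the healthy server; if the httpd children are simply much fatter here, a missing PHP opcode cache or a heavier site on this box is the likelier difference.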

  • How to restore Linode to Vagrant VM?

    - by Iain Elder
    I'm trying to set up a Linux development environment so I can safely make changes to my website without breaking the live site. Linode hosts my live site. A simple solution would be to host my development server on Linode as well, but I want to avoid doubling my hosting costs. The cheapest way I see is to use Vagrant on my Windows workstation to host my development environment. After I attempt to restore the backup to Vagrant and reboot the VM, I can no longer ssh into the Vagrant host. It's probably because by restoring the backup I overwrite some special Vagrant configuration, but I'm not sure how to avoid that. How do I make this approach work? If my approach is fundamentally wrong, can you suggest an alternative? Creating the backup On the Linode I used these commands to create a compressed copy of the entire filesystem, while ignoring things that shouldn't be included in the backup: $ sudo rsync -ahvz --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/backup/*} /* /backup/2 $ sudo tar -czf /backup/2.gz /backup/2 The backup file is called 2.gz because this is thesecond backup. The first backup is called 1.gz. I use WinSCP to copy the backup file to my Windows workstation. Setting up the Vagrant host I need a Vagrant box that matches my Linode operating system (Ubuntu 12.04.3 LTS, kernel 3.9.3). I selected the closet match from vagrantbox.es: Ubuntu Server Precise 12.04.3 amd64 Kernel is ready for Docker (Docker not included) On my workstation I ran these commands to add the box and initialize and boot an instance: $ vagrant box add ubuntu-precise http://nitron-vagrant.s3-website-us-east-1.amazonaws.com/vagrant_ubuntu_12.04.3_amd64_virtualbox.box $ mkdir linode-test $ cd linode-test $ vagrant init ubuntu-precise $ vagrant up Now Vagrant is running a machine with SSH on port 2222. The operating system version is the same. The kernel version is 3.8.0. Sounds close enough. Restoring the backup With WinSCP I copied the backup file 2.gz to /home/vagrant/2.gz on the Vagrant box. With PuTTY I connected via ssh to my new Vagrant box: On the box move the backup to the filesystem root. $ sudo mv 2.gz / Extract the archive to the filesystem root: $ sudo tar -xvpz -f 2.gz -C / --strip-components=2 (I discovered I need to use strip components because all files in the archive have the prefix backup/2/. I'll fix this for the next backup.) After the tar command completes, I log out of the box. Testing the backup When I try to log in again, it doesn't let me log in as vagrant with a password any more. It does let me log in as iain, my user on the live Linode, with a password. That surprised me because I disabled password authentication on my live Linode. I figured that I have to restart the ssh service for the change to take effect. Instead of restarting just ssh, I chose to restart the whole system. Now I can't even get to the login screen. PuTTY says "connection refused" when I try to connect. What went wrong?
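
    What most likely broke the box is that the full restore overwrote the pieces Vagrant and the VM depend on: the vagrant user and its authorized_keys, /etc/network/interfaces, /etc/fstab and the installed kernel, which explains first the SSH lockout and then the failure to boot. A safer variant restores everything except the machine-specific paths; a sketch along those lines (the exclude list is illustrative, not exhaustive, and uses the backup/2/ prefix noted above):

        # on a freshly re-created Vagrant box, before any reboot:
        sudo tar -xvpzf 2.gz -C / --strip-components=2 \
            --exclude='backup/2/boot/*'         --exclude='backup/2/lib/modules/*' \
            --exclude='backup/2/etc/network/*'  --exclude='backup/2/etc/ssh/*' \
            --exclude='backup/2/etc/fstab'      --exclude='backup/2/etc/hostname' \
            --exclude='backup/2/home/vagrant/*'
        # sanity-check that the vagrant user survived before logging out
        grep '^vagrant:' /etc/passwd && sudo service ssh status

    Excluding /etc/passwd, shadow and group as well (and merging your own user by hand) is safer still; alternatively, skip the full-system restore entirely and rsync only the application directories from the Linode into the VM, which sidesteps the kernel and bootloader mismatch altogether.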

  • ServerAlias not working

    - by Janis Peisenieks
    I have a VPS, that I have configured to host multiple websites with name based hosting. It is all good while only using example.com, and www.example.com. It also works with example.net, but when I try example.net, it reverts to my default site configuration, which just shows my default (empty) index.html page. Here's the default file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Here's a configuration for the example.com site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/www/example.com/public_html/ ErrorLog /srv/www/example.com/logs/error.log CustomLog /srv/www/example.com/logs/access.log combined <Directory /srv/www/example.com/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here is the config for the example.net site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.net ServerAlias www.example.net DocumentRoot /srv/www/example.net/public_html/ ErrorLog /srv/www/example.net/logs/error.log CustomLog /srv/www/example.net/logs/access.log combined <Directory /srv/www/example.net/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> Where could the problem be? I believe, that there is something going wrong with the ServerAlias property. Could it be because of the way the site's are built? Because example.com is a Joomla site, and example.net is a Zend Framework site. Just in case, I'll also insert the .htaccess files for example.net, since example.com has it's disabled: example.net: SetEnv APPLICATION_ENV development RewriteRule ^(browse|config).* - [L] ErrorDocument 500 /error-docs/500.shtml SetEnv CACHE_OFFSET 2678400 <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$"> Header set Expires "Fri, 25 Sep 2037 19:30:32 GMT" Header unset ETag FileETag None </FilesMatch> RewriteEngine On RewriteRule ^(adm|statistics) - [L] RewriteRule ^/public/(.*)$ http://example.net/$1 [R] RewriteRule ^(.*)$ public/$1 [L] Any help would be greatly appreciated! Edit So that my question is ABSOLUTELY clear: The problem is, that one site works with both www prefix as well as without it, and the second one does not. I would like to know how to enable the second site to work with www prefix as well.
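
    Apache itself is quick to rule out here: if the example.net vhost with its ServerAlias is actually loaded, the remaining suspects are DNS for the www record and vhost evaluation order. Two checks worth running (using the placeholder names from the question):

        # which vhosts Apache loaded, and which one is the default for *:80
        apache2ctl -S
        # does the www name even resolve to this server?
        dig +short example.net www.example.net

    If apache2ctl -S lists the example.net vhost but dig returns nothing, or a different address, for www.example.net, the fix is a DNS A or CNAME record rather than anything in Apache: a request whose Host header matches no vhost always falls through to the first-defined one, which is exactly the empty default site being served.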

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory. The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied: Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong). I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues geting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways. Frankly, my memory of today is clouded by intense frustration). Additionally, I was confused which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step? The SSL Certificate Strikes Back When I thought I had the proper SSL certificate assigned for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely? Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete. 
So I attempted to manually uninstall it following several guides online only to now be stuck unable to launch the installer and have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their Sharepoint and Exchange services to a hosted services provider. I believe it is a Sharepoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the Sharepoint application. They have come to the point where they would like to have a copy of everything that is on the Sharepoint server. I have downloaded the Office Sharepoint Designer 2007, and it features three (!) ways to backup a Sharepoint server, none (!) of which work for me: File-Export-Personal Web Package: When selecting everything, it calculates a negative size. Barfs with No "content-type" in CGI environment error. File-Export-Sharepoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error. Site-Administration-Backup Web Site: wants to create the backup .cmp file on the sharepoint server itself. I don't have access to any servers on the same network so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site is most definitely). So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no they don't actually provide any application support for OSD (not that I blame them for that), but they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up. The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another sharepoint server to restore it to. Ie we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting. We just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better easier way. So, my questions: Is the stsadm.exe backup what I want? If not, what do I want? If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD? If OSD isn't going to let me extract individual files, is there a tool I can use that can?

  • How Hacker Can Access VPS CentOS 6 content?

    - by user2118559
    Just want to understand. Please, correct mistakes and write advices Hacker can access to VPS: 1. Through (using) console terminal, for example, using PuTTY. To access, hacker need to know port number, username and password. Port number hacker can know scanning open ports and try to login. The only way to login as I understand need to know username and password. To block (make more difficult) port scanning, need to use iptables configure /etc/sysconfig/iptables. I followed this https://www.digitalocean.com/community/articles/how-to-setup-a-basic-ip-tables-configuration-on-centos-6 tutorial and got *nat :PREROUTING ACCEPT [87:4524] :POSTROUTING ACCEPT [77:4713] :OUTPUT ACCEPT [77:4713] COMMIT *mangle :PREROUTING ACCEPT [2358:200388] :INPUT ACCEPT [2358:200388] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [2638:477779] :POSTROUTING ACCEPT [2638:477779] COMMIT *filter :INPUT DROP [1:40] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [339:56132] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 21 -j ACCEPT COMMIT Regarding ports that need to be opened. If does not use ssl, then seems must leave open port 80 for website. Then for ssh (default 22) and for ftp (default 21). And set ip address, from which can connect. So if hacker uses other ip address, he can not access even knowing username and password? Regarding emails not sure. If I send email, using Gmail (Send mail as: (Use Gmail to send from your other email addresses)), then port 25 not necessary. For incoming emails at dynadot.com I use Email Forwarding. Does it mean that emails “does not arrive to VPS” (before arriving to VPS, emails are forwarded, for example to Gmail)? If emails does not arrive to VPS, then seems port 110 also not necessary. If use only ssl, must open port 443 and close port 80. Do not understand regarding port 3306 In PuTTY with /bin/netstat -lnp see Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 992/mysqld As understand it is for mysql. But does not remember that I have opened such port (may be when installed mysql, the port is opened automatically?). Mysql is installed on the same server, where all other content. Need to understand regarding port 3306 2. Also hacker may be able access console terminal through VPS hosting provider Control Panel (serial console emergency access). As understand only using console terminal (PuTTY, etc.) can make “global” changes (changes that can not modify with ftp). 3. Hacker can access to my VPS exploiting some hole in my php code and uploading, for example, Trojan. Unfortunately, faced situation that VPS was hacked. As understand it was because I used ZPanel. On VPS ( \etc\zpanel\panel\bin) ) found one php file, that was identified as Trojan by some virus scanners (at virustotal.com). Experimented with the file on local computer (wamp). And appears that hacker can see all content of VPS, rename, delete, upload etc. 
In my opinion, if I run a command like chattr +i /etc/php.ini from PuTTY, then a hacker should not be able to modify php.ini. Is there any other way to get into the VPS?
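
    On the specific question of port 3306: the listener is opened automatically by mysqld when the MySQL package starts, and since the web application lives on the same VPS, MySQL can be taken off the network entirely rather than merely firewalled. A short check-and-fix sketch (paths assume a stock CentOS 6 MySQL install):

        # see every listening socket and the process behind it
        netstat -lntp
        # bind MySQL to loopback: add the following line under the [mysqld]
        # section of /etc/my.cnf
        #     bind-address = 127.0.0.1
        service mysqld restart
        netstat -lnt | grep 3306    # should now show 127.0.0.1:3306 only

    Beyond that, the usual hardening applies: key-only SSH logins (PasswordAuthentication no in sshd_config), patching or removing panel software with known holes promptly, and something like fail2ban so repeated login attempts against the ports that must stay open get banned automatically.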

  • Server Cabinet/Room Cooling

    - by user37226
    Hello all. I currently have two desktops and three servers in my office sitting on the floor (I know this is bad). With that many servers, the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily raises the room temperature by 10 degrees or more. I have found a free 42U server cabinet that a hosting company was throwing away and have decided to house all of these systems in it. One server is in a rack-mount case while the other four systems are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor. When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door, and a rear metal door with vent holes on the bottom. First, the cooling issues. I know I am going to want cool air to enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually re-enters the cabinet. I had an idea to fix this problem, but am unsure if it will work. I was thinking of taking flexible piping and adapting it to the back fans of each computer, with the other end of the pipe at the top, close to the cabinet's top-mounted fan. I was then thinking of creating a duct around the top fan into the attic. I am very concerned that the attic will cause issues with this type of setup, because during the July/August time frame the attic is easily 120 degrees F. I could also use the flexible pipe to take it to an attic exhaust vent, if it would be better to vent it into the 100-degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet this increases the noise, and they are around $250 on Amazon. What would you all recommend? Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I possibly mount a standard home air filter on the other side of that hole? Thanks in advance for your recommendations. I look forward to reading your ideas.

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all, So I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, ie, "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like: nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED) nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED) nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED) nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED) nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED) nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED) nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED) nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED) nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED) nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED) Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder, it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it? EDIT: nginx 0.7.63, ubuntu linux, sinatra 1.0 EDIT 2: Here's the offending code. It's sinatra serving jnlp, which I finally figured out: get '/uploader' do #read in the launch.jnlp file theJNLP = "" File.open("/launch.jnlp", "r+") do |file| while theTemp = file.gets theJNLP = theJNLP + theTemp end end content_type :jnlp theJNLP end If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to be equivalent. EDIT: I have since upgraded to passenger 2.2.14, ruby 1.9.1, nginx 0.8.40, openssl 1.0.0a, and no change. EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf: # HTTPS server # server { listen 443; server_name MyServer.org root /My/Root/Dir; passenger_enabled on; expires 1d; proxy_set_header X-FORWARDED_PROTO https; proxy_set_header X_FORWARDED_PROTO https;#the almighty google is not clear on which to use location /upload { proxy_pass https://127.0.0.1:443; } } The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, again, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to ssl. So, how can I avoid the infinite proxy_pass loop in nginx?
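
    The loop is visible in the quoted config: location /upload is proxied to https://127.0.0.1:443, i.e. straight back into the same server block, so each request re-enters nginx and opens another pair of sockets until the worker exhausts its descriptors. Pointing proxy_pass at the real backend over plain HTTP (or deleting the location if Passenger already serves that path) breaks the cycle; a sketch of the change and the follow-up check (the backend port is an assumption):

        # in the server block, instead of proxying back to :443:
        #     location /upload {
        #         proxy_pass http://127.0.0.1:8080;   # whatever the upstream actually listens on
        #     }
        sudo nginx -t && sudo nginx -s reload
        # request the jnlp and confirm the worker's fd count stays flat
        curl -k -o /dev/null https://localhost/uploader
        sudo lsof -p "$(pgrep -f 'nginx: worker' | head -n 1)" | wc -l

    If the count keeps climbing after the change, the next place to look is any rewrite or redirect that sends https requests back to the same https host.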

  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD 1x16GB SSD layout on a mini-itx atom board. The NAS is in a Lian Li PC-Q07 case. On this system I was running freebsd 8 with a gmirror raid 1 setup, which was enough for my needs. So far I was using the NAS for: Fileserver with AFP protocol (only mac clients used) SVN server hosting all my source trees of my projects JIRA (performance was okay-ish) Timemachine backup for the macs The power consumption was about 38W, although I did not put HDDs asleep when unused (I think this is not possible in a raid setup). I liked the NAS because: the performance was good through gigabit LAN (enough for my needs) power consumption was good its a pretty small case and fits in one of my cupboards I disliked the NAS a bit because: it was a bit noisy, the Q07 case vibrated a good amount because of the HDDs. I switched the NAS off every evening I do not have a real backup of the data on the NAS, only the internal raid 1 as safety. I really dont want to loose my source trees under no circumstances, so I would really be sleeping better if I knew I had regular backups somewhere. Recently, the board seemed to have died, I can't boot anymore. Thus, I was thinking about a redesign of my NAS (I still have to find out what parts are broken, I probably need to replace the mainboard and SSD. HDDs seem to be okay). First of all, I was wondering what other users have as backup for their NAS? Are you actually using a second NAS, and regularly copying over the data to have it safe? Or is there any better solution to this? I was thinking about getting a cheap NAS like the synology DS112j with only one disk, and use rsync or something similar to regularly copy data over to the second NAS (wake the second NAS upon start, shut it down after copy). Although this approach seems somewhat weird, It would have the benefit (?) that I could use a single disk instead of raid in the main NAS, and put the disk asleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS? Then I was thinking about a different raid setup. Since I have to buy a new mainboard as well as SSD, I might as well switch over to a i3 board with more ram, and also switch to ZFS. I am not familar with ZFS, I've never used it, but I read and hear much about it. Would it be viable to set up a ZFS storage with only 2 disks? Can I easily extend this storage with more disks, once I choose to add some? I could maybe get a new case like the Fractal Design Array R2 which has more 3,5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for enegery/heat/noise reasons. Should I go for a ZFS storage or stick to my gmirror setup? I would also like to keep freebsd as operating system, and also I dont need any web gui or something (that is, I dont need/want to use FreeNAS or Openfiler etc). Does anyone maybe have a sample setup in use so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe w/ backups) much appreciated.
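
    On the ZFS question: a two-disk mirror is perfectly viable and behaves much like the existing gmirror, with checksumming, snapshots and compression on top, and capacity is grown later by adding another mirrored pair as a second vdev. A FreeBSD-flavoured sketch (device names and the backup host are placeholders):

        # create the pool and a dataset
        zpool create tank mirror /dev/ada1 /dev/ada2
        zfs create -o compression=lzjb tank/data
        zpool status tank
        # later: grow the pool by adding a second mirrored pair
        #   zpool add tank mirror /dev/ada3 /dev/ada4
        # nightly push to a cheap single-disk backup NAS, which can then power itself down
        rsync -aH --delete /tank/data/ backupnas:/volume1/backup/

    Two caveats: a vdev cannot be removed once added, so the layout is worth planning up front, and if the second NAS also runs ZFS, periodic zfs snapshot plus zfs send/receive is the more ZFS-native alternative to rsync.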

  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
    So I'm working on a discussion platform, and various clients will visit http://host/thread.php, which will render the discussion thread to date in addition to a form to submit a new post. When a new post is submitted, I would like all of the other clients with browser windows open to have it appear in near-real-time. One of the constraints of my script is that it may not use a DBMS and it must stay in the filesystem. Additionally, I can't use any PECL/PEAR extensions like inotify or anything like that for IPC. The flow will look like this: Client A requests thread.php and the thread is so far empty, but nonetheless it opens a Server-Side Event at eventPusher.php. Client B does the same. Client A fills out a post in the form and and submits (POSTs) it to subHandler.php. ??? (subHandler stores the new submission into the main thread storefile which gets read from when a fresh, new client requests thread.php, in addition to somehow signalling to the continually-running eventPusher event-source that a new comment was posted and that it should echo the event-json to the client. How, exactly, it will send this signal I'm yet unsure of, but there are a few options that I've thought of -- this is the crux of the question, so see below for more clarification) eventPusher.php happily pushes the new event to the client and it shows up soon after it was originally submitted on all clients who have the page open's screens. Now for the #4 missing-link mystery-step, I see a few problems. I mean, either way, eventPusher is gonna be doing a while loop of some sort -- it's gonna be polling something, I think that much is clear. (If that's a bad assumption please do let me know.) Now, the simplest way would be subHandler gets invoked on the form submission, writes it to the main store in addition to newComments.xml, then exits without doing anything else. Then eventPusher checks in newComments.xml every X seconds (by the way, what would be a reasonable time interval here?) and if it finds something then it emits an event to the client. Now, my fear with this is that the server's hard drive will have to constantly start spinning up. Maybe this isn't the case, perhaps it would just get cached in RAM and the linux kernel would take care of this transparently such that filesystem access doesn't actually engage the device because the kernel knows that that particular file hasn't changed since last read. * idea #2: I have no idea how to go about this, but perhaps there is a variable scope that gets stored in general RAM on the system which can be read by any process. Like if we mega-exported a bash variable so that $new_post is normally false but it gets toggled to true by subHandler, and then back to flase once it's pushed to the client. I doubt there's such a variable scope in PHP directly, but I struggle with the concept of variable scope, I just can't seem to understand it no matter what I read on it. * idea #3: eventPusher queries ps in its whileloop for another instance of itself. If there's not another eventPusher active then it's highly unlikely that new comments will be getting submitted. It's okay if this only works =90% of the time, it doesn't need to be completely foolproof. * idea #4: eventPusher queries DMESG to see if that file's been written to recently. 
So to sum everything up, I need to have inter-php-script-communication in near-real-time that will work on a standard mod_php shared hosting setup without any elevated privileges, PHP addon modules, or other system adjustments that can't be done from the PHP script itself at runtime. With*out* spinning up the drive more than a few times. No SQL servers either. Apologies if my english isn't the best, I'm still trying to improve on it.
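
    On the hard-drive worry specifically: repeatedly stat()-ing or reading a small file that rarely changes is answered from the kernel's page and dentry caches, so a polling loop in eventPusher does not keep the disk busy; only the occasional write from subHandler touches it. A crude way to confirm that on any Linux host (the file path and device name are assumptions; iostat is part of sysstat):

        # poll a marker file twice a second in the background...
        while :; do stat -c %Y /var/www/data/newComments.xml >/dev/null; sleep 0.5; done &
        # ...and watch the device's read counters, which should stay flat
        iostat -dx sda 5
        kill %1

    So the simple design (subHandler appends to the store, eventPusher sleeps for a second or so and compares the file's mtime against the last value it pushed) is reasonable even on shared hosting; a one-second poll interval is a sensible starting point.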

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having problem explaining. A GET request appears in the access log which is a legitimate, but private, url as part of normal e-commerce website use (by private, we mean there is a unique key in a url form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The UserAgent for the duplicates are all the same (but different from the original request). The reverse DNS lookup on the IPs for all the duplicates requests resolve to the same large hosting company. Can anyone think of a scenario what would explain this? EDIT 1 Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query as they sometimes get duplicated too): xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" UPDATE 2 Sometimes it is 
part of a checkout flow that's duplicated, so I'd think Twitter is unlikely.
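
    The pattern here (one real visitor, then a burst of identical requests from one provider's address space with a different, shared UserAgent) looks much more like a link-scanning or prefetching service sitting behind the original client than like other genuine users. Grouping the log by UserAgent and by IP usually makes that obvious; a sketch against a combined-format access log (the path is an assumption):

        # most frequent user agents
        awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
        # top client IPs, reverse-resolved to see whether they share one provider
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head | awk '{print $2}' | xargs -n 1 host

    Because the duplicated URLs carry per-session keys, the safer mitigation is to make those GETs idempotent server-side (ignore repeats of an already-processed key) rather than to chase and block the IP ranges.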

  • Postfix not sending/allowing receiving of messages after server (hardware) changed

    - by 537mfb
    We had an old notebook running Ubuntu 12.04 working as a web/ftp/mail server. It worked, but since the notebook was pretty old and unreliable, a desktop was bought to replace it before it stopped working altogether. Due to issues with the new desktop's video card we couldn't use Ubuntu 12.04, so we installed Ubuntu 13.10 and went about configuring it. Since we removed the notebook from the network, we kept the same computer name and local IP address to make things as close to the old server as possible configuration-wise. However, something has gone wrong: Postfix is throwing error 451 4.3.0 lookup failure on every attempt to send a mail, and no email can be received either. Our main.cf file is a copy of the one we were using (and working) on the old server (notice we use EHCP):
      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname
      smtpd_banner = $myhostname ESMTP $mail_name powered by Easy Hosting Control Panel (ehcp) on Ubuntu, www.ehcp.net
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      myhostname = m21-traducoes.com.pt
      relayhost =
      mydestination = localhost, 89.152.248.139
      mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 89.152.248.0/24
      virtual_alias_domains =
      virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, proxy:mysql:/etc/postfix/mysql-virtual_email2email.cf
      transport_maps = proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
      virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
      virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
      virtual_mailbox_base = /home/vmail
      virtual_uid_maps = static:5000
      virtual_gid_maps = static:5000
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      broken_sasl_auth_clients = yes
      smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
      smtp_use_tls = yes
      smtpd_use_tls = yes
      smtpd_tls_auth_only = no
      smtpd_tls_CAfile = /etc/postfix/cacert.pem
      smtpd_tls_cert_file = /etc/postfix/smtpd.cert
      smtpd_tls_key_file = /etc/postfix/smtpd.key
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_timeout = 3600s
      tls_random_source = dev:/dev/urandom
      virtual_create_maildirsize = yes
      virtual_mailbox_extended = yes
      virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailbox_limit_maps.cf
      virtual_mailbox_limit_override = yes
      virtual_maildir_limit_message = "The user you are trying to reach is over quota."
      virtual_overquota_bounce = yes
      debug_peer_list =
      sender_canonical_maps =
      debug_peer_level = 1
      proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $mynetworks $virtual_mailbox_limit_maps $transport_maps
      alias_maps = hash:/etc/aliases
      smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
      smtpd_destination_concurrency_limit = 2
      smtpd_destination_rate_delay = 1s
      smtpd_extra_recipient_limit = 10
      disable_vrfy_command = yes
      smtpd_delay_reject = yes
      smtpd_helo_required = yes
      smtpd_error_sleep_time = 1s
      smtpd_soft_error_limit = 10
      smtpd_hard_error_limit = 20
    This configuration was working before, but now every time I try to send a mail in SquirrelMail it reports: Message not sent. Server replied: Requested action aborted: error in processing 451 4.3.0 <[email protected]>: Temporary lookup failure. And I can't send mail to it from outside either. Any ideas?
    EDIT: Here are some issues MXToolBox reports for my domain, hopefully answering @Teun Vink:
                 BlackList   Mail Server   Web Server   DNS
      Error          4            0            2         0
      Warnings       0            0            0         3
      Passed         0            6            3        12
    So the domain is on some blacklist, but that doesn't explain the error at all. No mail server issues were found (except that it's not working). The two web server errors are because I don't have HTTPS working (no SSL certificate), so that test fails. The three DNS warnings were already there when it was working on the other machine and are related to things I can't control: SOA Refresh Value is outside of the recommended range, SOA Expire Value out of recommended range, SOA NXDOMAIN Value too high. I've searched, and as far as I can tell only the people we bought the domain through can change those values, and they won't.
    Edit 2: I half-solved the issue. On the new machine Postfix was installed but postfix-mysql wasn't, so it couldn't connect to the database (rookie mistake). After fixing that I can now send mail out without any issues; however, I am still not able to receive mail from outside. The sender doesn't get any non-delivery warning, but the message never lands in the inbox and the log shows:
      Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: NOQUEUE: reject: RCPT from relay4.ptmail.sapo.pt[212.55.154.24]: 451 4.3.5 <relay4.ptmail.sapo.pt[212.55.154.24]>: Client host rejected: Server configuration error; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<sapo.pt>
      Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: disconnect from relay4.ptmail.sapo.pt[212.55.154.24]
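    For anyone hitting the same pair of errors, a short checklist like the one below may help. It assumes the map paths from the main.cf above and a Debian/Ubuntu-style Postfix install; the mailbox address is a placeholder, and when testing a proxy:mysql map by hand the proxy: prefix is dropped.
      # 451 4.3.0 "Temporary lookup failure" is typical when the mysql map type is missing
      sudo apt-get install postfix-mysql
      postconf -m | grep -i mysql     # "mysql" should now be listed as a supported map type

      # Test a virtual mailbox lookup the same way smtpd would
      postmap -q someuser@yourdomain.tld mysql:/etc/postfix/mysql-virtual_mailboxes.cf

      # "Client host rejected: Server configuration error" usually means a restriction lookup failed;
      # here the obvious candidate is the pop-before-smtp hash in smtpd_recipient_restrictions
      ls -l /var/lib/pop-before-smtp/hosts.db || sudo postmap hash:/var/lib/pop-before-smtp/hosts

      sudo postfix reload
      tail -f /var/log/mail.log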

    Read the article

  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx & php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thinggy. The server is running in a shared environment, but we're changing that soon. Anyway.. after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc.. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time. When i start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats after me logging in during a slow-down, after restarting php-fpm: [mike@front01 ~]$ free -m total used free shared buffers cached Mem: 5963 5217 745 0 192 314 -/+ buffers/cache: 4711 1252 Swap: 4047 4 4042 [mike@front01 ~]$ top top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21 Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31806 apache 15 0 601m 120m 37m S 0.0 2.0 0:22.23 php-fpm 31805 apache 15 0 549m 66m 31m S 0.0 1.1 0:14.54 php-fpm 31809 apache 16 0 547m 65m 32m S 0.0 1.1 0:12.84 php-fpm 32285 apache 15 0 546m 63m 33m S 0.0 1.1 0:09.22 php-fpm 32373 apache 15 0 546m 62m 32m S 0.0 1.1 0:09.66 php-fpm 31808 apache 16 0 543m 60m 35m S 0.0 1.0 0:18.93 php-fpm 31807 apache 16 0 533m 49m 30m S 0.0 0.8 0:08.93 php-fpm 32092 apache 15 0 535m 48m 27m S 0.0 0.8 0:06.67 php-fpm 4392 root 18 0 194m 10m 7184 S 0.0 0.2 0:06.96 cvd 4064 root 15 0 154m 8304 4220 S 0.0 0.1 3:55.57 snmpd 4394 root 15 0 119m 5660 2944 S 0.0 0.1 0:02.84 EvMgrC 31804 root 15 0 519m 5180 932 S 0.0 0.1 0:00.46 php-fpm 4138 ntp 15 0 23396 5032 3904 S 0.0 0.1 0:02.38 ntpd 643 nginx 15 0 95276 4408 1524 S 0.0 0.1 0:01.15 nginx 5131 root 16 0 90128 3340 2600 S 0.0 0.1 0:01.41 sshd 28467 root 15 0 90128 3340 2600 S 0.0 0.1 0:00.35 sshd 32602 root 16 0 90128 3332 2600 S 0.0 0.1 0:00.36 sshd 1614 root 16 0 90128 3308 2588 S 0.0 0.1 0:00.02 sshd 2817 root 5 -10 7216 3140 1724 S 0.0 0.1 0:03.80 iscsid 4161 root 15 0 66948 2340 800 S 0.0 0.0 0:10.35 sendmail 1617 nicole 17 0 53876 2000 1516 S 0.0 0.0 0:00.02 sftp-server ... Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work.. Help me out, ServerFault!
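    Not an answer, but two quick checks that separate "Linux ate my RAM" from a real leak, plus the usual mitigation. A rough sketch: it assumes php-fpm runs under the process name shown in the top output above, and the pool directive names are standard php-fpm settings with illustrative values only.
      # Memory used by applications once buffers/cache are excluded
      free -m | awk '/buffers\/cache/ {print "apps use " $3 " MiB, " $4 " MiB free"}'

      # Total resident memory of the php-fpm workers; if this climbs steadily while
      # traffic stays flat, the workers are bloating and need to be recycled
      ps -C php-fpm -o rss= | awk '{sum+=$1} END {printf "php-fpm RSS: %.0f MiB across %d workers\n", sum/1024, NR}'

      # Typical pool settings to cap and recycle workers (values are examples only):
      #   pm.max_children = 8
      #   pm.max_requests = 500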

    Read the article

  • Trying to connect phpMyAdmin to remote mySQL server ( 2002: can't connect )

    - by Malcolm Jones
    Trying to get phpMyAdmin to talk to a remote mySQL server. The config is below and there is already a user set up in mySQL DB to be able to log in from the specified host that PMA sits on. Hosting is provided by Rackspace (Rightscale) and both cloud servers behind the same firewall. [config.inc.php] <?php $cfg['blowfish_secret'] = ''; $i = 0; $i++; $cfg['Servers'][$i]['host'] = 'XX.XX.XX.XX'; // MySQL hostname or IP address $cfg['Servers'][$i]['port'] = ''; // MySQL port - leave blank for default port $cfg['Servers'][$i]['socket'] = ''; // Path to the socket - leave blank for default socket $cfg['Servers'][$i]['connect_type'] = 'tcp'; // How to connect to MySQL server ('tcp' or 'socket') $cfg['Servers'][$i]['extension'] = 'mysql'; // The php MySQL extension to use ('mysql' or 'mysqli') $cfg['Servers'][$i]['compress'] = FALSE; // Use compressed protocol for the MySQL connection // (requires PHP >= 4.3.0) $cfg['Servers'][$i]['controluser'] = ''; // MySQL control user settings // (this user must have read-only $cfg['Servers'][$i]['controlpass'] = ''; // access to the "mysql/user" // and "mysql/db" tables). // The controluser is also // used for all relational // features (pmadb) $cfg['Servers'][$i]['auth_type'] = 'config'; // Authentication method (config, http or cookie based)? $cfg['Servers'][$i]['user'] = 'USERNAME'; // MySQL user $cfg['Servers'][$i]['password'] = 'PASSWORD'; // MySQL password (only needed // with 'config' auth_type) $cfg['Servers'][$i]['only_db'] = ''; // If set to a db-name, only // this db is displayed in left frame // It may also be an array of db-names, where sorting order is relevant. $cfg['Servers'][$i]['hide_db'] = ''; // Database name to be hidden from listings $cfg['Servers'][$i]['verbose'] = ''; // Verbose name for this host - leave blank to show the hostname $cfg['Servers'][$i]['pmadb'] = ''; // Database used for Relation, Bookmark and PDF Features // (see scripts/create_tables.sql) // - leave blank for no support // DEFAULT: 'phpmyadmin' $cfg['Servers'][$i]['bookmarktable'] = ''; // Bookmark table // - leave blank for no bookmark support // DEFAULT: 'pma_bookmark' $cfg['Servers'][$i]['relation'] = ''; // table to describe the relation between links (see doc) // - leave blank for no relation-links support // DEFAULT: 'pma_relation' $cfg['Servers'][$i]['table_info'] = ''; // table to describe the display fields // - leave blank for no display fields support // DEFAULT: 'pma_table_info' $cfg['Servers'][$i]['table_coords'] = ''; // table to describe the tables position for the PDF schema // - leave blank for no PDF schema support // DEFAULT: 'pma_table_coords' $cfg['Servers'][$i]['pdf_pages'] = ''; // table to describe pages of relationpdf // - leave blank if you don't want to use this // DEFAULT: 'pma_pdf_pages' $cfg['Servers'][$i]['column_info'] = ''; // table to store column information // - leave blank for no column comments/mime types // DEFAULT: 'pma_column_info' $cfg['Servers'][$i]['history'] = ''; // table to store SQL history // - leave blank for no SQL query history // DEFAULT: 'pma_history' $cfg['Servers'][$i]['verbose_check'] = TRUE; // set to FALSE if you know that your pma_* tables // are up to date. This prevents compatibility // checks and thereby increases performance. 
$cfg['Servers'][$i]['AllowRoot'] = TRUE; // whether to allow root login $cfg['Servers'][$i]['AllowDeny']['order'] // Host authentication order, leave blank to not use = ''; $cfg['Servers'][$i]['AllowDeny']['rules'] // Host authentication rules, leave blank for defaults = array(); Please let me know if you need anymore info. -- Malcolm
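    A 2002 error means phpMyAdmin never established a TCP (or socket) connection at all, so it is worth testing the path from the PMA box to MySQL outside of PHP first. A quick sketch using the XX.XX.XX.XX and USERNAME placeholders from the config above:
      # Is port 3306 reachable through the firewall at all?
      nc -zv XX.XX.XX.XX 3306

      # Does MySQL accept this user from this client host?
      mysql -h XX.XX.XX.XX -u USERNAME -p -e 'SELECT 1;'

      # On the MySQL server itself: make sure it is not bound to localhost only
      grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf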

    Read the article

  • Email from my new vps is marked as spam

    - by Chriswede
    I got a new vps from x10vps (x10hosting) and set up the domain via cloudflare. This is what the email looks like: Delivered-To: [email protected] Received: by 10.64.19.240 with SMTP id i16csp357708iee; Tue, 9 Oct 2012 01:29:48 -0700 (PDT) Received: by 10.50.57.130 with SMTP id i2mr908846igq.56.1349771387599; Tue, 09 Oct 2012 01:29:47 -0700 (PDT) Return-Path: <[email protected]> Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id v8si25630942ica.46.2012.10.09.01.29.46 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:29:47 -0700 (PDT) Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116; Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected] Received: from nk11p03mm-asmtp010.mac.com ([17.158.232.169]:54276) by power.SOURCEAPE.COM with esmtp (Exim 4.80) (envelope-from <[email protected]>) id 1TLVBD-0004Ig-1Y for [email protected]; Tue, 09 Oct 2012 12:28:43 +0400 I then tried to enable SPF and DKIM and got following massage In order to ensure that SPF or DKIM takes effect, you must confirm that this server is an authoritative nameserver for chvw.de. If you need help, contact your hosting provider. Status: Enabled Warning: cPanel is unable to verify that this server is an authoritative nameserver for chvw.de. [?] and the email header now looks like this: Delivered-To: [email protected] Received: by 10.50.183.227 with SMTP id ep3csp14506igc; Tue, 9 Oct 2012 01:55:23 -0700 (PDT) Received: by 10.50.40.133 with SMTP id x5mr992934igk.32.1349772923717; Tue, 09 Oct 2012 01:55:23 -0700 (PDT) Return-Path: <[email protected]> Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id ng8si25688859icb.42.2012.10.09.01.55.23 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:55:23 -0700 (PDT) Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116; Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]; dkim=neutral (bad format) [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=chvw.de; s=default; h=Message-ID:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=iugsx3Lx0KnqjR7dj3wyQHnJ9pe/z3ntYEVk80k8rx4=; b=IrYsCtHdoPubXVOvLqxd7sLE/TyQTS5P3OrEg5SSUSKnQQcQ/fWWyBrmsrgkFSsw6jCmmRWMDR09vH5bQRpFPMA57B7pf8QRKhwXOWFBV+GnVUqICsfRjnNPvhx/lNp5; Received: from localhost ([127.0.0.1]:46539 helo=direct.chvw.de) by power.SOURCEAPE.COM with esmtpa (Exim 4.80) (envelope-from <[email protected]>) id 1TLVb0-0004dZ-Kd for [email protected]; Tue, 09 Oct 2012 12:55:22 +0400
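    The "temperror ... DNS timeout" from Google means it could not fetch the SPF record at all, and "dkim=neutral (bad format)" usually means the published key does not match what the signer used, so the first thing worth checking is what outside resolvers actually see. A rough sketch; the selector "default" is taken from the s= tag in the DKIM-Signature header above.
      # Which nameservers answer for the zone, and do they match what cPanel expects?
      dig +short NS chvw.de

      # Is there exactly one SPF TXT record, and does it cover the sending host?
      dig +short TXT chvw.de

      # Is the DKIM public key published under the selector used in the signature?
      dig +short TXT default._domainkey.chvw.de

      # Query a public resolver directly to rule out a flaky authoritative server
      dig @8.8.8.8 +short TXT chvw.de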

    Read the article

  • What's going on with my server? High load, lots of idle CPU time, low disk utilization

    - by Jonathan
    I run a web site and send a legitimate opt-in, daily email newsletter to subscribers. Both the web hosting and email sending are done by the same machine. I have about 100,000 subscribers who have opted in to my daily email newsletter. My PHP script did a pretty good job sending mail to all of them until fairly recently, but as the list has grown I can't keep up. When I run top, I have very high load--usually at least 6 or 7, sometimes as high as 15--even though I only have two CPUs. However, when I run sar, my CPU is idle an average of about 30% of the time. So, it seems I'm not CPU bound. When I run iostat, it seems as though I'm not disk bound because my %util for each device is very low (no more than 5%). Given that I don't seem to be CPU bound or disk bound, why is top reporting such high load? Additionally, since I don't seem to be CPU bound or disk bound, why is my email sending script not able to keep up? Here's what I see when running top: top - 11:33:28 up 74 days, 18:49, 2 users, load average: 7.65, 8.79, 8.28 Tasks: 168 total, 5 running, 162 sleeping, 0 stopped, 1 zombie Cpu(s): 38.9%us, 58.6%sy, 0.8%ni, 0.0%id, 0.7%wa, 0.2%hi, 0.8%si, 0.0%st Mem: 3083012k total, 2144436k used, 938576k free, 281136k buffers Swap: 2048248k total, 39164k used, 2009084k free, 1470412k cached Here's what I see when running iostat -mx: avg-cpu: %user %nice %system %iowait %steal %idle 34.80 1.20 55.24 0.37 0.00 8.38 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sda 0.19 71.70 1.59 29.45 0.02 0.07 5.90 0.55 17.82 1.16 3.59 sda1 0.00 0.00 0.00 0.00 0.00 0.00 7.10 0.00 13.80 13.72 0.00 sda2 0.05 50.45 1.13 24.57 0.01 0.29 24.25 0.35 13.43 1.15 2.97 sda3 0.05 10.17 0.20 2.33 0.01 0.05 43.75 0.05 20.96 2.45 0.62 sda4 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 70.50 70.50 0.00 sda5 0.07 0.22 0.03 0.07 0.00 0.00 32.84 0.08 856.19 8.03 0.08 sda6 0.02 5.45 0.03 0.72 0.00 0.02 67.55 0.02 26.72 5.26 0.39 sda7 0.00 1.56 0.00 0.42 0.00 0.01 38.04 0.00 8.88 5.84 0.24 sda8 0.01 3.84 0.20 1.35 0.00 0.02 28.55 0.05 31.90 4.08 0.63 Here's what I see when running sar: 09:40:02 AM CPU %user %nice %system %iowait %steal %idle 09:50:01 AM all 30.59 1.01 49.80 0.23 0.00 18.37 10:00:08 AM all 31.73 0.92 51.66 0.13 0.00 15.55 10:10:06 AM all 30.43 0.99 48.94 0.26 0.00 19.38 10:20:01 AM all 29.58 1.00 47.76 0.25 0.00 21.42 10:30:01 AM all 29.37 1.02 47.30 0.18 0.00 22.13 10:40:06 AM all 32.50 1.01 52.94 0.16 0.00 13.39 10:50:01 AM all 30.49 1.00 49.59 0.15 0.00 18.77 11:00:01 AM all 29.43 0.99 47.71 0.17 0.00 21.71 11:10:07 AM all 30.26 0.93 49.48 0.83 0.00 18.50 11:20:02 AM all 29.83 0.81 48.51 1.32 0.00 19.52 11:30:06 AM all 31.18 0.88 51.33 1.15 0.00 15.47 Average: all 26.21 1.15 42.62 0.48 0.00 29.54 Here are the top handful of processes listed at the particular time I happened to run top -c: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8180 mysql 16 0 57448 19m 2948 S 26.6 0.7 4702:26 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/bristno.pid --skip-external-locking 26956 brristno 17 0 0 0 0 Z 8.0 0.0 0:00.24 [php] <defunct> 26958 brristno 17 0 94408 43m 37m R 5.0 1.4 0:00.15 /usr/bin/php /home/brristno/public_html/dbv.php 22852 nobody 16 0 9628 2900 1524 S 0.7 0.1 0:00.17 /usr/local/apache/bin/httpd -k start -DSSL 8591 brristno 34 19 96896 13m 6652 S 0.3 0.4 0:29.82 /usr/local/bin/php /home/brristno/bin/mailer.php 1qwqyb6 i0gbor 24469 nobody 16 0 9628 2880 1508 S 0.3 0.1 0:00.08 /usr/local/apache/bin/httpd -k start -DSSL 25495 
nobody 15 0 9628 2876 1500 S 0.3 0.1 0:00.06 /usr/local/apache/bin/httpd -k start -DSSL 26149 nobody 15 0 9628 2864 1504 S 0.3 0.1 0:00.04 /usr/local/apache/bin/httpd -k start -DSSL
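    One thing worth checking when load is high but the CPU reports idle time and the disks look quiet: the load average also counts tasks in uninterruptible sleep (state D), and a script pushing mail to 100,000 addresses can pile up processes waiting on the MTA or on DNS. A rough diagnostic sketch; the mailq command assumes the stock sendmail visible in the top output.
      # Anything stuck in uninterruptible sleep (D) or crowding the run queue?
      ps -eo state,pid,comm | awk '$1 ~ /D|R/'
      vmstat 5 5          # watch the r (runnable) and b (blocked) columns

      # How big is the outbound mail queue?
      mailq | tail -n 1

      # High %sy with modest %us can also come from heavy context switching
      sar -w 5 5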

    Read the article

  • Using PHP to connect to RADIUS works on one server but not another

    - by JDS
    I have a fleet of webservers that serve a LAMP webapp, broken into multiple customer apps by virtualhost/domain. The platform is Ubuntu 10.04 VM + PHP 5.3 + Apache 2.2.14, on top of VMware ESX (v4 I think). This stuff's not too important, though; I'm just setting up the background. I have one customer that connects to a RADIUS server for authentication. We've found that the app responds as if some number of web servers are configured correctly and some are not, i.e. apparently random authentication failures or successes, with no rhyme or reason. I did a lot of analysis of our fleet and narrowed it down to the differences between two specific web servers. I'll call them "A" and "B". "A" works. "B" does not. "Works" means "connects to and gets authentication data successfully from the RADIUS server". Ultimately, I'm looking for one thing that is different, and I've exhausted everything that I can come up with, so I'm looking for something else. Here are the things I've looked at:
    PHP package versions (all from Ubuntu repos). These are exactly the same across servers.
    PECL packages. There are no PECL packages that aren't installed by apt.
    Other libraries or packages. Nothing that was network-related or RADIUS-related was different among servers. (There were some minor package differences, though.)
    Network or hosting environment. I found that some of the working servers were in the same physical environment as some not-working ones (i.e. same ESX containers). So, probably, the physical network layer is not the problem.
    Test case. I created a test case as follows. It works on the working servers and fails on the not-working servers, very consistently.
      <?php
      $radius = radius_auth_open();
      $username = 'theusername';
      $password = 'thepassword';
      $hostname = '12.34.56.78';
      $radius_secret = '39wmmvxghg';
      if (! radius_add_server($radius,$hostname,0,$radius_secret,5,3)) {
          die('Radius Error 1: ' . radius_strerror($radius) . "\n");
      }
      if (! radius_create_request($radius,RADIUS_ACCESS_REQUEST)) {
          die('Radius Error 2: ' . radius_strerror($radius) . "\n");
      }
      radius_put_attr($radius,RADIUS_USER_NAME,$username);
      radius_put_attr($radius,RADIUS_USER_PASSWORD,$password);
      switch (radius_send_request($radius)) {
          case RADIUS_ACCESS_ACCEPT:
              echo 'GOOD LOGIN';
              break;
          case RADIUS_ACCESS_REJECT:
              echo 'BAD LOGIN';
              break;
          case RADIUS_ACCESS_CHALLENGE:
              echo 'CHALLENGE REQUESTED';
              break;
          default:
              die('Radius Error 3: ' . radius_strerror($radius) . "\n");
      }
      ?>
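    If it helps to take PHP out of the picture entirely: the same Access-Request can be sent from each web server with radtest (from the freeradius-utils package), and tcpdump shows whether the request leaves the box and whether a reply ever comes back. A sketch using the placeholder host and secret from the test case above:
      sudo apt-get install freeradius-utils

      # arguments: user, password, server, NAS port number, shared secret
      radtest theusername thepassword 12.34.56.78 0 39wmmvxghg

      # Watch the RADIUS exchange on the wire (authentication 1812, accounting 1813)
      sudo tcpdump -ni any udp port 1812 or udp port 1813
    Running both on server "A" and server "B" should show quickly whether the difference is at the packet level or inside the PHP radius extension.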

    Read the article

  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning here for completeness. My httpd.conf looks roughly like this: <VirtualHost *:80> ServerName external.example.com ServerAlias example.com # Serve all local content directly, reverse-proxy all unknown URIs. RewriteEngine On RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d RewriteRule ^.*$ - [L] RewriteRule ^/~ - [L] RewriteRule ^(.*)$ http://internal.example.com$1 [P] # Standard header rewriting. ProxyPassReverse / http://internal.example.com/foo/ ProxyPassReverseCookieDomain internal.example.com external.example.com ProxyPassReverseCookiePath /foo/ / # Strip any Accept-Encoding: headers from the client so we can process the pages # as plain text. RequestHeader unset Accept-Encoding # Use mod_proxy_html to fix URLs in text/html content. ProxyHTMLEnable On ProxyHTMLURLMap http://internal.example.com/foo/ / ProxyHTMLURLMap http://internal.example.com/foo / ProxyHTMLURLMap /foo/ / ## Use mod_substitute to fix URLs in CSS and Javascript #<Location /> # AddOutputFilterByType SUBSTITUTE text/css # AddOutputFilterByType SUBSTITUTE text/javascript # Substitute "s|http://internal.example.com/foo/|/|nq" #</Location> # Use mod_ext_filter to fix URLs in CSS and Javascript ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls" ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls" <Location /> SetOutputFilter fixurlcss;fixurljs </Location> </VirtualHost> The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session: $ wget -O /dev/null -S http://external.example.com/include/jquery.js --2013-11-01 11:36:36-- http://external.example.com/include/jquery.js Resolving external.example.com (external.example.com)... 192.168.0.1 Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... 
HTTP/1.1 200 OK Date: Fri, 01 Nov 2013 15:36:36 GMT Server: Apache Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT ETag: "1d60026-187b8-4e9e0ec273e35" Accept-Ranges: bytes Vary: Accept-Encoding X-UA-Compatible: IE=edge,chrome=1 Content-Type: text/javascript;charset=utf-8 Connection: close Transfer-Encoding: chunked Length: unspecified [text/javascript] Saving to: `/dev/null' [ <=> ] 100,280 --.-K/s in 0.005s 2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success).Retrying. --2013-11-01 11:36:38-- (try: 2) http://external.example.com/include/jquery.js Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 416 Requested Range Not Satisfiable Date: Fri, 01 Nov 2013 15:36:38 GMT Server: Apache Vary: Accept-Encoding Content-Type: text/html;charset=utf-8 Content-Length: 260 Connection: close The file is already fully retrieved; nothing to do. Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs and upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)
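    For anyone comparing filtered and unfiltered responses: a chunked body must end with a zero-length chunk ("0" followed by an empty line), and curl's --raw option prints the chunk framing exactly as it came off the wire. A rough sketch with the example hostnames from above:
      # Through the proxy, with the ext_filter/substitute output filters active
      curl -s --raw http://external.example.com/include/jquery.js | tail -c 32 | od -c

      # Straight from the backend, bypassing the rewriting filters
      curl -s --raw http://internal.example.com/foo/include/jquery.js | tail -c 32 | od -c

      # A well-formed chunked response ends with:  0 \r\n \r\n
    If the terminating chunk is present from the backend but missing through the proxy, that narrows it to the output filter chain rather than mod_proxy itself.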

    Read the article

  • WCF wsHttpBinding "There was no channel that could accept the message with action"

    - by Steffen Schindler
    I have a web service in IIS. I'm trying to call a function, but I get an error message like: There was no channel that could accept the message with action 'http://Datenlotsen.Cyquest/ICyquestService/ValidateSelfAssessment'. I'm hosting it in IIS under the default website, where I created a virtual directory named "CyQuestwebservice". For the client-side config I'm using SoapUI; that tool generates the client config from the WSDL. My web.config looks like this; can you help me?
      <system.serviceModel>
        <extensions>
          <behaviorExtensions>
            <add name="wsdlExtensions" type="WCFExtras.Wsdl.WsdlExtensionsConfig, WCFExtras, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
          </behaviorExtensions>
        </extensions>
        <services>
          <service behaviorConfiguration="CyquestWebService.Service1Behavior" name="CyquestWebService.CyquestService">
            <endpoint address="" behaviorConfiguration="EndPointBehavior" binding="wsHttpBinding" bindingNamespace="http://Datenlotsen.Cyquest" contract="CyquestWebService.ICyquestService">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpBinding" bindingNamespace="http://Datenlotsen.Cyquest" contract="IMetadataExchange" />
          </service>
        </services>
        <behaviors>
          <endpointBehaviors>
            <behavior name="EndPointBehavior">
              <wsdlExtensions location="http://wssdev04.datenlotsen.intern/Cyquestwebservice/CyquestService.svc" singleFile="True"/>
            </behavior>
          </endpointBehaviors>
          <serviceBehaviors>
            <behavior name="CyquestWebService.Service1Behavior">
              <serviceMetadata httpGetEnabled="true" />
              <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
      <system.diagnostics>
        <sources>
          <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
            <listeners>
              <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\log\Traces.svclog" />
            </listeners>
          </source>
        </sources>
      </system.diagnostics>
      </configuration>

    Read the article

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is ok - not as good as more mature TDD plug-ins, but a good start. I am looking for some best Silverlight 4.0 TDD practices. First Question: Anyone have links, recommendations? I know the new Silverlight Unit Test capabilities are much better (Jeff Wilcox's Mix presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I create an empty VS solution, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project but a plain Test Project), and add a simple test in the Test Project such as:
      namespace Calculator.Test
      {
          [TestClass]
          public class CalculatorTests
          {
              [TestMethod]
              public void CalulatorAddTest()
              {
                  Calc c = new Calc();
                  int expected = 10;
                  int actual = c.Add(6, 4);
                  Assert.AreEqual<int>(expected, actual);
              }
          }
      }
    Using the new Generate Type and Method from Test feature, it generates the following code in the Silverlight project:
      namespace Calculator
      {
          public class Calc
          {
              public int Add(int p, int p_2)
              {
                  throw new NotImplementedException();
              }
          }
      }
    When I run the tests the first time, it says the target assembly is Silverlight and it is not able to run the test (not the exact text, but the same general idea). When I change the implementation to:
      namespace Calculator
      {
          public class Calc
          {
              public int Add(int p, int p_2)
              {
                  return p + p_2;
              }
          }
      }
    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate afterwards. I also get a warning mark on the Test Project's reference to the Calculator Silverlight Class Library assembly. Second Question: Any comments or ideas on whether this is just a bug in VS2010 RC, or is Silverlight Class Library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL. Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third Question: Any ideas on how to TDD a Silverlight 4.0 Class Library project that requires elevated OOB rights? Thanks!

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: we are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (the new version of ADAM in Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer, and under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). Each user holds one or more certificates in the UserCertificate attribute. A combination of in-house written application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code), and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it into LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP-compliant directory to host these certificates.) We have fully tested the application using the data in LDS and everything works fine. Here is my dilemma and question (finally, phew!): there was no process put in place for removing revoked or expired certificates, so the vast majority of the data is completely useless; the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory, I would like to start with a clean directory. Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter, but have some background in VB. I have been researching the use of CAPICOM and have a feeling it could be used for this, but in exactly what way I am not sure. I would prefer to write a script where I could specify an expiration date (say, any certs that expired prior to 2010) and then run it against the LDS partition. That way I can reuse the script periodically to clean up the directory (as mentioned above, I have no way to adjust the applications that are writing the certs; that is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice MUCH appreciated. Cheers Jon
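    Not a CAPICOM answer, but as a sketch of the general shape of the job from any box with OpenLDAP client tools and OpenSSL: pull each userCertificate value out of the partition, ask OpenSSL for its expiry date, and report the entries older than a cutoff. Everything below is an assumption to adapt rather than something tested against AD LDS: the bind DN and host are invented, it assumes one certificate value per user, and ldif-wrap=no needs a reasonably recent ldapsearch. Turning the report into LDIF delete operations (or doing the same via PowerShell/CAPICOM on Windows) would be the next step.
      #!/bin/bash
      HOST="ldap://lds-host:389"                  # hypothetical LDS host
      BIND="CN=admin,O=myorg,C=AU"                # hypothetical bind DN
      BASE="o=myorg,c=AU"
      CUTOFF=$(date -d '2010-01-01' +%s)

      # Dump DN + certificate (base64) pairs, one attribute value per line
      ldapsearch -LLL -o ldif-wrap=no -H "$HOST" -D "$BIND" -W -b "$BASE" \
          '(userCertificate=*)' userCertificate \
      | awk '/^dn: /{dn=$0} /^userCertificate/{print dn "\t" $NF}' \
      | while IFS=$'\t' read -r dn b64; do
          # Decode the DER certificate and compare its notAfter date with the cutoff
          end=$(echo "$b64" | base64 -d | openssl x509 -inform DER -noout -enddate | cut -d= -f2)
          if [ "$(date -d "$end" +%s)" -lt "$CUTOFF" ]; then
              echo "EXPIRED ($end): $dn"
          fi
        done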

    Read the article

< Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >