Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

  • PHP suddenly failed after IIS update

    - by James Hay
    All my application pools were stopped this morning after I got to work. I can restart them, but when I try to load the website the app pool crashes again. Update: I've looked in the GAC as the error below suggests and it seems that the file is not there. How do I get it back? Update 2: I found a further error in the event log saying The Module name FastCgiModule path C:\WINDOWS\System32\inetsrv\iisfcgi.dll returned an error from registration. The data is the error. So following the information from here http://forums.iis.net/t/1153937.aspx I removed CGI and my sites are working again. This has fixed the initial problem, but now I don't have FastCGI so I'm fairly sure that PHP will no longer be working (I don't have any PHP at the moment to test). Original Post I'm getting this error in the event viewer: IISMANAGER_ERROR_LOADING_PROVIDER_TYPE IIS Manager could not load type 'Web.Management.PHP.PHPProvider, Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' for module provider 'PHP' that is declared in %windir%\system32\inetsrv\config\administration.config. Verify that the type is correct, and that the assembly that contains the module provider is in the Global Assembly Cache (GAC). Exception:System.IO.FileNotFoundException: Could not load file or assembly 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' or one of its dependencies. The system cannot find the file specified. File name: 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.Type.GetType(String typeName, Boolean throwOnError) at Microsoft.Web.Management.Server.AdministrationModuleProvider.GetModuleProvider(String userName, String connectionName) WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. Process:InetMgr Connection:CT211511\Administrator Everything was working fine last night when I left work, and since they've done the maintenance it's all broken.
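
    A hedged recovery sketch: on Server 2008 R2 / Windows 7-era IIS, the CGI/FastCGI role service that was removed can be put back with DISM, and the missing Web.Management.PHP assembly is normally placed in the GAC by reinstalling PHP Manager for IIS (the feature and package names below are the usual ones, not confirmed by the post):

      REM re-enable the CGI/FastCGI role service that was removed
      dism /online /enable-feature /featurename:IIS-CGI
      REM confirm the FastCGI module registered cleanly this time
      %windir%\system32\inetsrv\appcmd.exe list modules | findstr /i fastcgi
      REM then reinstall PHP Manager for IIS so Web.Management.PHP is back in the GAC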

    Read the article

  • How to set up mod_wsgi for Python on Ubuntu

    - by AutomatedTester
    Hi, I am trying to setup MOD_WSGI on my Ubuntu box. I have found steps that said I needed to do the following steps I found at http://ubuntuforums.org/showthread.php?t=833766 sudo apt-get install libapache2-mod-wsgi sudo a2enmod mod-wsgi sudo /etc/init.d/apache2 restart sudo gedit /etc/apache2/sites-available/default and update the Directory <Directory /var/www/> Options Indexes FollowSymLinks MultiViews ExecCGI AddHandler cgi-script .cgi AddHandler wsgi-script .wsgi AllowOverride None Order allow,deny allow from all </Directory> sudo /etc/init.d/apache2 restart Created test.wsgi with def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Step 2 fails because it says it can't find mod-wsgi even though the apt-get found it. If I carry on with the steps the python app just shows as plain text in a browser. Any ideas what I have done wrong? EDIT: Results for questions asked automatedtester@ubuntu:~$ dpkg -l libapache2-mod-wsgi Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Description +++-======================================-======================================-============================================================================================ ii libapache2-mod-wsgi 2.5-1 Python WSGI adapter module for Apache automatedtester@ubuntu:~$ dpkg -s libapache2-mod-wsgi Package: libapache2-mod-wsgi Status: install ok installed Priority: optional Section: python Installed-Size: 376 Maintainer: Ubuntu MOTU Developers <[email protected]> Architecture: i386 Source: mod-wsgi Version: 2.5-1 Depends: apache2, apache2.2-common, libc6 (>= 2.4), libpython2.6 (>= 2.6), python (>= 2.5), python (<< 2.7) Suggests: apache2-mpm-worker | apache2-mpm-event Conffiles: /etc/apache2/mods-available/wsgi.load 06d2b4d2c95b28720f324bd650b7cbd6 /etc/apache2/mods-available/wsgi.conf 408487581dfe024e8475d2fbf993a15c Description: Python WSGI adapter module for Apache The mod_wsgi adapter is an Apache module that provides a WSGI (Web Server Gateway Interface, a standard interface between web server software and web applications written in Python) compliant interface for hosting Python based web applications within Apache. The adapter provides significantly better performance than using existing WSGI adapters for mod_python or CGI. Original-Maintainer: Debian Python Modules Team <[email protected]> Homepage: http://www.modwsgi.org/ automatedtester@ubuntu:~$ sudo a2enmod libapache2-mod-wsgi ERROR: Module libapache2-mod-wsgi does not exist! automatedtester@ubuntu:~$ sudo a2enmod mod-wsgi ERROR: Module mod-wsgi does not exist! 
FURTHER EDIT FOR RMYates automatedtester@ubuntu:~$ apache2ctl -t -D DUMP_MODULES apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_worker_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgid_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) python_module (shared) setenvif_module (shared) status_module (shared) Syntax OK automatedtester@ubuntu:~$
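
    For what it's worth, the Apache module is registered under the name wsgi (not mod-wsgi), and mod_wsgi applications are normally mapped with WSGIScriptAlias rather than AddHandler; a minimal sketch, with paths assumed from the post:

      sudo a2enmod wsgi                     # the name a2enmod knows is "wsgi"
      sudo /etc/init.d/apache2 restart

      # in /etc/apache2/sites-available/default, map a URL to the test script
      WSGIScriptAlias /test /var/www/test.wsgi
      <Directory /var/www>
          Order allow,deny
          Allow from all
      </Directory>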

    Read the article

  • Hyper-V + RRAS NAT + Port Forwarding + RDP, can I get it all working together?

    - by Tom Bull
    I am running a Windows 2008 R2 server with various services running natively and two virtualised servers running on Hyper-V. The hardware server, I'm going to call it REAL1, has one external NIC, to which I can assign any of the following IP addresses: 1.2.3.4, 1.2.3.5, 1.2.3.6, etc... I need to achieve the following: I would like to be able to connect to REAL1 via remote desktop (RDP / port 3389) on one IP address (say 1.2.3.4), but also to the virtualised servers (I'm going to call them VIRTUAL1 and VIRTUAL2) on the other available IP addresses (say 1.2.3.5 and 1.2.3.6). The easiest way of doing this is to connect the virtual servers directly to the external interface and assign them each their own IP address. REAL1 will have 1.2.3.4, VIRTUAL1 will have 1.2.3.5 and VIRTUAL2 will have 1.2.3.6. Unfortunately, although I don't directly manage the two virtual servers, I have responsibility for their security. I would like to have some kind of firewall between the virtual servers an the internet. I have tried running a virtual machine firewall, but have found the performance on Hyper-V pretty terrible. The alternative I am now trying is Routing and Remote Access (RRAS): I have set up a virtual network called 'Internal' and REAL1 has a virtual network adapter connected to this virtual network I have connected each of the virtual servers to this network too I have assigned each server static IP addresses on this virtual network (REAL1 has 10.1.1.1, VIRTUAL1 has 10.1.1.2 and VIRTUAL2 has 10.1.1.3) I have installed RRAS and set up a NAT. The external interface is the external NIC, the internal interface is the virtual NIC connected to the internal network I have assigned all the available external IP addresses to the external NIC on REAL1. The virtual servers have been set up appropriately such that their default gateway is pointing to 10.1.1.1 and they can both access externally. Success! The RRAS is routing packets. The problem I have is that when I try to port forward services from the external IP address on REAL1, it only works if there is not already a service bound to the port. Remote desktop 'greedily' binds to every available IP address on port 3389 on REAL1 so I can't selectively forward incoming traffic for 1.2.3.5:3389 to 10.1.1.2:3389. RRAS will allow me to set up this port forwarding, and no errors come up. It just doesn't work. So the question I have is: Is there a better way of doing this? Or at least is there a way of resolving the apparant conflict between RRAS and everything else on the physical server?
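
    One hedged workaround, rather than fighting the listener that already owns 0.0.0.0:3389, is to publish the virtual machines on alternative external ports and forward those with the built-in portproxy helper (the addresses are the ones from the question; the listen ports are assumptions):

      netsh interface portproxy add v4tov4 listenaddress=1.2.3.5 listenport=3390 connectaddress=10.1.1.2 connectport=3389
      netsh interface portproxy add v4tov4 listenaddress=1.2.3.6 listenport=3390 connectaddress=10.1.1.3 connectport=3389
      netsh interface portproxy show all

    Clients would then connect with mstsc /v:1.2.3.5:3390 and mstsc /v:1.2.3.6:3390, leaving port 3389 to the host's own RDP listener.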

    Read the article

  • Installing gnome on Linode with Ubuntu 9.10 x64 - remote VNC/RDP

    - by Kieran Benton
    Hi, I'm a self confessed Linux newbie, having lived and worked mostly within the Windows world for most of my life. I'm making the effort to try moving my virtual host from a Windows box to a Linode instance to try and better learn Linux, and one of the uses I occasionally have with my current Windows VPS is to RDP into it and browse the internet. I'm aware that this is probably not best practice (from either performance or security), and most of the time I will be learning from the shell, but I do occasionally need to boot into a GUI. Because of this, I'd like the ability within my Ubuntu installation on Linode to start/stop Windows X and Gnome at will after SSHing in (startx? gdm?), so I've tried: apt-get install ubuntu-desktop Reboot startx But I've got an error that no amount of googling has helped me with so far, which I'm assuming is something to do with the fact the box is headless and X needs some more configuration that is beyond me at the moment: root@local:~# startx hostname: Unknown host xauth: creating new authority file /root/.Xauthority xauth: creating new authority file /root/.Xauthority xauth: (argv):1: bad display name "local.kieranbenton.com:0" in "list" command xauth: (stdin):1: bad display name "local.kieranbenton.com:0" in "add" command X.Org X Server 1.6.4 Release Date: 2009-9-27 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-23-server x86_64 Ubuntu Current Operating System: Linux local.kieranbenton.com 2.6.31.5-x86_64-linode9 #1 SMP Mon Oct 26 19:35:25 UTC 2009 x86_64 Kernel command line: root=/dev/xvda xencons=tty console=tty1 console=hvc0 nosep nodevfs ramdisk_size=32768 ro Build Date: 26 October 2009 05:19:56PM xorg-server 2:1.6.4-2ubuntu4 (buildd@) Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Dec 2 15:50:23 2009 Primary device is not PCI (==) Using default built-in configuration (21 lines) (EE) open /dev/fb0: No such file or directory (EE) No devices detected. Fatal server error: no screens found Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Can anyone give me any pointers as to how to go from here and get VNC/RDP setup? (RDP would be preferred?). Thanks.
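
    A headless Linode has no framebuffer for startx to drive, so the usual route is a VNC server that provides its own virtual X display; a rough sketch for Ubuntu 9.10 (display number and geometry are arbitrary):

      sudo apt-get install vnc4server
      vncserver :1                  # first run sets a VNC password and creates ~/.vnc/xstartup
      vncserver -kill :1
      # edit ~/.vnc/xstartup so its last line launches GNOME, e.g.:  gnome-session &
      vncserver :1 -geometry 1280x800 -depth 16
      # from the local machine, tunnel the VNC port over SSH and point a viewer at localhost:5901
      ssh -L 5901:localhost:5901 root@local.kieranbenton.com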

    Read the article

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398 The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside of a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same: Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. [OLE/DB provider returned message: Unspecified error] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ]. Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried: Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.) Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference. Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network. Compared all the SQL Server, Windows Server, etc versions to a known working setup and everything appears to be the same.
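
    For context, a linked server of the kind described is typically defined and smoke-tested along these lines (server name, UNC path and table are placeholders, not taken from the post):

      EXEC sp_addlinkedserver
          @server     = N'ACCESS97',
          @provider   = N'Microsoft.Jet.OLEDB.4.0',
          @srvproduct = N'OLE DB Provider for Jet',
          @datasrc    = N'\\xpclient\share\data.mdb'

      -- a failure here surfaces as the same error 7399
      SELECT * FROM OPENQUERY(ACCESS97, 'SELECT TOP 1 * FROM SomeTable')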

    Read the article

  • Hyper-V VSS writer unexpected error

    - by Eric Martin
    I am using Mozy Pro to backup our Hyperv servers. I am doing this without any issues on a 2nd server but this box hasn't backed up sucessfully yet. I was told by the support tech at Mozy to type: vssadmin list providers >c:\providers.txt vssadmin list writers >c:\writers.txt Writers.txt: vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2013 Microsoft Corp. Writer name: 'Task Scheduler Writer' Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124} Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b} State: [1] Stable Last error: No error Writer name: 'VSS Metadata Store Writer' Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06} Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93} State: [1] Stable Last error: No error Writer name: 'Performance Counters Writer' Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2} Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381} State: [1] Stable Last error: No error Writer name: 'System Writer' Writer Id: {e8132975-6f93-4464-a53e-1050253ae220} Writer Instance Id: {506e7d9c-ded3-4edf-824a-4dd9af7f7538} State: [1] Stable Last error: No error Writer name: 'ASR Writer' Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4} Writer Instance Id: {1de438e4-09de-487c-9ea8-eeafbe3fd210} State: [1] Stable Last error: No error Writer name: 'COM+ REGDB Writer' Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f} Writer Instance Id: {511d23d9-4cbb-400f-b739-e6e0a8ecdbee} State: [1] Stable Last error: No error Writer name: 'Microsoft Hyper-V VSS Writer' Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} Writer Instance Id: {32f41185-2b20-41ff-a7aa-92c262f578cd} State: [1] Stable Last error: Unexpected error Writer name: 'Registry Writer' Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485} Writer Instance Id: {fa328ece-623f-43cc-9888-e897e108c40e} State: [1] Stable Last error: No error Writer name: 'Shadow Copy Optimization Writer' Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f} Writer Instance Id: {7b582861-7f7f-4c10-adb1-5106bcab3902} State: [1] Stable Last error: No error Writer name: 'WMI Writer' Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0} Writer Instance Id: {d2f73a0f-c19a-44cc-bcd2-6c84ac6e516b} State: [1] Stable Last error: No error Writer name: 'MSMQ Writer (MSMQ)' Writer Id: {7e47b561-971a-46e6-96b9-696eeaa53b2a} Writer Instance Id: {95ea6efc-c00c-47ca-90d1-28fbe6d7a8d0} State: [1] Stable Last error: No error Writer name: 'IIS Config Writer' Writer Id: {2a40fd15-dfca-4aa8-a654-1f8c654603f6} Writer Instance Id: {d5a32f43-0675-400d-8502-cdece4c867e1} State: [1] Stable Last error: No error Providers.txt: vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2013 Microsoft Corp. Provider name: 'Microsoft File Share Shadow Copy provider' Provider type: Fileshare Provider Id: {89300202-3cec-4981-9171-19f59559e0f2} Version: 1.0.0.1 Provider name: 'Microsoft Software Shadow Copy provider 1.0' Provider type: System Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5} Version: 1.0.0.7 The tech said I needed to resolve this issue: Writer name: 'Microsoft Hyper-V VSS Writer' Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} Writer Instance Id: {32f41185-2b20-41ff-a7aa-92c262f578cd} State: [1] Stable Last error: Unexpected error I checked the event viewer and this is the only thing I found related to hyperv: I don't know where to start to resolve this or to find out where the issue is at. I know nothing of the vss writer for hyperv so any input would be greatly appreciated.
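
    A cheap first step (no guarantee it clears the "Unexpected error", but it is the usual starting point) is to restart the Hyper-V Virtual Machine Management service and re-check the writer before the next backup; running VMs keep running while the service restarts:

      net stop vmms
      net start vmms
      vssadmin list writers | findstr /i /c:"Hyper-V" /c:"Last error"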

    Read the article

  • Can't SSH tunnel to access a remote MySQL server

    - by hobbes3
    I can't seem to figure out why I can't use ssh tunnel to connect to my remote MySQL server. I do ssh tunnel with [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306 Then on another terminal, I try [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p Enter password: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2 On the server, it shows this: root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused Here is my my.cnf on the server: [mysqld] # Settings user and group are ignored when systemd is used (fedora >= 15). # If you need to run mysqld under different user or group, # customize your systemd unit file for mysqld according to the # instructions in http://fedoraproject.org/wiki/Systemd user=mysql datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # Semisynchronous Replication # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html # uncomment next line on MASTER ;plugin-load=rpl_semi_sync_master=semisync_master.so # uncomment next line on SLAVE ;plugin-load=rpl_semi_sync_slave=semisync_slave.so # Others options for Semisynchronous Replication ;rpl_semi_sync_master_enabled=1 ;rpl_semi_sync_master_timeout=10 ;rpl_semi_sync_slave_enabled=1 # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html ;performance_schema [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid [mysqld] port = 3306 socket=/var/lib/mysql/mysql.sock skip-external-locking key_buffer_size = 64M max_allowed_packet = 128M sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M thread_cache = 8 max_connections = 25 query_cache_size = 16M table_open_cache = 1024 table_definition_cache = 1024 tmp_table_size = 32M max_heap_table_size = 32M bind-address = 0.0.0.0 Now sure if this helps but here is the MySQL user list: mysql> select * from mysql.user; +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | Host | User | Password | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | 
+-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | ::1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ 3 rows in set (0.00 sec) I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively. But I'm starting to get confused on what's really going on or if socket vs TCP is even the issue. Thanks in advance and I'm open for any tips and suggestions! Some more info: My MySQL client, running OS X 10.8.4, is mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper My MySQL server, running on CentOS 6.4 32-bit, is mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+--------------------------------------+ | Variable_name | Value | +-------------------------+--------------------------------------+ | innodb_version | 1.1.8 | | protocol_version | 10 | | slave_type_conversions | | | version | 5.5.28 | | version_comment | MySQL Community Server (GPL) by Remi | | version_compile_machine | i686 | | version_compile_os | Linux | +-------------------------+--------------------------------------+ 7 rows in set (0.00 sec)
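
    One thing worth testing (a guess, but it matches the server-side "connect failed" message): the remote end of the tunnel is opened by sshd, and if "localhost" resolves to ::1 there while mysqld only listens on IPv4, the channel open is refused. Pinning both ends to 127.0.0.1 avoids that:

      # on the client: forward local 3307 to 127.0.0.1:3306 as seen from the server
      ssh linode -N -L 3307:127.0.0.1:3306

      # in another terminal
      mysql -h 127.0.0.1 -P 3307 -u root -p --protocol=tcp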

    Read the article

  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is just running a couple of sites with very low traffic. One of the sites is in still development going live soon. However, this site is very very slow...When browsing through its pages I can see that the CPU goes from 30% to 100% for httpd (see top output below). I have tuned httpd & MySQL, Apache Solr, Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit as I have a bunch of messages on the httpd log and have been chasing dead ends for some time...any help is greatly appreciated. Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB Linux 2.6.27 64-bit, Centos 5.5 Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17 Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8 Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr top 17595 apache 20 0 1825m 507m 10m R 100.4 3.2 0:17.50 httpd 17596 apache 20 0 1565m 247m 9936 R 83.1 1.5 0:10.86 httpd 17598 apache 20 0 1430m 110m 6472 S 54.5 0.7 0:08.66 httpd 17599 apache 20 0 1438m 124m 12m S 37.2 0.8 0:11.20 httpd 16197 mysql 20 0 13.0g 2.0g 5440 S 9.6 12.6 297:12.79 mysqld 17617 root 20 0 12748 1172 812 R 0.7 0.0 0:00.88 top 8169 tomcat 20 0 4613m 268m 6056 S 0.3 1.7 6:40.56 java httpd error_log [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [info] mod_fcgid: Process manager 17593 started [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*) [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [notice] child pid 22782 exit signal Segmentation fault (11) [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com [info] APR LDAP: Built with OpenLDAP LDAP SDK [info] LDAP: SSL support available [info] Init: Seeding PRNG with 256 bytes of entropy [info] Init: Generating temporary RSA private keys (512/1024 bits) [info] Init: Generating temporary DH parameters (512/1024 bits) [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [debug] ssl_scache_shmcb.c(623): division_offset = 96 [debug] ssl_scache_shmcb.c(625): division_size = 15997 [debug] ssl_scache_shmcb.c(627): queue_size = 2136 [debug] ssl_scache_shmcb.c(629): index_num = 133 [debug] ssl_scache_shmcb.c(631): index_offset = 8 [debug] ssl_scache_shmcb.c(633): index_size = 16 [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853 [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()
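
    To see what the busy children are actually doing, mod_status with ExtendedStatus plus a quick strace of one hot PID is usually more telling than the error log; a sketch (locations and the PID are illustrative):

      # httpd.conf
      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>

      # then, on the server
      curl http://localhost/server-status
      strace -c -p 17595      # summarise the syscalls of one 100%-CPU child for a few seconds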

    Read the article

  • Mac OS X Server Open Directory does not push Software Update settings to clients

    - by joxl
    I have an Xserve G5 running Mac OS X Server 10.5.8 configured as an Open Directory master. I have also enabled and configured Software Update service on the machine. The SUS is configured to serve Tiger, Leopard and Snow Leopard clients (see http://discussions.apple.com/message.jspa?messageID=10297359#10297359) The clients bound to the OD are a variety of Mac's running OS X 10.4, 10.5 or 10.6. In Workgroup Manager, I have created 3 machine groups for each client OS. Each group is configured with a custom SUS URL, and the managed client computers are members accordingly (see http://discussions.apple.com/thread.jspa?messageID=10493154#10493154) My problem is that the server pushes the SUS settings to some of the client machines, but not all. When I first configured all this stuff on the server (a few weeks ago) I was closely monitoring a few of the client machines to confirm that they received the custom settings. I noticed that some of the clients (10.4/5/6 alike) seemed to get the settings immediately, others didn't show the new settings until after a reboot. As I said, results are mixed across OS's, but some clients will not "sync" at all. My immediate thought was to unbind/rebind the problematic machines. I did this on several client computers with no success. For example, today I was working on one of the Tiger clients. I noticed it was not pointed at my local SUS, so I checked the OD binding; it was fine. Just to be sure I unbound the machine. Next, I checked WM and confirmed the computer record was gone. I noticed the machine group still had a residual (broken?) member from the unbound client; I manually removed this. Finally, I re-bound the client to OD and re-added the machine to it's correct group in WM. Unfortunately, the client still pings apple's SUS for updates. Just to play it safe I rebooted the client, but to no avail, it will not see my local SUS. To confirm that there is nothing wrong with the server, or the client's connection to it, forcefully pointed the machine at my SUS: sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "$LOCAL_SUS_URL" and the machine successfully updated off my local server. Great, successful updates, but problem not solved. I've done exhaustive reading on discussions.apple.com (not saying I read everything, I'm just saying I have read a lot) without a good answer. The discouraging thing is that a lot of OD problems I've read about only result in the sysadmin completely reinstalling the server, or OD, or some other similarly heavy-handed operation. At this point, I am not willing to go that route. I still have hope that I can find the reason for this flaky behavior. If anyone can point me in a helpful direction it would be much appreciated. EDIT: Indeed, some files are being pushed to the client: # from client machine: $ sudo find /Library -type f -name com.apple.SoftwareUpdate.plist /Library/Managed Preferences/com.apple.SoftwareUpdate.plist /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist /Library/Preferences/com.apple.SoftwareUpdate.plist A few weeks ago, prior to my (previously mentioned) modifications, the SUS was still running "stock". Which meant it could not serve SL (10.6) machines. At that time, the Software Update settings were setup in WM under User Groups. This didn't make any sense because some users work on multiple machines with different OS's. Before creating Machine Groups in WM, I deleted all the SU settings from the User Group Preferences. 
This just makes the whole thing more confusing, because when I see a file here: /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist I assume it's still remaining from the "old" settings, because I wouldn't think a Machine Setting belongs there. Despite all the com.apple.SoftwareUpdate.plist hanging around under the Managed Preferences, why does the client machine still call home to Apple and not my SUS? # on client machine: $ date Tue Jan 25 17:01:46 EST 2011 $ softwareupdate --list Software Update Tool Copyright 2002-2005 Apple No new software available. switch terminals... # on server: $ tail -n1 /var/log/swupd/swupd_access_log 10.x.x.x - - [25/Jan/2011:15:54:29 -0500] XXXX POST "/cgi-bin/SoftwareUpdateServerStats" 200 13 ... Notice the date of the client softwareupdate and the latest access to the SUS server; the server never heard a peep from that client.
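
    To see which CatalogURL a client has actually ended up with (managed versus local), comparing the two plists directly is a quick check, and mcxquery, where the client OS ships it, shows the managed preferences the client has computed from Open Directory (the commands are standard; the interpretation is an assumption):

      # on the client
      defaults read "/Library/Managed Preferences/com.apple.SoftwareUpdate" CatalogURL
      defaults read /Library/Preferences/com.apple.SoftwareUpdate CatalogURL
      sudo mcxquery -user someuser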

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't server a large number of users (one serves no more than 6 users, and the other serves around 20 users), they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections, this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs however). Today, when one of the servers crashed, I noticed an entry in the event log, which looked similar to the following: Log Name: Application Source: ESENT Date: 27/11/2012 10:25:55 Event ID: 533 Task Category: General Level: Warning Keywords: Classic User: N/A Computer: HAL-FS-01.example.com Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\ dfsr.db: A request to write to the file "\\.\E:\System Volume Information\ DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. When the server started up again, I went to find the event log entry to investigate further and found that the event log entry was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include server 2012, server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?

    Read the article

  • Using an SSD with no AHCI [ICH7 base] - Windows 7 hangs frequently

    - by h4xnoodle
    I have a Shuttle Intel G31 + ICH7 (base -- not M/R etc) system. I just bought an OCZ Vertex 3 120gb [VTX3-25SAT3-120G] which includes the Sandforce 2218 firmware. The ICH7 does not support AHCI. I understand that this can be a problem. What I don't understand, is if it's necessary to have the proper performance of this drive. I know that without AHCI I may get a limited read/write speed -- this is fine. What my concern is, is the constant freezing/hangs I'm getting with Windows 7 on any disk activity. The 'Highest Active Time' flip-flops from 0 to 100% every minute or so regardless of large or small files. EDIT: The threads/processes with the highest response time is the kernel. I've been reading about other people with Shuttle SG31G2s, and they seem to be using SSDs no problem. Is this the controller's fault? The fact that I do not have AHCI enabled? It makes sense to me that if this SSD requires AHCI features that it would cause Windows to hang, but I would like to fully determine my situation before returning things/reformatting. To initially have my drive recognise the SSD at all, I had to change the BIOS option to Force Gen II instead of Auto for the SATA controller. I then installed Windows with no problem. There were no errors in the event log related to disk usage, but watching the perfmon I could see the highest active time and the processes (usually pagefile.sys being written to, or chrome/firefox caching) which was correlated to the hanging. So now what I need answered is: should I be returning this SSD and getting one with a different controller, or returning the SSD all-together as it will never work out and I will continue to get these hangs. Posts I've read: Windows 7 New SSD SATA AHCI? -- suggests to use AHCI http://forums.anandtech.com/showthread.php?t=2189868 -- Sandforce issues Windows 7 freezes with SSD -- and attached posts Why does my Windows 7 PC / SSD drive keep freezing? -- this is not the controller I have, but still a related issue. Windows 7 hangs after longer inactivity of user -- also tried messing with power settings with no luck. It was already set to 'Never' for turning off HDDs.

    Read the article

  • Apache mod_cache in 2.2 - enable caching based on URL

    - by Janning
    We are using apache2.2 as a front-end server with application servers as reverse proxies behind apache. We are using mod_cache for some images and enabled it like this: <IfModule mod_disk_cache.c> CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache CacheIgnoreCacheControl On CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The image urls vary completely and have no common start pattern, but they all end in ".png". Thats why we used the root in CacheEnable / If not served from the cache, the request is forwarded to an application server via reverse proxy. So far so good, cache is working fine. But I really only need to cache all image request ending in ".png". My above configuration still works as my application server send an appropriate Cache-Control: no-cache header on the way back to apache. So most pages send a no-cache header back and they get not cached at all. My ".png" responses doesn't send a Cache-Control header so apache is only going to cache all urls with ".png". Fine. But when a new request enters apache, apache does not know that only .png requests should be considered, so every request is checking a file on disk (recorded with strace -e trace=file -p pid): [pid 19063] open("/var/cache/apache2/mod_disk_cache/zK/q8/Kd/g6OIv@woJRC_ba_A.header", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) I don't want to have apache going to disk every request, as the majority of requests are not cached at all. And we have up to 10.000 request/s at peak time. Sometimes our read IO wait spikes. It is not getting really slow, but we try to tweak it for better performance. In apache 2.4 you can say: <LocationMatch .png$> CacheEnable disk </LocationMatch> This is not possible in 2.2 and as I see no backports for debian I am not going to upgrade. So I tried to tweak apache2.2 to follow my rules: <IfModule mod_disk_cache.c> SetEnvIf Request_URI "\.png$" image RequestHeader unset Cache-Control RequestHeader append Cache-Control no-cache env=!image CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache #CacheIgnoreCacheControl on CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The idea is to let apache decide to serve request from cache based on Cache-control header (CacheIgnoreCacheControl default to off). And before simply set a RequestHeader based on the request. If it is not an image request, set a Cache-control header, so it should bypass the cache at all. This does not work, I guess because of late processing of RequestHeader directive, see https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early I can't add early processing as "early" keyword can't be used together with a conditional "env=!image" I can't change the url requesting the images and I know there are of course other solutions. But I am only interested in configuring apache2.2 to reach my goal. Does anybody has an idea how to achieve my goal?
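
    If the real pain is the disk I/O of those per-request open() probes rather than the lookups themselves, one blunt mitigation on 2.2 is to keep the cache on tmpfs, so misses are answered from RAM (the size is an assumption, and the cache no longer survives a reboot):

      # /etc/fstab
      tmpfs  /var/cache/apache2/mod_disk_cache  tmpfs  defaults,size=512m  0  0

      mount /var/cache/apache2/mod_disk_cache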

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives on my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS in installed to the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quick and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to try to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable and as long as I can guarantee a 5GB space on the Raptor, I can always just temporarily move the disc image there. So I am figuring that having games installed like Boarderlands and Mass Effect, as well as having large files such as linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to designate the install directory of any program installation to the alternative install directory, which I can't seem to get to ever work right with Steam. With that in mind, is there a way that is not too drastic to let me just change some folders and system settings once and everything works fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda but that would defeat the purpose of the Raptor except for running disc images off of. I am also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported on Windows. An example a friend mentioned was something about how if you have a symlink in Windows on a small hard drive to a large hard drive and the contents the symlink points to is larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without the issues or is there a better solution I am not being told about, not mentioning, or not thinking of?

    Read the article

  • How to block an HTTP website along with all its subdomains using iptables

    - by netnovice
    I run a small HTTP web proxy site. We cannot modify anything in the proxy program itself. A few users mainly use Yahoo web mail for spamming, and we need to block Yahoo web mail access only (blocking the complete Yahoo website would also be OK) through our proxy, especially .mail.yahoo.com. For example, we need to block URLs like http://uk-mg61.mail.yahoo.com and http://in-mg61.mail.yahoo.com, etc. Note: we generally open http://mail.yahoo.com in a browser, but after logging in it forwards to URLs like the ones above, and all of those are subdomains of mail.yahoo.com. My goal: if we can get the IP list for every available subdomain of mail.yahoo.com, I can block it completely. We can only use iptables... I know that in the proxy itself we could check the HTTP Host header for .mail.yahoo.com and block it. Solution: the following is what I did using iptables. I collected as many Yahoo IP CIDR blocks as possible, mainly for Yahoo web mail (mail.yahoo.com), using the Linux host and whois commands [like 66.163.160.0/19 and 98.136.0.0/14, etc.] and applied commands like iptables -A OUTPUT -p tcp -d 66.163.160.0/19 -m state --state NEW -j DROP, etc. Things are working fine; users cannot access Yahoo mail. BUT the problem is that I need to keep the available Yahoo CIDR IP list up to date... I am ready to do that every week. I collected many from the net... There are countless subdomains of mail.yahoo.com, and it seems Yahoo adds new IPs every week. What I observed is that sometimes users can bypass our rule, and the reason, obviously, is that not all the available IPs are entered in iptables yet. What we need to do is enter all the IPs of mail.yahoo.com. But where do I find all the subdomains of mail.yahoo.com? I know we could get them from DNS, but I will most likely not be allowed to make a DNS AXFR query. Also, doing reverse DNS would have performance issues. I want to know all the subdomains of .mail.yahoo.com. Can I get them from the Yahoo site? I have the list of all Yahoo SMTP IPs... but I need the webmail IPs... (http://public.yahoo.com/carloc/ymail.html) Can you please share your ideas? Thank you
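
    Since the proxy's outbound traffic is plain HTTP, another option that avoids chasing Yahoo's CIDR blocks is the iptables string match against the request contents; hosts such as uk-mg61.mail.yahoo.com all contain "mail.yahoo.com", so a single rule covers the subdomains (HTTPS traffic would not be caught, and the match assumes the Host header fits in one packet, which it normally does):

      iptables -I OUTPUT -p tcp --dport 80 -m string --algo bm --string "mail.yahoo.com" -j REJECT --reject-with tcp-reset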

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin i'm currently stuck on is for monitoring bandwidth usage, and is as follows: /etc/munin/plugins/bandwidth2 #!/bin/sh if [ "$1" = "config" ]; then echo 'graph_title Bandwidth Usage 2' echo 'graph_vlabel Bandwidth' echo 'graph_scale no' echo 'graph_category network' echo 'graph_info Bandwidth usage.' echo 'used.label Used' echo 'used.info Bandwidth used so far this month.' echo 'used.type DERIVE' echo 'used.min 0' echo 'remain.label Remaining' echo 'remain.info Bandwidth remaining this month.' echo 'remain.type DERIVE' echo 'remain.min 0' exit 0 fi cat /var/log/zen.log The contents of /var/log/zen.log are: used.value 61.3251953125 remain.value 20.0146484375 And the resulting database is: <!-- Round Robin Database Dump --><rrd> <version> 0003 </version> <step> 300 </step> <!-- Seconds --> <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST --> <ds> <name> 42 </name> <type> DERIVE </type> <minimal_heartbeat> 600 </minimal_heartbeat> <min> 0.0000000000e+00 </min> <max> NaN </max> <!-- PDP Status --> <last_ds> 61.3251953125 </last_ds> <value> NaN </value> <unknown_sec> 5 </unknown_sec> </ds> <!-- Round Robin Archives --> <rra> <cf> AVERAGE </cf> <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds --> <params> <xff> 5.0000000000e-01 </xff> </params> <cdp_prep> <ds> <primary_value> NaN </primary_value> <secondary_value> NaN </secondary_value> <value> NaN </value> <unknown_datapoints> 0 </unknown_datapoints> </ds> </cdp_prep> <database> <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row> <snip> The value for last_ds is correct, it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.
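
    Two things are worth noting here, based on how rrdtool handles these data source types rather than on anything else in the post: COUNTER and DERIVE samples must be integers, so the fractional values in /var/log/zen.log are discarded, and DERIVE graphs the rate of change rather than the value itself, so a "used so far this month" figure is usually a GAUGE anyway. If DERIVE really is wanted, the fetch step needs to emit whole numbers, for example:

      # truncate the samples to integers so rrdtool accepts them
      while read field value; do
          printf '%s %d\n' "$field" "${value%.*}"
      done < /var/log/zen.log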

    Read the article

  • Adaptec 5805 does not start after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not included. It only works if the computer is shut down and turn off. Late i update firmware "Adaptec RAID 5805 Firmware Build 18948" How to fix the problem? add Log Configuration summary Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priorityHigh Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1 List item
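    One way to narrow this down (not part of the original report, just a troubleshooting sketch) is to record at every boot whether the controller answers at all, for example by polling Adaptec's arcconf command-line tool from a startup task. The sketch assumes arcconf from Adaptec Storage Manager is installed and on the PATH, and the log path is a placeholder.
        # Minimal sketch: ask controller 1 for its adapter info at boot and log
        # whether it answered. Assumes the arcconf CLI is available; adjust the
        # controller number and log path for the actual host.
        import datetime
        import subprocess
        import sys

        LOG = r"C:\raid_boot_check.log"  # placeholder location

        def controller_visible():
            try:
                out = subprocess.run(["arcconf", "GETCONFIG", "1", "AD"],
                                     capture_output=True, text=True, timeout=60)
                return out.returncode == 0, out.stdout
            except (OSError, subprocess.TimeoutExpired) as exc:
                return False, str(exc)

        ok, detail = controller_visible()
        with open(LOG, "a") as fh:
            stamp = datetime.datetime.now().isoformat()
            fh.write("%s controller %s\n" % (stamp, "detected" if ok else "MISSING"))
            if not ok:
                fh.write(detail + "\n")
        sys.exit(0 if ok else 1)
    Comparing a log like this across warm reboots and full power-offs would at least confirm whether the problem sits at the firmware/POST level rather than in Windows.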

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are: HA protection from storage failure. Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service packs). Must not involve much in the way of hardware procurement. The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections - SQL Server Failover Clustering uses MSCS, which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications - does client-side connection failover mean that there's a potential 2-second (assuming a 2000 ms TCP timeout policy) pause whenever the connection pool tries the primary server on a connection attempt? I read somewhere that Mirroring can be used on top of MSCS, which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to find documentation or white papers on this approach. But if it's true, then the best method would be Mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
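    On the client-awareness point: with database mirroring the usual approach is to name the mirror up front in the connection string (SqlClient's "Failover Partner=..." keyword; ODBC drivers expose something similar), so the connection logic redirects without application changes. The sketch below is only an illustration in Python/pyodbc rather than the actual ASP.NET code; the server names, driver name and the Failover_Partner keyword are assumptions to check against your provider's documentation.
        # Illustrative sketch of mirroring-aware connections; names are placeholders.
        import pyodbc

        PRIMARY = "sql-primary.example.local"  # placeholder
        MIRROR = "sql-mirror.example.local"    # placeholder

        def connect(db_name):
            conn_str = (
                "DRIVER={SQL Server Native Client 10.0};"
                "SERVER=%s;Failover_Partner=%s;DATABASE=%s;Trusted_Connection=yes;"
                % (PRIMARY, MIRROR, db_name)
            )
            try:
                # a short login timeout keeps the worst-case stall small when the
                # pool probes a dead primary before redirecting to the mirror
                return pyodbc.connect(conn_str, timeout=2)
            except pyodbc.Error:
                # explicit fallback in case the driver does not redirect on its own
                return pyodbc.connect(conn_str.replace(PRIMARY, MIRROR, 1), timeout=2)
    That still leaves the hundreds-of-databases concern open, since each mirrored database maintains its own session between the partners.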

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
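    Two things commonly keep .htaccess files alive after a change like this: an AllowOverride directive hiding in another included file (conf.d, mods-enabled, another vhost), or the edited vhost not being the one actually served. A small script that lists every AllowOverride in the live configuration tree and any stray .htaccess files under the docroot can pinpoint it; the paths below assume the Debian layout shown above.
        # Diagnostic sketch: report every AllowOverride directive under /etc/apache2
        # and any .htaccess files under the document root. Paths are assumptions.
        import os
        import re

        APACHE_DIR = "/etc/apache2"
        DOCROOT = "/var/www"

        pattern = re.compile(r"^\s*AllowOverride\s+(.+)$", re.IGNORECASE)

        for root, _, files in os.walk(APACHE_DIR):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors="replace") as fh:
                        for lineno, line in enumerate(fh, 1):
                            match = pattern.match(line)
                            if match:
                                print("%s:%d: AllowOverride %s"
                                      % (path, lineno, match.group(1).strip()))
                except OSError:
                    pass

        for root, _, files in os.walk(DOCROOT):
            if ".htaccess" in files:
                print("stray .htaccess:", os.path.join(root, ".htaccess"))
    Running apache2ctl -S alongside this shows which virtual host files Apache is actually loading, which helps when the edited default/default-ssl are not the sites being served.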

    Read the article

  • Server load spikes several times a day, load average for the past month is 5 times the load average all year

    - by AMF
    My Munin notifications set up for our (Debian) LAMP cluster have been notifying me continuously that our load on our production machine has been at dangerous levels. While the average load all year typically runs between 2 and 8, the load in the past month and only the past month -- has been skyrocketing to 10, 18, and occasionally even 50-60. The spikes last only 5-10 minutes at a time and occur about every 2-3 hours. The spikes do not effect performance only because I have a script that sends traffic off our server to a mirror CDN when the load goes above 10. I've looked for cron jobs that correlate with this timeframe but there is nothing I can see that would cause this. Site traffic is also normal (we receive about 200K visits per day). I'm also trying to think of anything I've changed around the time this problem began, and I really cannot think of anything. This is probably not much to go on. Maybe there is a clue in the top print-out (below) that I'm not seeing. How do I proceed to find the cause? -- Typical top when the load is NOT spiking: top - 11:13:09 up 472 days, 25 min, 1 user, load average: 6.08, 4.29, 3.80 Tasks: 105 total, 1 running, 104 sleeping, 0 stopped, 0 zombie Cpu(s): 41.2%us, 5.8%sy, 0.0%ni, 49.5%id, 2.7%wa, 0.1%hi, 0.7%si, 0.0%st Mem: 3369592k total, 2166980k used, 1202612k free, 559504k buffers Swap: 2650684k total, 1892k used, 2648792k free, 1129116k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 32046 apache 15 0 36300 12m 9828 S 20 0.4 0:01.97 apache2 32679 apache 15 0 36568 13m 10m S 19 0.4 0:01.69 apache2 31441 apache 15 0 36616 13m 10m S 19 0.4 0:04.13 apache2 31477 apache 15 0 36596 13m 9.8m S 15 0.4 0:01.99 apache2 31993 apache 15 0 36876 16m 12m S 12 0.5 0:02.01 apache2 31782 apache 15 0 36836 14m 10m S 8 0.4 0:02.17 apache2 32198 apache 15 0 36536 13m 10m S 7 0.4 0:01.59 apache2 880 apache 15 0 36508 9708 6236 S 7 0.3 0:00.42 apache2 31945 apache 17 0 36876 16m 13m S 5 0.5 0:03.17 apache2 32197 apache 16 0 36636 10m 7504 S 5 0.3 0:02.70 apache2 32326 apache 15 0 37024 11m 7632 S 5 0.3 0:02.15 apache2 32565 apache 15 0 37280 13m 9.8m S 5 0.4 0:03.75 apache2 32676 apache 15 0 36896 16m 12m S 4 0.5 0:00.95 apache2 32678 apache 15 0 36536 12m 9692 S 4 0.4 0:02.27 apache2 974 apache 16 0 37064 9888 6016 D 4 0.3 0:00.13 apache2 32150 apache 16 0 36832 13m 10m S 3 0.4 0:01.74 apache2 31780 apache 16 0 36848 11m 7660 S 3 0.3 0:02.87 apache2
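    Since the spikes only last 5-10 minutes, a small watchdog that snapshots the process table whenever the one-minute load crosses a threshold will usually catch the culprit without anyone having to be at a terminal. The threshold and log path below are assumptions; run it from cron every minute or loop it under a supervisor.
        # Sketch: dump the top CPU consumers to a log whenever load1 crosses a
        # threshold, so a 5-minute spike leaves evidence behind. Values are
        # placeholders.
        import subprocess
        import time

        THRESHOLD = 10.0
        LOGFILE = "/var/log/load-spike.log"

        with open("/proc/loadavg") as fh:
            load1 = float(fh.read().split()[0])

        if load1 >= THRESHOLD:
            snapshot = subprocess.run(
                ["ps", "-eo", "pid,user,pcpu,pmem,state,etime,args", "--sort=-pcpu"],
                capture_output=True, text=True).stdout
            with open(LOGFILE, "a") as log:
                log.write("=== %s load %.2f ===\n" % (time.strftime("%F %T"), load1))
                log.write("\n".join(snapshot.splitlines()[:25]) + "\n")
    Correlating those snapshots with the Apache access log for the same minutes (slow URLs, crawler bursts, backup windows) is usually enough to tie the spikes to a cause.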

    Read the article

  • where on disk is space allocated for new files inside LVM lv with ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; files have rarely been deleted from it, only incremental writes. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails. Now my question is: How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance / balance load? Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to go spend $1500 on having 250 GB of empty space recovered when I send the defective disk in for repair? Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to address ranges that were on the missing PV, something like that, preferably easy to automate.) I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered a major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
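    On the allocation question: with a plain linear LV and the default allocation policy, extents are handed out in the order the segments appear in the LV's map, and since the LV here was simply extended onto the new disk, the dead PV should hold only the tail of the LV's address space. Whether ext4 ever placed data there can be estimated without mounting: read the segment ranges from lvs -o+seg_pe_ranges (or the copy in /etc/lvm/backup), convert them to filesystem blocks, and test those blocks read-only with debugfs. The numbers in the sketch below are placeholders, and the commands it prints should be double-checked before trusting the result.
        # Back-of-envelope sketch: translate the logical-extent range that lived on
        # the dead PV into an ext4 block range, then print a read-only debugfs
        # command that would test whether any of those blocks are allocated.
        # All inputs are placeholders taken from lvs / the metadata backup.
        EXTENT_SIZE_MIB = 4            # vg extent size (assuming the 4 MiB default)
        DEAD_PV_LE_START = 1_000_000   # first logical extent mapped to the dead PV
        DEAD_PV_LE_COUNT = 60_000      # extents the dead PV contributed
        FS_BLOCK_SIZE = 4096           # from 'dumpe2fs -h' (assumption)

        byte_start = DEAD_PV_LE_START * EXTENT_SIZE_MIB * 1024 * 1024
        byte_len = DEAD_PV_LE_COUNT * EXTENT_SIZE_MIB * 1024 * 1024
        block_start = byte_start // FS_BLOCK_SIZE
        block_count = byte_len // FS_BLOCK_SIZE

        print("suspect ext4 blocks: %d .. %d" % (block_start, block_start + block_count - 1))
        print("debugfs -R 'testb %d %d' /dev/<vg>/<lv>" % (block_start, block_count))
    If testb reports every block in that range as free, the odds of recovering everything without the dead disk look good; if not, the same debugfs session (icheck/ncheck) can at least tell you which files to expect holes in.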

    Read the article

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to it should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing two days in and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, two days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge, no network or server changes occurred after installation. Right after installing (and at least a day in) DNS and DHCP were working just fine. However, later (day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking whether dnsmasq is even running (it is), restarting the service, and restarting the server, all to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server as I'll not be able to connect to it anymore. Is there anything anyone can think of on why DNS and DHCP would fail two days into running perfectly? More info: Running dnsmasq in debug mode. This is all that's displayed, even when running nslookup quackwall. I'm not sure though if nslookup commands should show up in the log. [root@quackwall ~]# /usr/sbin/dnsmasq -dq dnsmasq: started, version 2.49 cachesize 150 dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h dnsmasq: reading /etc/resolv.conf dnsmasq: using nameserver 74.128.17.114#53 dnsmasq: using nameserver 74.128.19.102#53 dnsmasq: read /etc/hosts - 5 addresses dnsmasq-dhcp: read /etc/ethers - 2 addresses On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall) lordquackstar@quackgame:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.0.0.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0 0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0 lordquackstar@quackgame:~$ cat /etc/resolv.conf nameserver 10.0.0.2 domain highwow.lan search highwow.lan
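    While hunting for the root cause, it can help to test the service the way clients do - an actual lookup against dnsmasq rather than a process check - and restart it with a timestamped log entry when the lookup fails. The hostname, service name and use of dig below are assumptions (swap in host or nslookup if dig isn't installed on the ClearOS box).
        # Watchdog sketch: query dnsmasq directly and restart it, with a log entry,
        # when the lookup fails. Names and paths are placeholders.
        import subprocess
        import time

        def dns_alive(name="quackwall", server="127.0.0.1"):
            try:
                out = subprocess.run(
                    ["dig", "+short", "+time=2", "+tries=1", "@" + server, name],
                    capture_output=True, text=True, timeout=5)
                return out.returncode == 0 and out.stdout.strip() != ""
            except (OSError, subprocess.TimeoutExpired):
                return False

        if not dns_alive():
            with open("/var/log/dnsmasq-watchdog.log", "a") as log:
                log.write(time.strftime("%F %T") + " lookup failed, restarting dnsmasq\n")
            subprocess.run(["service", "dnsmasq", "restart"])
    The timestamps matter here: if the failures line up with the 12h DHCP lease time, a log rotation, or a cron window, that narrows the cause down considerably.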

    Read the article

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation. I own a Lenovo Thinkpad Edge E320 with 2nd gen. integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4, and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096×2160, however, according to c't (German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output - dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution. Now what about other laptops? According to the article in c't above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards, apparently, only offer resolutions up to 1900x1200 over HDMI out of the box, but it's possible, when using Linux at least, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OSX? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI. Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible). The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts analog to digital signals. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved? Please correct me if I'm wrong on any points. In summary, what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that supports dual-link DVI-D only, and what tools and adapters can be used to lower the bar?
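    The single-link versus dual-link question is really pixel-clock arithmetic: single-link DVI is capped at a 165 MHz pixel clock, while 2560x1440 at 60 Hz needs roughly 240 MHz even with reduced blanking, which is why these panels want dual-link DVI or DisplayPort. A quick back-of-envelope check (the blanking overhead below is an approximation, not an exact modeline):
        # Rough pixel-clock estimate per mode; ~12% blanking overhead approximates
        # a CVT reduced-blanking timing.
        def pixel_clock_mhz(h, v, refresh_hz, blanking_overhead=0.12):
            return h * v * refresh_hz * (1 + blanking_overhead) / 1e6

        SINGLE_LINK_DVI_MHZ = 165  # per the DVI spec

        for mode in [(1920, 1200, 60), (2560, 1440, 60), (2560, 1600, 60)]:
            clk = pixel_clock_mhz(*mode)
            verdict = ("OK on single link" if clk <= SINGLE_LINK_DVI_MHZ
                       else "needs dual link / DisplayPort")
            print("%dx%d@%dHz ~ %.0f MHz -> %s" % (*mode, clk, verdict))
    The same arithmetic applies to the VGA idea: an adapter would have to redigitise the analogue signal at that rate to feed a dual-link DVI-D panel, which is part of why such converters are uncommon and expensive.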

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: We have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc). As we have students from different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong) and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works but is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers. The question is, what method should we use to implement this? It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest things I have seen to solutions are: something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break; and circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided". We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy P.S. I should add that we experimented with writes to a central database and then reads from a local database, but the response time between the different servers for writing was pretty bad, and it's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned. Edit: I have been thinking about writing my own rudimentary replication script, which would involve something like giving each user a server ID to say which is his "home server", e.g. users in Asia would be marked as having the Hong Kong server as their own server. Then the replication scripts (which would be a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server that the script is running on to all of the other databases in the system. They would also need to pull in new information which has been added to any of the other databases in the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible; however, I wanted to make sure that there is not a correct solution for this already out there, as it seems like it must be a problem that many people have already come across.
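    If some flavour of multi-master or circular replication is used after all, the usual guard against the auto_increment collisions mentioned above is to set auto_increment_increment to the number of servers and give each server a distinct auto_increment_offset. The "home server" cron idea is also workable; below is a very rough sketch of one direction of such a sync - the table names, columns and credentials are placeholders, and conflict handling, deletions and batching are deliberately left out.
        # Very rough sketch of the "home server" cron sync described above, using
        # mysql-connector-python. Hosts, credentials and the users/results schema
        # are placeholders.
        import mysql.connector

        SERVERS = {"eu": "europe.domain.com", "uk": "uk.domain.com", "asia": "asia.domain.com"}
        ME = "uk"  # set differently on each server

        local = mysql.connector.connect(host="localhost", user="sync",
                                        password="secret", database="courses")

        for name, host in SERVERS.items():
            if name == ME:
                continue
            remote = mysql.connector.connect(host=host, user="sync",
                                             password="secret", database="courses")
            cur = local.cursor(dictionary=True)
            # push rows owned by this server that changed since (roughly) the last run
            cur.execute(
                "SELECT r.id, r.user_id, r.score, r.updated_at "
                "FROM results r JOIN users u ON u.id = r.user_id "
                "WHERE u.home_server = %s "
                "AND r.updated_at > NOW() - INTERVAL 20 MINUTE", (ME,))
            rcur = remote.cursor()
            for row in cur.fetchall():
                rcur.execute(
                    "REPLACE INTO results (id, user_id, score, updated_at) "
                    "VALUES (%(id)s, %(user_id)s, %(score)s, %(updated_at)s)", row)
            remote.commit()
            remote.close()
    The 20-minute window overlapping a 15-minute cron interval is just a crude way to avoid missing rows between runs; a real version would track a last-synced timestamp per peer.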

    Read the article
