Search Results



  • Apache httpd Problem

    - by Christopher
    Hey, I am getting intermittent issues with my site. Pages often hang with huge loading times and sometimes fail to load. The httpd error logs contain the following:

        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5871 for worker proxy:reverse
        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5871 for (*)
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5872 for worker proxy:reverse
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5872 for (*)
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5954 for worker proxy:reverse
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized

    The server currently has 800MB of free memory, so the problem is not caused by a lack of RAM. The current number of httpd processes is 11; this increases while the error persists and can rise to 25+. I am running Apache/2.2.3 (CentOS). Any suggestions would be greatly appreciated. Many thanks, Chris.
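
    A plausible first check (an assumption from the symptoms, not a confirmed diagnosis): hangs with a slowly growing process count often mean the MPM is exhausting its worker limit, so requests queue until a slot frees up. A minimal sketch of the relevant prefork knobs in httpd.conf, with illustrative values only:

        # httpd.conf (prefork MPM) -- values are illustrative, tune to the workload
        <IfModule prefork.c>
            StartServers            8
            MinSpareServers         5
            MaxSpareServers        20
            ServerLimit           256
            MaxClients            256     # raise if httpd sits pinned at the limit during a hang
            MaxRequestsPerChild  4000
        </IfModule>
        KeepAlive On
        KeepAliveTimeout 5                # long keepalives tie up whole processes under prefork

    Watching /server-status (mod_status) during a hang would confirm or rule this out.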

  • Windows Terminal Server: occasional memory violation for applications

    - by syneticon-dj
    On a virtualized (ESXi 4.1) Windows Server 2008 SP2 32-bit machine used as a terminal server, I occasionally (approximately 1-3 event log entries a day) see applications fail with an 0xc0000005 error - apparently a memory access violation. The problem seems quite random and is hard to reproduce - applications may run for hours, fail with 0xc0000005, and restart just fine, or throw the access violation at startup and start flawlessly on the second attempt. The names of the executables, modules, and offset addresses vary, although a single executable tends to fail with the same modules and the same memory offsets (like "OUTLOOK.EXE" repeatedly failing on module "olmapi32.dll" at offset "0x00044b7a") - even across multiple users' logons and with several days passing without a single failure in between. The offset addresses do seem to change across reboots, however. Only certain executables seem affected by the problem, although I may simply not be seeing a sufficient number of application runs from the others. I first suspected a problem with the physical machine's RAM, but ruled this out as a rather unlikely cause - the memory is ECC and I've already moved the virtual machine across hosts several times, without any perceptible change. I noticed that DEP is enabled in "OptOut" mode on this machine:

        C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
        DataExecutionPrevention_SupportPolicy
        3

    and tried changing the policy to OptIn via startup options:

        bcdedit.exe /set {current} nx OptIn

    but have yet to see any effect - I would also expect Outlook 12 and Adobe Reader 9 (both affected applications) to play well with DEP. Any other ideas why the apps may be failing?

  • How to create custom content for the nginx 502 error page while keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom-language error page for nginx's 502 error while keeping the original URL in the browser, without success. For example, if I go to xaluan.com/aaa/bbb.html while the backend server is down, nginx should show the 502 error with my custom message at that same URL.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site still shows the default nginx error at domain.com/502.html (the content of the page is not what I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: I then created the same page in my domain's web root, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html - the content is now mine, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT/UPDATE 10/06/2012, for more detail: please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6. The test case: if the Apache httpd service is stopped (service httpd stop) and I open xaluan.com/modules.php?name=News&file=article&sid=123456 in a browser, I should see the 502 error with that same URL in the address bar. What I need is a config so that when Apache fails, nginx shows a custom message telling the user to wait a minute for the service to come back, then refreshes the current page at the same URL (the refresh I can do easily with JavaScript, as long as nginx doesn't change the URL, so the JavaScript can work). Any help would be great - thanks in advance.
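
    For reference, a minimal sketch of the usual pattern (assuming the backend is reached via proxy_pass; paths and upstream address are illustrative). An error_page directive pointing at an internal location does not itself redirect, so a visible jump to /502.html usually means the error response is coming from the backend or a different vhost rather than from this block:

        # sketch only -- upstream address and paths are assumptions
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_intercept_errors on;   # let nginx handle backend 5xx responses itself
        }
        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;  # directory holding the custom-language page
            internal;                    # not directly addressable; the client URL is preserved
        }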

  • How to upgrade Apache 2 from 2.2 to 2.4

    - by Nina
    I was in the process of doing a test upgrade from Apache 2.2 to 2.4.3 on Ubuntu 10.04. I would have upgraded to 12.04 first to see if the upgrade would go more smoothly, but unfortunately I was told that wasn't an option... so I'm stuck on 10.04. Before attempting this, I upgraded APR from 1.3 to 1.4, since Apache lists it as a prerequisite: http://apr.apache.org/download.cgi

    First, remove all traces of the current Apache:

        sudo apt-get --purge remove apache2
        sudo apt-get remove apache2-common apache2-utils apache2.2-bin apache2-common
        sudo apt-get autoremove
        whereis apache2
        sudo rm -Rf /etc/apache2 /usr/lib/apache2 /usr/include/apache2

    Afterwards, install the build dependencies:

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2

    Then build and install Apache 2.4:

        wget http://apache.mirrors.tds.net//httpd/httpd-2.4.3.tar.gz
        tar -xzvf httpd-2.4.3.tar.gz && cd httpd-2.4.3
        sudo ./configure --prefix=/usr/local/apache2 --with-apr=/usr/local/apr --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    After running make install, I ended up with a series of errors that prevented it from installing correctly:

        exports.c:2513: error: redefinition of 'ap_hack_apr_uid_current'
        exports.c:1838: note: previous definition of 'ap_hack_apr_uid_current' was here
        exports.c:2514: error: redefinition of 'ap_hack_apr_uid_name_get'
        exports.c:1839: note: previous definition of 'ap_hack_apr_uid_name_get' was here
        exports.c:2515: error: redefinition of 'ap_hack_apr_uid_get'
        exports.c:1840: note: previous definition of 'ap_hack_apr_uid_get' was here
        exports.c:2516: error: redefinition of 'ap_hack_apr_uid_homepath_get'

    Searching for exports.c only leads me back to the httpd-2.4.3 folder, so I'm not sure what these errors mean... Thanks in advance for any help you have to offer!
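
    One common cause, offered as an assumption rather than a confirmed diagnosis: exports.c is generated during the httpd build, and duplicate ap_hack_apr_* definitions typically appear when two copies of APR/APR-util get picked up at once (e.g. the system headers plus the hand-built 1.4). A hedged workaround is to build the bundled copies from srclib instead of pointing at an external APR (versions and mirror URLs below are illustrative):

        # sketch: unpack APR and APR-util into srclib, then let httpd build them itself
        cd httpd-2.4.3/srclib
        wget http://archive.apache.org/dist/apr/apr-1.4.6.tar.gz
        wget http://archive.apache.org/dist/apr/apr-util-1.4.1.tar.gz
        tar xzf apr-1.4.6.tar.gz      && mv apr-1.4.6 apr
        tar xzf apr-util-1.4.1.tar.gz && mv apr-util-1.4.1 apr-util
        cd .. && make distclean
        sudo ./configure --prefix=/usr/local/apache2 --with-included-apr \
            --enable-mods-shared=all --with-mpm=prefork
        sudo make && sudo make install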

  • MySQL table does not exist

    - by Phanindra
    I am getting the following error in the .err file:

        110803  6:51:26  InnoDB: Error: table `ims`.`temp_discoveryjobdetails` already exists in InnoDB internal
        InnoDB: data dictionary. Have you deleted the .frm file
        InnoDB: and not used DROP TABLE? Have you used DROP DATABASE
        InnoDB: for InnoDB tables in MySQL version <= 3.23.43?
        InnoDB: See the Restrictions section of the InnoDB manual.
        InnoDB: You can drop the orphaned table inside InnoDB by
        InnoDB: creating an InnoDB table with the same name in another
        InnoDB: database and copying the .frm file to the current database.
        InnoDB: Then MySQL thinks the table exists, and DROP TABLE will
        InnoDB: succeed.
        InnoDB: You can look for further help from
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html

    But when I do exactly that - copy the .frm file from another database into this one and drop the table - I get the following error:

        InnoDB: Error: trying to load index PRIMARY for table ims/temp_discoveryjobdetails
        InnoDB: but the index tree has been freed!
        110803  6:50:26  InnoDB: Error: table `ims`.`temp_discoveryjobdetails` does not exist in the InnoDB internal
        InnoDB: data dictionary though MySQL is trying to drop it.
        InnoDB: Have you copied the .frm file of the table to the
        InnoDB: MySQL database directory from another database?
        InnoDB: You can look for further help from
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html

    Please, can anyone help me out of this? Can anyone also tell me why this error occurs? EDIT: The issue occurs only when the disk is full and we use TRUNCATE TABLE. It also occurs only in version 5.1, not in 5.0.

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular forward time drift of several minutes a day (approx. 7). Linux kernel: 2.6.18-164.11.1.el5. GNU/Linux distro: CentOS release 5.4 (Final). We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in." Because I am wary of manipulating GRUB - mostly, I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of the options matters. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to an easy-to-follow resource, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's fine - well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to add the suggested options, so I can confidently restart my VPS after changing the GRUB config.
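
    For what it's worth, a sketch of the edit (this is standard GRUB-legacy behavior: kernel options are appended to the end of the kernel line of the relevant boot entry, their relative order does not generally matter, and the indentation under a title line is purely cosmetic):

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                # the four options are appended to the end of the existing kernel line:
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img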

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but failing that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition, some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of RARs and ZIPs which it is necessary to search inside.
    - Permissions support, allowing user-based control that reflects the current permissions on the Samba shares. The user base will remain fairly static, so manual management of users is fine.
    - The majority of users are Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

  • PHP 5.3 on IIS gives 404 error in CGI mode

    - by reinier
    Slowly losing my mind here. I had PHP 5.2 working fine (ISAPI) under IIS, but for a particular extension I needed 5.3. So, no worries, I installed it - but it turns out the ISAPI module is no longer supplied. I followed the install tutorials for FastCGI and ended up with a 500 internal server error for every PHP page served. So my current situation is: I have FastCGI removed. In my websites I have added a script mapping for PHP (HEAD, GET, POST) routed to c:\php\php-cgi.exe. Result: every PHP page I try (even ones containing just text) gives a 404 Not Found error, while any HTML file I put in the same folder is served without a hitch. Who can help me, please? How hard can something like this be, right? For me, apparently, very hard. Extra information:

    - I ran the installer as suggested below and set it to use FastCGI. My fcgiext.ini file now looks like this:

        [types]
        php=c:\php\php-cgi.exe
        [c:\php\php-cgi.exe]
        exepath=c:\php\php-cgi.exe

    - From the command line, a three-line PHP file with just phpinfo(); works fine.
    - From the server, the same PHP file with just phpinfo(); results in the internal server 500 error.
    - From the server, a PHP file with just text works fine.
    - Changing the document types in the IIS management console to point the PHP extension directly to c:\php\php-cgi.exe results in a 404 for every PHP file.
    - The php.ini is the php.ini-production file that came with the distribution; no edits were made.
    - Setting the IIS PHP handler directly to c:\php\php-cgi.exe (not via FastCGI) results in the following: a PHP page with only text displays fine, while a page with only phpinfo(); results in 404 Not Found.

  • CentOS iptables configuration for external firewall

    - by user137974
    Current setup: a CentOS box that is a web, mail (Postfix/Dovecot), and FTP server, and the gateway, with a public IP and a private IP (for the LAN). We are planning to implement an external firewall box and bring the server onto the LAN. Please advise on configuring iptables: at the moment we are unable to receive mail, and outgoing mail stays in the Postfix queue and is only sent after a delay. The local IP of the server is 192.168.1.220.

        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

    Incoming HTTP:

        iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

    Outgoing HTTP:

        iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    FTP:

        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 1024:65535 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    SMTP:

        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 25 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 1024:65535 -d 0/0 --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp -s 0/0 --sport 25 -d 192.168.1.220 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    POP3:

        iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.1.220 --dport 110 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -s 192.168.1.220 --sport 110 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
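
    As an aside (a sketch, not a drop-in replacement for the ruleset above): with the state module, the per-service ESTABLISHED rules can usually be collapsed into one stateful pair, which makes rulesets like this much easier to audit:

        # sketch: assumes eth0 is the public interface
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        # allow all reply traffic in both directions
        iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # then open only the service ports for NEW connections
        for p in 21 25 80 110 443; do
            iptables -A INPUT -i eth0 -p tcp --dport $p -m state --state NEW -j ACCEPT
        done
        # outbound SMTP and DNS, both needed for mail delivery
        iptables -A OUTPUT -o eth0 -p tcp --dport 25 -m state --state NEW -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --dport 53 -m state --state NEW -j ACCEPT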

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N. Firmware: FTOS 8.4.2.6. Problem: we're trying to PXE boot some servers that are connected via port-channel interfaces with LACP. Current workaround: we PXE boot each server with a single interface (eth0), then use a Perl script to bring up the port-channel interfaces after the server is built. Details: is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S-series, or on a larger chassis-based Force10? I'm wondering whether a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port-channel is in. I will confirm this next (I think it's the only thing I haven't tried). Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/. Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus. Update/edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution (I'm guessing that would only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL; I'm basing that on the fact that the Open Automation type features are only supported on the newer switches).

  • E-mail hosting provider that can set up aliases with wildcards

    - by Richard Downer
    I am looking for an e-mail hosting provider that allows e-mail aliases containing wildcards. In more detail: I own my own domain, and I want an e-mail hosting provider to manage e-mail for it. Now, to help deal with spam, I often give different e-mail addresses to different organisations. These addresses always start with the same prefix but then differ. So, for example, I might give out these e-mail addresses:

        [email protected]
        [email protected]
        [email protected]

    I want to be able to go to the e-mail provider's control panel and set up e-mail aliases like this:

        [email protected]   -- bounce/discard (because this address has been sold to spammers)
        joe-*@sample.com  -- redirect to [email protected]

    What I don't want is to have to set up every single e-mail address individually (because I make them up whenever I need them), nor do I want a general catch-all for any unrecognised address in my domain (because I don't want to be carpet-bombed with spam when a spammer runs a dictionary attack against my domain name). Although this seems like a useful feature to have, it appears to be little known, and I've not seen anybody advertise it. My current hosting provider offers it, but I want to move away from them, so I need another provider that will continue to work with all the e-mail addresses I've been using for years. Alternatively, I could use mail server software that runs on Windows - I have seen some commercial packages offering this feature, but they cost more than I can afford - are there any suggestions for low-cost software packages?
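
    In case self-hosting ever becomes the fallback, this pattern is straightforward in Postfix via regexp alias maps (a sketch only: the file paths are standard, but the patterns and destination address are hypothetical, and the devnull discard alias must be defined separately in /etc/aliases):

        # /etc/postfix/main.cf
        virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

        # /etc/postfix/virtual_regexp -- first match wins, so burned addresses go first
        /^joe-spammedcompany@sample\.com$/    devnull           # devnull: /dev/null in /etc/aliases
        /^joe-.+@sample\.com$/                joe@example.org   # everything else with the prefix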

  • Multiple VLANs with access to a shared PBX system

    - by Matt
    I'm new to networking and am looking for some assistance. First off, I'm using Packet Tracer to diagram my scenario, as I will be receiving my equipment next week to deploy. Hardware to be used: two Catalyst 3560 switches, which both connect to a SonicWall router. I have two companies that work in the same office space. I need to keep these companies separated on their own VLANs; they will, however, need to share the phone system. (Packet Tracer file uploaded so those who have the time can see what I put together: http://dl.dropbox.com/u/86234623/network%20build.pkt)

    Here is my current test scenario. On switch 0 I have:

    - company A on VLAN 2: computers 172.16.1.100 and .101, 255.255.0.0, on Fa0/10 and Fa0/11
    - company B on VLAN 3: computer 172.16.2.102, 255.255.0.0, on Fa0/12
    - the PBX on a trunk port: 172.16.0.5, 255.255.0.0, on Fa0/5
    - a trunk port on Fa0/1 to connect the switches

    On switch 1 I have:

    - company A on VLAN 2: computer 172.16.1.102, 255.255.0.0
    - company B on VLAN 3: computers 172.16.2.100 and .101, 255.255.0.0
    - a trunk port on Fa0/1 to connect the switches

    I can ping the respective computers on the same VLAN but can't ping from company A to company B, which is what I want. However, neither company can talk to (ping) the PBX. Here are the commands I used to configure what I have.

    Switch 0:

        en
        conf t
        vlan 2
        name A
        vlan 3
        name B
        int fa0/10
        switchport mode access
        switchport access vlan 2
        int fa0/11
        switchport mode access
        switchport access vlan 2
        int fa0/12
        switchport mode access
        switchport access vlan 3
        int fa0/5
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3
        int fa0/1    ! (to connect the switches)
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3

    Switch 1:

        en
        conf t
        vlan 2
        name A
        vlan 3
        name B
        int fa0/10
        switchport mode access
        switchport access vlan 3
        int fa0/11
        switchport mode access
        switchport access vlan 3
        int fa0/12
        switchport mode access
        switchport access vlan 2
        int fa0/1    ! (to connect the switches)
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3

  • Parsing the output of "uptime" with bash

    - by Keek
    I would like to save the output of the uptime command into a CSV file from a Bash script. Since the uptime command has different output formats depending on the time since the last reboot, I came up with a pretty heavy solution based on case, but there is surely a more elegant way of doing this.

    uptime output:

        8:58AM  up 15:12, 1 user, load averages: 0.01, 0.02, 0.00

    desired result:

        15:12,1 user,0.00 0.02 0.00,

    current code:

        case "`uptime | wc -w | awk '{print $1}'`" in    # count the number of words in the uptime output
        10) # e.g.: 8:16PM  up  2:30, 1 user, load averages: 0.09, 0.05, 0.02
            echo -n `uptime | awk '{ print $3 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $8,$9,$10 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        12) # e.g.: 1:41pm  up 105 days, 21:46, 2 users, load average: 0.28, 0.28, 0.27
            echo -n `uptime | awk '{ print $3,$4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $6,$7 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $10,$11,$12 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        13) # e.g.: 12:55pm  up 105 days, 21 hrs, 2 users, load average: 0.26, 0.26, 0.26
            echo -n `uptime | awk '{ print $3,$4,$5,$6 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $7,$8 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $11,$12,$13 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        esac
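
    For comparison, a more compact approach (a sketch, not tested against every uptime variant): instead of counting words, anchor on the commas and the "load average" label with a single sed expression, which handles both the hh:mm and "N days" formats:

        # prints "<uptime>,<N users>,<l1 l2 l3>," regardless of how the uptime field is spelled
        uptime | sed -E 's/.* up +(.+), +([0-9]+ users?), +load averages?: +([0-9.]+), +([0-9.]+), +([0-9.]+).*/\1,\2,\3 \4 \5,/'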

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array. I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me to the cache failure; however, the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

            NAME                       STATE     READ WRITE CKSUM
            vol1                       ONLINE       0     0     0
              mirror-0                 ONLINE       0     0     0
                c8t5000C50031B94409d0  ONLINE       0     0     0
                c9t5000C50031BBFE25d0  ONLINE       0     0     0
              mirror-1                 ONLINE       0     0     0
                c10t5000C50031D158FDd0 ONLINE       0     0     0
                c11t5000C5002C823045d0 ONLINE       0     0     0
              mirror-2                 ONLINE       0     0     0
                c12t5000C50031D91AD1d0 ONLINE       0     0     0
                c2t5000C50031D911B9d0  ONLINE       0     0     0
              mirror-3                 ONLINE       0     0     0
                c13t5000C50031BC293Dd0 ONLINE       0     0     0
                c14t5000C50031BD208Dd0 ONLINE       0     0     0
              mirror-4                 ONLINE       0     0     0
                c15t5000C50031BBF6F5d0 ONLINE       0     0     0
                c16t5000C50031D8CFADd0 ONLINE       0     0     0
              mirror-5                 ONLINE       0     0     0
                c17t5000C50031BC0E01d0 ONLINE       0     0     0
                c18t5000C5002C7CCE41d0 ONLINE       0     0     0
            logs
              c19t0d0                  ONLINE       0     0     0
            cache
              c6t5001517959467B45d0    FAULTED      2   542     0  too many errors
            spares
              c7t5000C50031CB43D9d0    AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID the L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well). Edit: what's the current best-choice SSD for L2ARC cache applications these days?
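
    One operational note (standard ZFS behavior; verify the exact syntax against your NexentaStor release): L2ARC cache devices are not pool-critical and can be removed online, which should avoid a full reboot the next time one faults:

        # identify the faulted device, then detach it from the pool live
        zpool status vol1
        zpool remove vol1 c6t5001517959467B45d0    # cache devices may be removed while the pool is imported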

  • Computer specs for a large database

    - by SpeksETC
    What sort of computer specs (CPU, RAM, disk speed) should I use for running queries on a database of 200+ million records? The queries are for a research project, so there is only one "user" and only one query running at a time. I tried it on my own laptop (SQL Server, i3 processor, 2GB RAM, 5400 RPM disk) and a simple query didn't finish even after 8+ hours. I have the option of connecting an SSD via eSATA and upgrading to 4GB RAM, but I'm not sure that will be enough... Thanks! Edit: the database is about 25GB and the indexes are not set up properly. When I tried to add an index, I let it run for about 8 hours and it still hadn't finished, so I gave up. Should I have had more patience :)? In general, the queries will run once in a while, and it's OK even if each takes a couple of hours to complete... Also, the queries will produce roughly 10 million records, which I need to process using Stata/MATLAB, and I'm concerned that my current laptop is not strong enough - but I'm unsure where the bottleneck is...
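
    Since the post mentions that the indexes are not set up properly: on a 200-million-row table, a query without a supporting index forces a full table scan, which by itself can explain 8-hour runtimes on a 5400 RPM disk. A hypothetical T-SQL sketch (table and column names are invented for illustration):

        -- build the index once; expect this to take a while on 200M rows
        CREATE INDEX IX_trades_symbol_date
            ON dbo.trades (symbol, trade_date)
            INCLUDE (price);   -- covering column so the query never touches the base table

        -- the query can then seek into the index instead of scanning everything
        SELECT trade_date, price
        FROM dbo.trades
        WHERE symbol = 'MSFT'
          AND trade_date >= '2010-01-01';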

  • VRF Internet gateway with multiple external IPs and one internal IP, to AWS

    - by user223903
    Trying to set up VRF for the first time, and it's not working for me even though I keep reading everything online. (The IPs here differ from real life.) I have an Internet connection, and in the current setup below I can ping my router at 195.45.73.22. I have a block of IP addresses, 195.45.121.0/27. I want to set up multiple VPNs to AWS, so I need multiple external IPs - hence the block of addresses. I have set up the 2nd and 3rd IP addresses but cannot ping them from outside. Any help would be gratefully received. Bryan

        ip source-route
        !
        ip vrf Internet
         rd 1:1
         route-target export 1:1
         route-target import 1:1
        ip vrf AWSSydney1
         rd 2:2
         route-target export 2:2
         route-target import 2:2
         route-target import 1:1
        ip vrf AWSSydney2
         rd 3:3
         route-target export 3:3
         route-target import 3:3
         route-target import 1:1
        ip cef
        no ip domain lookup
        no ipv6 cef
        multilink bundle-name authenticated
        !
        interface FastEthernet0/0
         description Vocus Internet
         no ip address
         speed 100
         full-duplex
        !
        interface FastEthernet0/0.1
         encapsulation dot1Q 1 native
         ip address 195.45.73.22 255.255.255.252
        !
        interface FastEthernet0/0.2
         encapsulation dot1Q 2
         ip vrf forwarding AWSSydney1
         ip address 195.45.121.1 255.255.255.224
        !
        interface FastEthernet0/0.3
         encapsulation dot1Q 3
         ip vrf forwarding AWSSydney2
         ip address 195.45.121.2 255.255.255.224
        !
        interface FastEthernet0/1
         description LAN_SIDE
         ip address 10.0.0.5 255.255.255.0
         speed 100
         full-duplex
         no mop enabled
        !
        ip forward-protocol nd
        ip route 0.0.0.0 0.0.0.0 195.45.73.21
        ip route vrf Internet 0.0.0.0 0.0.0.0 195.45.73.21

  • Can't successfully run SharePoint Foundation 2010 first-time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell, because I would like to set the config and admin database names. The GUI wizard doesn't give you all possible configuration options (and even then it doesn't quite do the job). I run this command (broken into separate lines here to make it easier to read; in the shell it is all one line):

        New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config"
            -DatabaseServer "developer.mydomain.pri"
            -AdministrationContentDatabaseName "Sharepoint2010Admin"
            -DatabaseCredentials (Get-Credential)
            -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)

    When I run this command I get this error:

        New-SPConfigurationDatabase : Cannot connect to database master at SQL server at developer.mydomain.pri.
        The database might not exist, or the current user does not have permission to connect to it.
        At line:1 char:28
        + New-SPConfigurationDatabase <<<<  -DatabaseName "Sharepoint2010Config" -DatabaseServer
        "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials
        (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)
            + CategoryInfo          : InvalidData: (Microsoft.Share...urationDatabase:SPCmdletNewSPConfigurationDatabase)
              [New-SPConfigurationDatabase], SPException
            + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPConfigurationDatabase

    I created two domain accounts and haven't added them to any group: SPF_DATABASE (database account) and SPF_ADMIN (farm account). I'm running the PowerShell console as the domain administrator. I've tried running SQL Management Studio as the domain admin and creating a dummy database, and that worked without a problem. I'm running:

    - Windows 7 x64 on the machine where SharePoint Foundation 2010 is to be installed, with SQL Server 2008 R2 preinstalled
    - Windows Server 2008 R2 Server Core as my domain controller, which serves domain features and nothing else

    I've installed SharePoint according to the MS guide (http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx), installing all additional patches related to my configuration. Any ideas what I should do to make it work?
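
    For what it's worth, this error most commonly indicates missing SQL Server instance-level rights rather than an actual connectivity failure (an assumption from the message alone): the account running the cmdlet needs the dbcreator and securityadmin server roles. A sketch of the grant, using the accounts named above (the MYDOMAIN prefix is a placeholder for the real NetBIOS domain name):

        -- run in SQL Server Management Studio against developer.mydomain.pri
        EXEC sp_addsrvrolemember 'MYDOMAIN\SPF_ADMIN', 'dbcreator';
        EXEC sp_addsrvrolemember 'MYDOMAIN\SPF_ADMIN', 'securityadmin';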

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using an Elastic IP for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an Elastic IP to each host (let's say 55.55.55.55) and then creating an associated A record. For example, to create "ec2-corp01.mydomain.com" I might do:

        ec2-corp01.mydomain.com.    A    55.55.55.55    300

    Then on that EC2 instance I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work I need one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to:

    - create a script that queries the internal EC2 tools to determine an instance's hostname, and
    - on instance boot, call that script, then use the command-line Route 53 interface to find and update that record to the instance's current hostname (as sketched below).

    Since the record will have a relatively low TTL (let's say 300, or 5 minutes, as above) it should take effect pretty quickly. Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
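
    On the record-type question: a CNAME pointing at the instance's public DNS name is the usual choice, since that name resolves to the private address from inside EC2 and the public address from outside. A boot-time sketch using the modern aws CLI (which postdates the tooling mentioned above - adapt to whatever Route 53 command-line interface is in use; the zone ID and record name are placeholders):

        #!/bin/sh
        # sketch: upsert this instance's record at boot
        HOST=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
        aws route53 change-resource-record-sets \
            --hosted-zone-id Z1EXAMPLE \
            --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{
                \"Name\":\"ec2-corp01.mydomain.com.\",\"Type\":\"CNAME\",\"TTL\":300,
                \"ResourceRecords\":[{\"Value\":\"$HOST\"}]}}]}"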

  • DRBD on a disk with an existing file system that takes up all the space

    - by Karolis T.
    I'm currently trying to simulate the environment via Xen. I have installed two Debian systems with this filesystem layout:

        cltest1:/etc# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda2            6.0G  417M  5.2G   8% /
        tmpfs                 257M     0  257M   0% /lib/init/rw
        udev                   10M   16K   10M   1% /dev
        tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

        global { minor-count 1; }
        resource mysql {
            protocol C;
            syncer { rate 10M; }    # 10 megabytes
            on cltest1 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.186:7789;
                meta-disk internal;
            }
            on cltest2 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.187:7789;
                meta-disk internal;
            }
        }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

        Starting DRBD resources: [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
        [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

        cltest1:/etc# drbdadm create-md mysql
        md_offset 6442446848
        al_offset 6442414080
        bm_offset 6442217472

        Found ext3 filesystem which uses 6291456 kB
        current configuration leaves usable 6291228 kB

        Device size would be truncated, which
        would corrupt data and result in
        'access beyond end of device' errors.
        You need to either
           * use external meta data (recommended)
           * shrink that filesystem first
           * zero out the device (destroy the filesystem)
        Operation refused.

        Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
        drbdadm aborting

    As I understand it, all of my problems stem from having no unallocated disk space on xvda2. What are my options besides shrinking the filesystem or connecting a separate physical disk? Can't the metadata be stored in a file on the local filesystem?
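
    On the last question: DRBD metadata cannot live in a plain file directly, but it can live on any other block device ("external metadata"), and a file can be pressed into service indirectly by wrapping it in a loop device (losetup). A sketch of the per-host change, assuming a small extra volume /dev/xvdb1 attached to each guest (a hypothetical device name; the bracketed index form follows the DRBD 8 documentation):

        on cltest1 {
            device    /dev/drbd0;
            disk      /dev/xvda2;
            address   192.168.1.186:7789;
            meta-disk /dev/xvdb1 [0];   # external metadata, index 0 on the extra disk
        }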

  • Apache 2.2, worker mpm, mod_fcgid and PHP: Can't apply process slot

    - by mopoke
    We're having an issue on an Apache server where every 15 to 20 minutes it stops serving PHP requests entirely. On occasion it will return a 503 error; other times it will recover enough to serve the page, but only after a delay of a minute or more. Static content is still served during that time. The log file contains errors along the lines of:

        [Wed Sep 28 10:45:39 2011] [warn] mod_fcgid: can't apply process slot for /xxx/ajaxfolder/ajax_features.php
        [Wed Sep 28 10:45:41 2011] [warn] mod_fcgid: can't apply process slot for /xxx/statics/poll/index.php
        [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
        [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php

    There is RAM free and, indeed, it seems that more PHP processes get spawned. /server-status shows lots of threads in the "W" state as well as some FastCGI processes in the "Exiting (communication error)" state. I rebuilt mod_fcgid from source, as the packaged version was quite old; it is now the current stable version (2.3.6).

    FCGI config:

        FcgidBusyScanInterval 30
        FcgidBusyTimeout 60
        FcgidIdleScanInterval 30
        FcgidIdleTimeout 45
        FcgidIOTimeout 60
        FcgidConnectTimeout 20
        FcgidMaxProcesses 100
        FcgidMaxRequestsPerProcess 500
        FcgidOutputBufferSize 1048576

    System info:

        Linux xxx.com 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.04
        DISTRIB_CODENAME=jaunty
        DISTRIB_DESCRIPTION="Ubuntu 9.04"

    Apache info:

        Server version: Apache/2.2.11 (Ubuntu)
        Server built:   Aug 16 2010 17:45:55
        Server's Module Magic Number: 20051115:21
        Server loaded:  APR 1.2.12, APR-Util 1.2.12
        Compiled using: APR 1.2.12, APR-Util 1.2.12
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/worker"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT=""
         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    Apache modules loaded: alias.load, auth_basic.load, authn_file.load, authz_default.load, authz_groupfile.load, authz_host.load, authz_user.load, autoindex.load, cgi.load, deflate.load, dir.load, env.load, expires.load, fcgid.load, headers.load, include.load, mime.load, negotiation.load, rewrite.load, setenvif.load, ssl.load, status.load, suexec.load

    PHP info:

        PHP 5.2.6-3ubuntu4.6 with Suhosin-Patch 0.9.6.2 (cli) (built: Sep 16 2010 19:51:25)
        Copyright (c) 1997-2008 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status:   Connecting to 1.1.1.1:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command:  USER username
        Response: 331 Password required for username.
        Command:  PASS ***************
        Response: 230 User username logged in.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command:  LIST
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    and I get this from my /var/log/secure file:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers.
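
    The trace stalling right after PASV/LIST is the classic signature of passive-mode data ports being blocked by a firewall or NAT in front of the server (an assumption from the log, not a verified diagnosis). A hedged ProFTPD sketch - the directives are standard, the values illustrative:

        # /etc/proftpd/proftpd.conf
        PassivePorts 49152 65534       # pin the passive data-port range
        MasqueradeAddress 1.1.1.1      # public IP, needed if the server sits behind NAT
        # then open the same range in the firewall, e.g.:
        # iptables -A INPUT -p tcp --dport 49152:65534 -j ACCEPT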

  • Unable to install PEM/pkcs12 created by gnutls to Cisco ASA

    - by ACiD GRiM
    I've been pulling some hair out trying to figure out why Cisco devices don't like my certificates. My primary need is to get a trustpoint set up with CA, certificate, and key on the ASA for VPN systems; however, I'm having the same issues on my IOS devices. I created a pkcs12 with OpenSSL a few months ago that imported with no issues, but now that I'm getting ready to move this lab to production I'm using GnuTLS certtool, as I found it properly adds the alt_dns and ip_address fields to the certificate (which cost me a few more hairs trying to get working with OpenSSL's ca tool). I'm including the current test certs below - don't worry, I'm not using these in production ;) The maddening thing is that after I began to suspect GnuTLS was generating certs incorrectly, I tried making a pkcs12 for a print server and it imported with no issues. Here's my command flow for creating these certs:

        certtool --generate-privkey --disable-quick-random --outfile nn-ca.key
        certtool --generate-self-signed --load-privkey nn-ca.key --outfile nn-ca.crt
        certtool --generate-privkey --disable-quick-random --outfile nn-g0.key
        certtool --generate-certificate --load-privkey nn-g0.key --outfile nn-g0.crt --load-ca-privkey nn-ca.key --load-ca-certificate nn-ca.crt
        openssl pkcs12 -export -certfile nn-ca.crt -in nn-g0.crt -inkey nn-g0.key -out nn-g0.p12
        openssl enc -base64 -in nn-g0.p12 -out nn-g0.base64.p12

    The password for the attached pkcs12 is "ciscohelp" (without quotes). Thanks for any help. TestCerts

  • RAID-1 with Western Digital Green AARS drives, cloning, and the WD Align utility

    - by Jaguar
    Hello all, my current setup runs on top of 2x Western Digital 2500KS drives in RAID-1, using the motherboard's 780G RAID controller, on Windows XP. Everything is fine, but the drives are a bit noisy. I am considering buying 2x WD6400AARS disks, which are the 640GB, slower "green" drives that also feature Advanced Format 4KB sectors. This means that under Windows XP the partition has to be aligned to work properly; otherwise there is a performance penalty. There are two questions here:

    - The green drives from WD are all slower and are (according to WD) susceptible to drop-outs from the controller. Does anyone have any experience in this matter? Is there a possibility the controller will drop a drive, and if so, can I do anything about it?
    - Secondly, Western Digital provides a utility to perform the alignment on the partition. The thing is, will the utility see the drives in question, given that the operating system only sees one logical disk?

    I will be making the transition using a cloning tool (most probably Norton Ghost) unless I find no solution or clear answer, in which case I'll just buy a Win 7 license and do a clean install... Thanks in advance.

  • Multiple PHP versions on a single Apache installation

    - by getmizanur
    Hello, I have some old PHP scripts which run on PHP 5.2.x, while the current server has PHP 5.3.x. To get around this problem I had two options: downgrade to PHP 5.2.x, or install PHP 5.2.x and PHP 5.3.x side by side, with PHP 5.2.x serving the scripts as CGI. I decided to go for the second option. I followed a phpfarm-based tutorial and can get most of it working; however, I cannot get Apache to execute the shell script that selects the php-cgi version. How do I get Apache to execute the following?

        #!/bin/sh
        # you can change the PHP version here.
        version="5.2.6"
        # php.ini file location, */php-5.2.6/lib equals */php-5.2.6/lib/php.ini.
        PHPRC=/etc/php/phpfarm/inst/php-${version}/lib/php.ini
        export PHPRC
        PHP_FCGI_CHILDREN=3
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_MAX_REQUESTS
        # which php-cgi binary to execute
        exec /etc/php/phpfarm/inst/php-${version}/bin/php-cgi

    My Apache vhost.conf:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
            </Directory>
        </VirtualHost>

    Can someone tell me what I am doing wrong? Thanks in advance.

    Solution: when I ran a2dismod php5, the above configuration worked. With php5 enabled (a2enmod php5), Apache executed PHP 5.3 instead of PHP 5.2, even after being told to execute the PHP 5.2 shell script. To solve the problem, I had to change my virtual host configuration:

        <VirtualHost *:80>
            ServerName 526.localhost
            DocumentRoot /home/getmizanur/public_html/www
            DirectoryIndex index.php
            <Directory "/home/getmizanur/public_html/www">
                AddHandler php-cgi .php
                Action php-cgi /php-fcgi/php-cgi-5.2.6
                <FilesMatch "\.php">
                    SetHandler php-cgi
                </FilesMatch>
            </Directory>
        </VirtualHost>

    Presto, it started working.

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments I have used Nginx to serve static assets such as JS/CSS and to reverse-proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and have found many articles detailing this limitation; most of them suggest using Varnish to direct the WebSockets traffic directly to Node, bypassing Nginx. This is my current setup:

    - Varnish, port 80: routes HTTP requests to Nginx and WebSockets directly to Node
    - Nginx, port 8080: serves static assets like CSS/JS
    - Node.js/Express, port 3000: serves the app over HTTP + WebSockets

    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires stunnel or some other solution, and it's also not load-balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse-proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
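
    For reference, the usual Varnish-side routing looks something like this (a Varnish 3-era VCL sketch; the backend names and addresses are made up to match the ports above):

        # route WebSocket upgrades straight to Node, everything else to Nginx
        backend nginx { .host = "127.0.0.1"; .port = "8080"; }
        backend node  { .host = "127.0.0.1"; .port = "3000"; }

        sub vcl_recv {
            if (req.http.Upgrade ~ "(?i)websocket") {
                set req.backend = node;
                return (pipe);           # hand the raw TCP stream through to Node
            }
            set req.backend = nginx;
        }

        sub vcl_pipe {
            # carry the upgrade headers through on the piped connection
            if (req.http.upgrade) {
                set bereq.http.upgrade = req.http.upgrade;
                set bereq.http.connection = req.http.connection;
            }
        }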
