Search Results

Search found 15646 results on 626 pages for 'port 80'.


  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage to RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see, the server load is way too high, and that's why some sites take up to 10 seconds to load!

        Server load  22.46 (8 CPUs)                   (!)
        Memory Used  36.32% (2,959,188 of 8,146,632)  (ok)
        Swap Used    0.01% (132 of 2,104,504)         (ok)

        Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
        Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init)
        Linux

    Yesterday: total of 214,514 page views (AWStats).

    Now my question: can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster? (The websites are dynamic, so SQL-heavy.) Thanks

        top - 06:10:14 up 29 days, 20:37, 1 user, load average: 11.16, 13.19, 12.81
        Tasks: 526 total, 1 running, 524 sleeping, 0 stopped, 1 zombie
        Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8146632k total, 7427632k used, 719000k free, 131020k buffers
        Swap: 2104504k total, 132k used, 2104372k free, 4506644k cached

        PID     USER     PR  NI  VIRT   RES   SHR   S  %CPU   %MEM  TIME+      COMMAND
        318421  mysql    15   0  1315m  754m  4964  S  474.9   9.5  95300:17   mysqld
        6928    root     10  -5  0      0     0     S    2.0   0.0  90:42.85   kondemand/3
        476047  headus   17   0  172m   19m   10m   S    1.7   0.2  0:00.05    php
        476055  headus   18   0  172m   18m   9.9m  S    1.7   0.2  0:00.05    php
        476056  headus   15   0  172m   19m   10m   S    1.7   0.2  0:00.05    php
        476061  headus   18   0  172m   19m   10m   S    1.7   0.2  0:00.05    php
        6930    root     10  -5  0      0     0     S    1.3   0.0  161:48.12  kondemand/5
        6931    root     10  -5  0      0     0     S    1.3   0.0  193:11.74  kondemand/6
        476049  headus   17   0  172m   19m   10m   S    1.3   0.2  0:00.04    php
        476050  headus   15   0  172m   18m   9.9m  S    1.3   0.2  0:00.04    php
        476057  headus   17   0  172m   18m   9.9m  S    1.3   0.2  0:00.04    php
        6926    root     10  -5  0      0     0     S    1.0   0.0  90:13.88   kondemand/1
        6932    root     10  -5  0      0     0     S    1.0   0.0  247:47.50  kondemand/7
        476064  worldof  18   0  172m   19m   10m   S    1.0   0.2  0:00.03    php
        6927    root     10  -5  0      0     0     S    0.7   0.0  93:52.80   kondemand/2
        6929    root     10  -5  0      0     0     S    0.3   0.0  161:54.38  kondemand/4
        8459    root     15   0  103m   5576  1268  S    0.3   0.1  54:45.39   lvest
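
    Given that mysqld alone sits at ~475% CPU, the usual next step is finding the slow queries rather than shifting anything to RAM. A minimal sketch, assuming MySQL 5.1+ with an editable my.cnf (paths illustrative):

        # /etc/my.cnf - log queries slower than 2 seconds
        [mysqld]
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql/slow.log
        long_query_time     = 2

        # after some traffic, summarize the worst offenders by time:
        mysqldumpslow -s t /var/log/mysql/slow.log | head -20

    Indexing or caching whatever tops that list typically drops both the load average and the 10-second page times.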

    Read the article

  • Nagios escalation debugging

    - by Oesor
    I'm having some issues with escalations happening properly, and I'm not sure if it's because of my config or because the nagios binary is nonstandard and something may be broken. I've got little experience with Nagios and just want to make sure this is set up appropriately. Should the following config file definition allow the escalations to take over and increase the notification interval as expected? Is there somewhere else in the config files I should be looking to figure out what's going on? I've enabled debug 32 in the config, and it's simply spitting out 'Host notification will NOT be escalated.' for each notification. The configuration does pass the pre-flight check with no issues and reports that it's parsing the three host escalations in the config.

        # test host definition
        define host {
            host_name               test
            alias                   test
            address                 10.0.0.10
            hostgroups              test
            check_interval          0
            retry_interval          1
            max_check_attempts      2
            flap_detection_enabled  0
            icon_image              windows.png
            icon_image_alt          LOGO - Windows
            vrml_image              windows.png
            statusmap_image         windows.png
            action_url              /info/host/275
            check_period            24x7
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
            check_command           check_host_15!-H $HOSTADDRESS$ -t 3 -w 500.0,80% -c 1000.0,100%
            parents                 nagios
            notifications_enabled   1
            notification_interval   3
            notification_period     24x7
            notification_options    u,d,r
            use                     host-global
        }

        define hostescalation{
            host_name              test
            first_notification     3
            last_notification      4
            notification_interval  10
            contact_groups         hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation{
            host_name              test
            first_notification     4
            last_notification      5
            notification_interval  30
            contact_groups         hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation{
            host_name              test
            first_notification     5
            last_notification      0
            notification_interval  240
            contact_groups         hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }
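
    One way to get more detail than 'will NOT be escalated' is to turn up the notification-logic debugging and watch the decisions live. A minimal sketch, assuming a stock Nagios 3.x nagios.cfg (path illustrative):

        debug_level=32        # 32 = notification logic only; -1 = everything
        debug_verbosity=2
        debug_file=/usr/local/nagios/var/nagios.debug

        # then, while a notification fires:
        tail -f /usr/local/nagios/var/nagios.debug | grep -i escalat

    At verbosity 2, the debug log states which notification number the host is on and why each escalation did or did not match, which separates a config problem from a broken binary.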

    Read the article

  • How can I get Solr listening on 0.0.0.0 instead of just localhost?

    - by Neil
    I'm trying to get Solr to listen on 0.0.0.0 instead of just localhost, and it doesn't seem to be picking up the configuration options. I downloaded apache-solr-1.4.1 from the Solr website, and I'm running:

        user@:apache-solr-1.4.1/example $ java -jar start.jar

    With these configuration options:

        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.bio.SocketConnector">
              <Set name="host"><SystemProperty name="jetty.host" default="0.0.0.0" /></Set>
              <Set name="port"><SystemProperty name="jetty.port" default="8983" /></Set>
              <Set name="maxIdleTime">50000</Set>
              <Set name="lowResourceMaxIdleTime">1500</Set>
            </New>
          </Arg>
        </Call>

    Where the only line changed from the default is this one:

        <Set name="host"><SystemProperty name="jetty.host" default="0.0.0.0" /></Set>

    And when I check netstat, I see this:

        $ netstat -an | egrep 'Proto|\b8983\b'
        Proto Recv-Q Send-Q Local Address      Foreign Address    State
        tcp        0      0 127.0.0.1:8983     0.0.0.0:*          LISTEN
        tcp6       0      0 ::1:8983           :::*               LISTEN

    Where Local Address should be 0.0.0.0:8983 instead of 127.0.0.1:8983. Does anyone know why this might not be working?
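
    A quick way to rule out the edited jetty.xml being ignored is to set the value on the command line, since the connector reads the jetty.host system property before falling back to the default attribute. A minimal sketch:

        java -Djetty.host=0.0.0.0 -Djetty.port=8983 -jar start.jar

    If netstat then shows 0.0.0.0:8983, the 127.0.0.1 listener was coming from a different connector or config file than the one edited above, not from this <Call> block.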

    Read the article

  • How much RAM do I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI.

    On the server itself I have a Joomla community site with a forum system that generally has about 20 people on it max at any point, and even then, during the evening, no one. It's a pretty small site but has a number of modules running on it. It gets about 6,000 visits a month. Also on the server is a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet so they don't get any traffic at all, and 2 static HTML websites that also only get about 500 hits a month. All in all, no huge sites.

    The issue is that I get "out of memory" errors fairly frequently; they kill my server and I need to reboot it in order to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the RAM.

    Now, I'm still pretty new to all this, so I'm not a good judge of how much I really need for my sites to run. I don't know if my sites really do need this much, OR if VPSVille has oversold their servers, they don't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
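
    Before deciding, it helps to measure what actually eats the memory. A minimal sketch (run on the VPS while it's busy):

        free -m                          # overall usage vs. the 768 MB guarantee
        ps aux --sort=-rss | head -15    # biggest resident processes (usually Apache/PHP children)

    On a burstable OpenVZ-style container, /proc/user_beancounters (if present) is also worth checking: nonzero failcnt values mean the host is refusing allocations, i.e. the burst RAM isn't really available when you need it.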

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder in each site named /edit. Given that I have 2 websites now working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    What is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their view of /edit, and yet the code only exists in one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    Is it possible to create an alias inside that directive for the folder websitea.com/edit?
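
    Yes: Alias works per virtual host. A minimal sketch, assuming the shared code lives in /var/www/shared/edit (path illustrative) and Apache 2.2-style access control to match the config above:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The shared scripts can then decide which site's files to write by inspecting the Host header (or the vhost's DOCUMENT_ROOT), so each owner still logs in at their own /edit URL while the code exists once.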

    Read the article

  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can actually connect to my VPN, but all traffic is going through my normal home network. On my OpenVPN application I've got this information:

        Server IP: **.185.***.*10
        Client IP: 10.8.0.6
        Traffic: 7.3 KB in, 5.6 KB out
        Connected: 10 June 2014 19:21:59

    So everything is connected, but how can I set up Windows 7 so that all traffic has to go through the OpenVPN network card?

    Client settings:

        client
        dev tun
        proto udp
        # enter the server's hostname
        # or IP address here, and port number
        remote **.185.***.*10 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        # Use the full filepaths to your
        # certificates and keys
        ca ca.crt
        cert user1.crt
        key user1.key
        ns-cert-type server
        comp-lzo
        verb 6

    Server settings:

        port 1194
        proto udp
        dev tun
        # the full paths to your server keys and certs
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        dh /etc/openvpn/keys/dh2048.pem
        cipher BF-CBC
        # Set server mode, and define a virtual pool of IP
        # addresses for clients to use. Use any subnet
        # that does not collide with your existing subnets.
        # In this example, the server can be pinged at 10.8.0.1
        server 10.8.0.0 255.255.255.0
        # Set up route(s) to subnet(s) behind
        # OpenVPN server
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        ifconfig-pool-persist /etc/openvpn/ipp.txt
        keepalive 10 120
        status openvpn-status.log
        verb 6

    and sysctl:

        net.ipv4.ip_forward=1

    Thank you for your time and help.
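
    The config above never tells clients to use the tunnel as their default route. A minimal sketch of the usual fix, assuming the server's public interface is eth0 (interface name illustrative):

        # server config: push a default route to every client
        push "redirect-gateway def1 bypass-dhcp"

        # on the server shell: NAT the VPN subnet out the public interface
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    With redirect-gateway def1, the Windows client installs two more-specific /1 routes over the tun adapter instead of replacing the default gateway, so all traffic flows through the VPN and the original route is restored cleanly on disconnect.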

    Read the article

  • Need recommendation: which USB card (PCMCIA) works with an external hard drive?

    - by Carl
    I have an HT-Link CardBus/PCMCIA USB 2.0 2-port card (NEC / 32-bit). My external hard drive with USB adapter won't work with it, though it does work plugged directly into a USB port on a different laptop. (My USB ports got fried.) I got the card off eBay. My MP3 player works plugged into that card. The drivers for the card say: "Known limitations: High Speed Isochronus, USB Composite Devices." (No other details provided.) I don't know if the hard drive adapter is "isochronous" or "composite." I've read there are problems with too little power being supplied to the drive. The cable to the drive has two USB plugs on one end, and it doesn't make any difference if I plug both of them into the CardBus card. What card should I get? I see many different brands on eBay. I need one that supplies sufficient power for an external hard drive and doesn't have any "known limitations" in the way.

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an HTTP virtual cluster; let's call this virtual cluster example1.com. It load-balances between two Squid reverse proxies, which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server, which listens on 81. We also have a standalone web server, which we will call example2.com.

    What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at the Squid docs in the meantime.

    Notes: example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
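
    Since Squid is already the reverse proxy, it can route by URL path to a second origin server. A minimal sketch in Squid 2.x-style syntax (peer name and ACL name illustrative):

        cache_peer example2.com parent 80 0 no-query originserver name=ex2
        acl ex2_path urlpath_regex ^/example2
        cache_peer_access ex2 allow ex2_path
        cache_peer_access ex2 deny all

    One caveat: unless example2.com's pages use relative links, its HTML will reference paths the proxy doesn't map back under /example2, so a URL rewriter or path-prefix-aware app config on example2.com is usually needed as well.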

    Read the article

  • Setting up a VPN server

    - by Lock
    I need help visualising how to set up our VPN box when we move to our new network with Telstra. We have a Safe@Office 500P, which has a public IP and a private IP of 192.168.19.2. It is physically connected to our router, which has 4 different interfaces, one being 192.168.19.1. On the VPN box, we have a static route to forward everything to 192.168.19.1, which is the router, and from there it works out where to go.

    Now, we are moving to a Telstra WAN and things are set up a little differently. Our head office router has only 3 interfaces: 1 is for the link to the switch that has the fibre connection (so our route to the internet and other branches), 1 is for our 10.10.20.x network, and one is for the local branch network. I really have no idea how to set this up, as with the new arrangement we will not have a port for it to plug into on the router. Could I just plug it into the 10.10.20.x network? Would I have to give it a public IP, or can we just forward through the ports that it would use?

    Another suggestion was to VLAN our switch into two networks - one for the 10.10.20.x network and one for the network the VPN currently sits on (192.168.19.x) - and set up the router to trunk between the port and the switch. Not sure how to do this. Sorry, VPNs are definitely not my strong suit. Any advice appreciated!
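
    For the VLAN suggestion, the shape of the config is one trunk port facing the router and access ports for each network. A hypothetical Cisco IOS-style sketch (your switch's syntax will differ; VLAN IDs are illustrative):

        vlan 19
         name VPN-192.168.19.x
        vlan 20
         name LAN-10.10.20.x
        interface FastEthernet0/1
         description uplink to router (carries both VLANs)
         switchport mode trunk
         switchport trunk allowed vlan 19,20
        interface FastEthernet0/2
         description Safe@Office VPN box
         switchport mode access
         switchport access vlan 19

    The router then needs a subinterface (or equivalent) per VLAN on that one physical port, so 192.168.19.x keeps routing exactly as it does today without consuming a dedicated interface.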

    Read the article

  • How to set up a virtual host in Ubuntu running on an Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp if I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone might ask, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
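
    One problem stands out: ServerName only accepts a hostname, so "apps.mydomain.com/myapp" is invalid and Apache falls back to default vhost matching. A minimal sketch of the shape that usually works for serving a Laravel app under a sub-path (Apache 2.2 syntax, matching the config above):

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            DocumentRoot /var/www/myapp/public
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All    # lets Laravel's .htaccess route requests to index.php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Dropping "Indexes" from Options (as above) also stops the directory listings that currently expose the files to anyone.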

    Read the article

  • What logs / process stats to monitor on an Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of what logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them to see if I have things like memory leaks. But I would like to know what experienced admins do.

    Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run?

    What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly do a grep on auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks
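
    For the forced-check part: that message comes from the ext filesystem's mount-count and interval counters, which can be inspected and tuned rather than worked around with a cron job. A minimal sketch, assuming an ext3/ext4 root on /dev/sda1 (device illustrative):

        sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'    # current counters
        sudo tune2fs -c 50 -i 90d /dev/sda1                         # check every 50 mounts or 90 days

    A full fsck still requires the filesystem to be unmounted, so "running it at night" generally means touching /forcefsck and scheduling the reboot for a quiet hour rather than fsck-ing a live root filesystem.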

    Read the article

  • Friendly Intranet Addresses

    - by Jmyster
    Relatively new to IIS. I'm attempting to set up multiple sites on my intranet on one server. The server already has SharePoint installed with a binding of *:80, so when I type //ServerName I get the home page of SharePoint. I get how that works. I set up a new site in IIS and set the binding to *:30015. On a remote machine, if I type //ServerName:30015 in a web browser, I get the new site. Awesome, working as intended.

    My questions:

    1. Can I, and how do I, set it up so that I can type //DivisionAppName or //Division.AppName and have it resolve to //ServerName:30015?
    2. Is this something I have to register with my company's DNS server? I hope not; getting my corporate IT to assist is a nightmare.

    What I tried: I added bindings with the host name filled in as both DivisionAppName and Division.AppName on port 30015, but that doesn't seem to work.
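
    Unfortunately a browser can't discover a port from a name: //Division.AppName always implies port 80, so the host-header binding has to live on port 80 alongside SharePoint, and the name does need to resolve. A minimal sketch (names and IP illustrative):

        DNS:          Division.AppName  ->  CNAME to ServerName (or an A record)
        IIS binding:  http : * : 80 : Division.AppName   (host name set, port 80)
        client test:  add "10.0.0.5  Division.AppName" to C:\Windows\System32\drivers\etc\hosts

    IIS routes port-80 requests by host header, so SharePoint's *:80 binding (no host name) and the new site's Division.AppName:80 binding can coexist on the same IP; the hosts-file line lets you prove it works before involving corporate DNS.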

    Read the article

  • Strange 3-second tcp connection latencies (Linux, HTTP)

    - by user25417
    Our webservers with static content are experiencing strange 3-second latencies occasionally. Typically, an ApacheBench run (10000 requests, concurrency 1 or 40, no difference, but keepalive off) looks like this:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        2   10  152.8      3   3015
        Processing:     2    8   34.7      3    663
        Waiting:        2    8   34.7      3    663
        Total:          4   19  157.2      6   3222

        Percentage of the requests served within a certain time (ms)
          50%      6
          66%      7
          75%      7
          80%      7
          90%      9
          95%     11
          98%    223
          99%    225
         100%   3222 (longest request)

    I have tried many things:

    - Apache2 2.2.9 with worker or prefork MPM, no difference (with KeepAliveTimeout 10-15)
    - Nginx 0.6.32
    - various tcp parameters (net.core.somaxconn=3000, net.ipv4.tcp_sack=0, net.ipv4.tcp_dsack=0)
    - putting the files/DocumentRoot on tmpfs
    - shorewall on or off (i.e. empty iptables or not)
    - AllowOverride None is on for /, so no .htaccess checks (verified with strace)
    - the problem persists whether the webservers are accessed directly or through a Foundry load balancer

    The kernel is 2.6.32 (Debian Lenny backports), but it occurred with 2.6.26 also. IPv6 is enabled, but not used. Does the issue look familiar to anyone? Help/suggestions are much appreciated. It sounds a bit like a SYN,ACK packet getting lost or ignored.
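
    The 3-second figure is suggestive: it matches the classic initial SYN retransmission timeout on kernels of that era, so a max connect time of ~3015 ms usually means the first SYN (or the SYN,ACK) was dropped and the client retried once. A minimal sketch of where to look first:

        # dropped SYNs / listen-queue overflows on the server
        netstat -s | grep -iE 'listen|SYN'

        # capture the handshake during a test run; repeated SYNs confirm the drop
        tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0 and port 80'

    If overflow counters climb, raising the application's listen() backlog is the usual fix (somaxconn alone doesn't help if the app asks for a small backlog); if the SYNs never reach the box at all, suspicion shifts to the Foundry balancer or an upstream device.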

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from a specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file, it just hangs. Here is the debug output of the base Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        ? ss -nt dst 64.156.167.125
        State    Recv-Q  Send-Q  Local Address:Port      Peer Address:Port
        ESTAB    0       0       10.185.147.150:41190    64.156.167.125:21
        ESTAB    0       0       10.185.147.150:48871    64.156.167.125:48557

    The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when on every other machine I can get my hands on, it works. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
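
    An established data connection that never delivers a byte is a classic path-MTU blackhole symptom: the server's first full-size data segment gets dropped, while the small control-channel packets sail through. A minimal sketch to test from the affected instance (interface name illustrative):

        # does a full-size, don't-fragment packet survive the path?
        ping -M do -s 1472 64.156.167.125

        # workaround test: clamp the MTU and retry the download
        sudo ip link set dev eth0 mtu 1400

    Running tcpdump on the data port during a transfer would confirm it: you'd see the server retransmitting the same large segment while the client's ACK number never advances.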

    Read the article

  • Strategy to isolate multiple nginx SSL apps on a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block. But I think it's time to dive a little deeper.

    To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

        rooturl/railsapp
        rooturl/nodejsapp
        rooturl/phpshop
        rooturl/phpblog

    I am unsure of the ideal strategy. Some examples I have seen and/or thought about:

    - Multiple location rules: this seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    - Apps isolated by backend internal port: is this possible? Each port routing to its own config, so config is isolated and can be bespoke to the app's requirements?
    - Reverse proxy: I am a little ignorant of how this works; is this what I need to research? Is this actually option 2 above? Help online always seems to proxy to another server, e.g. Apache.

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
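
    Options 2 and 3 are really the same idea: the front nginx holds the single certificate and proxies each sub-URI to an app listening on its own internal port, each with its own fully isolated config. A minimal sketch (ports, names and cert paths illustrative):

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/nginx/ssl/example.crt;
            ssl_certificate_key /etc/nginx/ssl/example.key;

            location /railsapp/  { proxy_pass http://127.0.0.1:3000/; }
            location /nodejsapp/ { proxy_pass http://127.0.0.1:3001/; }
            location /phpshop/   { proxy_pass http://127.0.0.1:8080/; }
        }

    The trailing slash on proxy_pass strips the sub-URI prefix before the request reaches the backend. The remaining work is per-app: each framework must generate links that include its prefix, which most of them support via a base-path or relative-URL-root setting.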

    Read the article

  • Apache Virtualhosts with PHP and custom logs - how to isolate PHP errors?

    - by Repox
    I'm trying to set up a simple hosting environment for my application on an Ubuntu server. I created a virtualhost like this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.example.com
            ServerAlias example.com
            DocumentRoot /home/owner/example.com/docs
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/owner/example.com/docs/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /home/owner/example.com/logs/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /home/owner/example.com/logs/access.log combined
            php_flag log_errors on
            php_value error_log /home/owner/example.com/logs/php-error.log
        </VirtualHost>

    Now, my problem is that PHP errors and warnings are thrown in the error.log, not the php-error.log as I was hoping. How can I achieve this?
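
    When mod_php can't write to the configured error_log, errors fall through to Apache's ErrorLog, so permissions are the first thing to check. A minimal sketch (web user name illustrative):

        sudo touch /home/owner/example.com/logs/php-error.log
        sudo chown www-data /home/owner/example.com/logs/php-error.log
        sudo -u www-data sh -c 'echo test >> /home/owner/example.com/logs/php-error.log'

    The other common cause: if PHP runs via CGI/FastCGI rather than mod_php, php_flag/php_value directives in the vhost are silently ignored, and the settings belong in the site's php.ini instead.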

    Read the article

  • Shell replacement for kiosk environment with desktop icons

    - by tuesprem
    I have to deploy around 80 touch-screen terminals for my company. We're going to run several Windows applications on those terminals, operated by touch input only. I basically want those machines to run in a kiosk mode, so employees won't be able to launch any programs other than the ones they're supposed to (also, this will hide the task bar/start menu, which will give it a much nicer, corporate look). There's usually one app that should automatically launch on startup, plus other apps that can be launched from the desktop.

    I've tried to accomplish this (under Windows 7 Pro) with a custom shell replacement. This removes the task bar/start menu like I described earlier. I used an AutoHotkey script for this, because it's easy to, for instance, define hotkeys so admins will still be able to launch stuff like Windows Explorer (okay, I know I could just launch it from the task manager [Ctrl+Shift+Esc], but I'm considering locking that as well). So far so good. Some problems though:

    - No desktop icons. Needed to launch apps from the desktop.
    - No wallpaper. Required for CI.

    Is there any way around this? I wouldn't mind using another shell replacement, as long as it's reasonably lightweight. I'd love to get the wallpaper and desktop icons to show with something like AutoHotkey though. Thanks!
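
    Background on why both disappear: the wallpaper and desktop icons are drawn by explorer.exe, so any replacement shell has to paint its own. The swap itself is just a registry value; a minimal sketch for a per-user kiosk shell (path illustrative):

        ; HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon
        ; "Shell" (REG_SZ) = "C:\kiosk\kiosk.exe"

    An AutoHotkey-based shell can restore the wallpaper via the SystemParametersInfo API (SPI_SETDESKWALLPAPER) and present its own GUI grid of large launcher buttons, which on a touch screen is usually friendlier than real desktop icons anyway.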

    Read the article

  • Linux 'top' utility wildly inaccurate (more so on multi-CPU/core hardware)?

    - by amn
    Hi all. After using 'top' for a long time, albeit basically, I have grown to distrust its '%CPU' column reports. I have 8-core (quad-core Intel i7 920 with hyperthreading) hardware and see some wild numbers when running a process that should not use more than 5% overall. top happily reports 50%, and I suspect it is not so.

    My question is: is it a known fact that top is inaccurate when several CPUs/cores are present? I used 'mpstat' from the 'sysstat' package, and its numbers are much more conservative, certainly within my expectations. I did press '1' for top to show all the cores and us/sy/io stats, but the numbers are substantially higher than with mpstat... I know that my expectations can be unfounded as well, but my gut feeling tells me top is wrong! :-)

    The reason I need to know is that the process I am monitoring only guarantees quality of service with CPU usage "less than 80%" (however vague that sounds), and I need to know how much headroom I have left. It's a streaming server.
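
    Most of the gap is accounting, not inaccuracy: by default top shows %CPU in "Irix mode", where 100% means one full core, so a multithreaded process on an 8-thread i7 can legitimately show far more than its whole-machine share (50% of one core is about 6% of the machine). A minimal sketch for comparing the two views:

        # inside top, press Shift+I to toggle Irix/Solaris mode;
        # Solaris mode divides %CPU by the CPU count, matching mpstat's view
        top

        # whole-machine and per-core figures from sysstat:
        mpstat -P ALL 1 5

    For the 80% QoS threshold, the Solaris-mode figure (or 100 minus mpstat's %idle) is the number to watch.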

    Read the article

  • Can't get PHP to work with my Nginx virtual host. Keeps returning "No input file specified"

    - by steve
    I'm trying to get phpMyAdmin up and running on my server. Here's the nginx vhost for it:

        server {
            listen 80;
            server_name server.mydomain.net;
            location /phpmyadmin/ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include /opt/nginx/conf/fastcgi_params;
                alias /usr/share/phpmyadmin/;
            }
            root /opt/nginx/html/;
        }

    Here's my fastcgi_params file:

        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

    I compiled lighttpd so I could pull out spawn-fcgi. That is now sitting in /usr/local/bin and is accompanied by my php5-cgi launcher, which looks like:

        #!/bin/sh
        /usr/local/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -C 2 -f /usr/bin/php5-cgi

    I run this and can see that it's successfully launched by doing a ps aux | grep php. However, whenever I try to open phpMyAdmin, I get the error "No input file specified". What am I doing wrong? :/
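
    "No input file specified" means PHP was handed a SCRIPT_FILENAME that doesn't exist, and there is a likely culprit above: inside a location using alias, $fastcgi_script_name still contains the /phpmyadmin/ prefix, so the concatenation becomes /usr/share/phpmyadmin/phpmyadmin/index.php. A minimal sketch of the usual fix:

        location /phpmyadmin/ {
            alias /usr/share/phpmyadmin/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /opt/nginx/conf/fastcgi_params;
            # $request_filename resolves against the alias, avoiding the doubled path
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }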

    Read the article

  • How to set up an IIS 7.5 reverse proxy for quite a few internal servers - Server Farm?

    - by Tim Murphree
    I have tried for a few days, but I'm lost. Here's what I'm trying to do: I want to set up IIS 7.5 as a reverse proxy for about 30 internal HTTP servers located on my internal LAN. Everything is running on port 80. The internal servers are really IP-based webcams. Here is the scenario:

        www.mycamserver.com/cam1   ->  192.168.1.101
        www.mycamserver.com/cam2   ->  192.168.1.102
        and so on, until...
        www.mycamserver.com/cam30  ->  192.168.1.130

    I have installed ARR and URL Rewrite. So far, I have managed, at one time, to seemingly forward an incoming URL to an internal server, but the page would not fully load (error 404). Also, I set up a Server Farm, but it seems all traffic is now sent to the first node in the Server Farm (192.168.1.101); however, at least the page loads and runs correctly. I simply want to do an exact match, for example "cam14", and reverse-proxy/rewrite to the corresponding internal server address, "192.168.1.114".
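
    A server farm isn't needed for one-to-one mappings; a plain URL Rewrite rule per camera (with ARR's proxy enabled) does it. A minimal sketch of the web.config fragment (rule name illustrative):

        <rewrite>
          <rules>
            <rule name="cam14" stopProcessing="true">
              <match url="^cam14(/.*)?$" />
              <action type="Rewrite" url="http://192.168.1.114{R:1}" />
            </rule>
          </rules>
        </rewrite>

    The earlier 404s on partially loaded pages are usually the camera's HTML referencing absolute paths like /img/logo.gif, which arrive at IIS without the /cam14 prefix; those need their own rules or ARR's outbound response rewriting.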

    Read the article

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: "How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI)".

    I have a server running Apache 2.4 on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first vhost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour, or in other words, how do we tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

        <VirtualHost foobar.com:80>
            DocumentRoot /somepath/foobar.com
            <Directory /somepath/foobar.com>
                Options -Indexes
                Require all granted
                DirectoryIndex index.php
                AllowOverride All
            </Directory>
            ServerName foobar.com
            ServerAlias www.foobar.com
        </VirtualHost>

        <VirtualHost test.example.com:443>
            DocumentRoot /somepath/
            <Directory /somepath/>
                Options -Indexes
                Require all granted
                AllowOverride All
            </Directory>
            ServerName test.example.com
            SSLEngine on
            SSLCertificateFile [...]
            SSLCertificateKeyFile [...]
            SSLCertificateChainFile [...]
        </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome will show me an SSL error mentioning that the server was identifying itself as test.example.org. Thanks in advance!
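
    Apache always hands an HTTPS request to some 443 vhost; since foobar.com has none, the first SSL vhost defined wins. The usual mitigation is a catch-all default 443 vhost, loaded before the real ones, that refuses to serve anything. A minimal sketch (certificate paths illustrative; any self-signed cert will do since it only serves a denial):

        <VirtualHost _default_:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/default-selfsigned.pem
            SSLCertificateKeyFile /etc/ssl/private/default-selfsigned.key
            <Location />
                Require all denied
            </Location>
        </VirtualHost>

    Browsers will still warn about the certificate name, since TLS happens before Host-based routing can occur, but they will no longer land on test.example.com's actual content.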

    Read the article

  • My Ubuntu 10.04 server kills all WAN bandwidth when it's attached to my LAN. Where do you begin troubleshooting?

    - by rrc7cz
    First I should say that my Linux knowledge is minimal; just enough to set up some servers (Apache, Tomcat, Couch, etc.). I built a Mini-ITX server to host some simple sites, act as an SSH tunnel while I'm away, and act as a torrent server. It was not properly secured for a long time (iptables was empty, all ports open, no firewall), though my router did not have much port forwarding set up beyond HTTP, FTP, and SSH.

    A week or two ago, my bandwidth at home dropped from around 27Mbps to 2Mbps, and my upload went from 7Mbps to 0.06Mbps. When I unplug the server from the LAN, my bandwidth shoots back up. I threw up a restrictive iptables config, removed most of the port forwarding, and checked my router logs to see if there were any open connections from the server (malware?), but there were none.

    What would you do? What are the first things you'd check? I can of course reinstall everything from scratch, but I'd like to find the root cause.
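
    Two cheap checks narrow it down fast: whether the box is flooding the link (torrent client or compromise), and whether the NIC/switch negotiation is broken; a duplex mismatch produces exactly this "everything slows while it's plugged in" symptom. A minimal sketch (interface name illustrative):

        sudo ethtool eth0 | grep -iE 'speed|duplex'       # expect e.g. 1000Mb/s, Full
        ifconfig eth0 | grep -iE 'errors|dropped|collisions'
        sudo iftop -i eth0                                # live view of who it's talking to (apt-get install iftop)
        sudo netstat -ntp | wc -l                         # hundreds of peers usually means the torrent client

    If outbound traffic is near zero while errors or collisions climb, the problem is the link layer, not malware.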

    Read the article

  • Virtual host Alias not routing properly

    - by Jacob
    I apologize if this question has been asked many times in the past. I am not 100% sure of the exact cause of my issue and am out of Google magic right now. Basically, I have a virtual host file set up with an Alias record that points to a directory other than the document root. It basically looks like this:

        <VirtualHost *:80>
            ServerName iBusinessCentral.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/marketingsites/
            ServerAlias iBusinessCentral.com *.iBusinessCentral.com
            Alias /unsub "/var/www/unsub/site_index/"
        </VirtualHost>

    When I navigate to ibusinesscentral.com/unsub/?randomquerystring, I am directed to the correct folder. If I remove the query string and navigate to ibusinesscentral.com/unsub/, I am taken to the directory in the document root. The unsub directory is a Zend application, and I need to be able to navigate to different URL paths like ibusinesscentral.com/unsub/unenroll?querystring. I have tried using AliasMatch instead of Alias. I have also tried adding a slash after the unsub portion of the Alias record, and have not had any luck to this point. Thanks in advance for any assistance.
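
    For a Zend Framework app, every path under /unsub has to reach the front controller (index.php), which normally happens via rewrites in the aliased directory; Alias alone won't do that, and the directory also needs access and override permissions. A minimal sketch of the vhost additions (2.2-style access lines, matching the config above):

        Alias /unsub /var/www/unsub/site_index
        <Directory /var/www/unsub/site_index>
            AllowOverride All      # lets ZF's .htaccess RewriteRules run
            Order allow,deny
            Allow from all
        </Directory>

    One more ZF-specific detail: the app must know it lives under /unsub, typically by setting the front controller's base URL (e.g. $frontController->setBaseUrl('/unsub');), or its routes and generated links will resolve against the document root.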

    Read the article

  • Cannot browse Visual SVN Repositories

    - by user1783560
    I went through "Unable to browse repository after setting up VisualSVN Server" and tried to implement the answer suggested, but even using the IP address is not solving my problem. I have:

    1. Turned off the firewall.
    2. Set the directory C:\Program Files (x86)\VisualSVN Server security rights to "full control" for all groups, including the "Network Service" group.
    3. Set the directory D:\Repositories security rights to "full control" for most groups, including the "Network Service" group.
    4. Tried switching between the secure and non-secure connection options in the VisualSVN Server manager.
    5. Tried different port numbers, including 443 (with https) and 80 (http); also tried random ports with/without https.

    Using the computer name or IP address in the URL doesn't help. The Windows Network Diagnostics troubleshooting report gives me: "Security policy settings or firewall settings on this computer might be blocking the connection." Can someone guide me please? I have spent more than two weeks on this now without any solution. Thanks in advance! I appreciate your responses.
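
    A useful split is to test from the server itself first, then from a client, so you know whether the service or the network is at fault. A minimal sketch (run on the server, port as configured in VisualSVN):

        netstat -ano | findstr :443
        REM if nothing shows LISTENING, the VisualSVN Server service isn't binding the port at all;
        REM if it is listening, try browsing https://localhost/svn/ on the server itself

    If localhost works but remote machines fail, the block sits between the hosts (a domain security policy, Windows Firewall rule pushed by GPO, or antivirus), not in VisualSVN's own configuration.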

    Read the article

  • MySQL InnoDB/socket issue on Mac OS X 10.6.4

    - by user55217
    I have an ongoing issue on my MacBook Pro running OS X 10.6.4. Intermittently, my MySQL install will not create a socket on startup. Rebooting sometimes, but not always, solves the problem. Deleting the ib* files in /usr/local/mysql/data and then restarting sometimes, but not always, solves the problem. My error logs tell me the following:

        Plugin InnoDB init function returned error
        Plugin InnoDB registration as a STORAGE ENGINE failed
        Can't start server: Bind on TCP/IP port: Address already in use
        Do you already have another mysqld server running on port: 3306?
        Aborting

    It then appears to attempt to start again and generates this error 20-30 times:

        Unable to lock ./ibdata1, error 35
        Check that you do not already have another mysqld process using the same InnoDB data or log files

    Though the socket file is not created, I can connect to my MySQL db directly over localhost, although this does not help me from a PHP standpoint. Any thoughts on what I can do to resolve the issue, or how to debug further?
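
    Both errors point the same way: an old mysqld is still alive, holding port 3306 and the InnoDB file locks, so the new instance aborts before it ever creates the socket. A minimal sketch for confirming and clearing it on OS X:

        sudo lsof -nP -iTCP:3306 -sTCP:LISTEN    # which process owns the port?
        ps aux | grep [m]ysqld                   # stray mysqld / mysqld_safe processes
        sudo kill <pid-from-above>               # then restart MySQL normally

    Deleting the ib* files is best avoided: it "works" only because it forces InnoDB to rebuild, and it can destroy InnoDB data. Killing the stale process addresses the actual lock (error 35 is EAGAIN on OS X, i.e. the file is locked by another process).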

    Read the article
