Search Results

Search found 6637 results on 266 pages for 'usr'.

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, then fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is that my upstream does not serve the contents of the URI directly; instead, it returns X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently this should work. The problem, though, is that Nginx tries to serve this file from its compiled-in default document root instead of the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
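
    The mangled path in the error log ("...htmlanimals/kitten.jpg...") hints at the cause: the X-Accel-Redirect value has no leading slash, so Nginx concatenates it onto its compiled-in default root instead of re-running location matching. A hedged sketch of one common arrangement (the /protected/ prefix is illustrative, not from the original setup): have the upstream return an absolute URI and catch it in a dedicated internal location that points at the intended root:

        # upstream returns: X-Accel-Redirect: /protected/animals/kitten.jpg__100x100__crop.jpg
        location /protected/ {
            internal;
            alias /var/www/nginx/;   # /protected/foo is then served from /var/www/nginx/foo
        }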

  • Apache is sending PHP files to my browser instead of parsing them

    - by justen doherty
    I have to set up PHP on an existing web host. I have made a virtual host entry, but for some reason Apache sends the PHP source to the browser instead of parsing it. From googling around it looks like a problem with the MIME types, but I'm not an Apache expert by any means, so if anyone could help it would be appreciated. I have the following in my httpd.conf:

        AddHandler php5-script php
        DirectoryIndex index.html index.phtml index.php index.phps
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    The PHP module is loaded into Apache, per /usr/sbin/apachectl -M:

        Loaded Modules: core_module (static) mpm_prefork_module (static) http_module (static) so_module (static) auth_basic_module (shared) auth_digest_module (shared) authn_file_module (shared) authn_alias_module (shared) authn_anon_module (shared) authn_dbm_module (shared) authn_default_module (shared) authz_host_module (shared) authz_user_module (shared) authz_owner_module (shared) authz_groupfile_module (shared) authz_dbm_module (shared) authz_default_module (shared) ldap_module (shared) authnz_ldap_module (shared) include_module (shared) log_config_module (shared) logio_module (shared) env_module (shared) ext_filter_module (shared) mime_magic_module (shared) expires_module (shared) deflate_module (shared) headers_module (shared) usertrack_module (shared) setenvif_module (shared) mime_module (shared) dav_module (shared) status_module (shared) autoindex_module (shared) info_module (shared) dav_fs_module (shared) vhost_alias_module (shared) negotiation_module (shared) dir_module (shared) actions_module (shared) speling_module (shared) userdir_module (shared) alias_module (shared) rewrite_module (shared) proxy_module (shared) proxy_balancer_module (shared) proxy_ftp_module (shared) proxy_http_module (shared) proxy_connect_module (shared) cache_module (shared) suexec_module (shared) disk_cache_module (shared) file_cache_module (shared) mem_cache_module (shared) cgi_module (shared) version_module (shared) fcgid_module (shared) perl_module (shared) php5_module (shared) proxy_ajp_module (shared) ssl_module (shared)

    And this is my virtual host entry:

        <VirtualHost 10.16.140.113:80>
            ServerName viridor-cms.co.uk
            ServerAlias www.viridor-cms.co.uk
            UseCanonicalName Off
            DocumentRoot /var/www/vhosts/viridor-cms.co.uk/httpdocs
            CustomLog /var/www/vhosts/viridor-cms.co.uk/cms-access_log common
            ErrorLog /var/www/vhosts/viridor-cms.co.uk/cms-error_log
            DirectoryIndex index.php index.html
            <IfModule sapi_apache2.c>
                php_admin_flag engine on
                php_admin_flag safe_mode on
            </IfModule>
            <IfModule mod_php5.c>
                php_admin_flag engine on
                php_admin_flag safe_mode on
            </IfModule>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
        </VirtualHost>

    Please help, my head is sore from banging it against the table and the wall!
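
    One hedged guess: the handler mappings live in the global httpd.conf, but if this <VirtualHost> is what actually answers the request, AddType alone may not be enough to hand .php files to mod_php. A minimal check-and-fix sketch, assuming mod_php5 is the intended SAPI, is to map the extension to the PHP handler explicitly inside the vhost:

        # confirm which config file Apache actually loads
        /usr/sbin/apachectl -V | grep SERVER_CONFIG_FILE

        # inside the <VirtualHost>, bind .php to the PHP handler unambiguously
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

        # validate and reload
        /usr/sbin/apachectl configtest && /usr/sbin/apachectl graceful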

  • Apache Alias Isn't In Directory Listing

    - by Phunt
    I've got a site running on my home server that's just a front end for me to grab files remotely. There are no pages, just a directory listing (Options Indexes...). I wanted to add a link to a directory outside of the webroot, so I made an alias. After a minute of dealing with permissions, I can now navigate to the directory by typing the URL into the browser, but the directory isn't listed in the root index. Is there a way to do this without creating a symlink in the root? Server: Ubuntu 11.04, Apache 2.2.19. Relevant vhost:

        <VirtualHost *:80>
            ServerName some.url.net
            DocumentRoot "/var/www/some.url.net"
            <Directory /var/www/some.url.net>
                Options Indexes FollowSymLinks
                AllowOverride None
                Order Allow,Deny
                Allow From All
                AuthType Basic
                AuthName "TPS Reports"
                AuthUserFile /usr/local/apache2/passwd/some.url.net
                Require user user1 user2
            </Directory>
            Alias /some_alias "/media/usb_drive/extra files"
            <Directory "/media/usb_drive/extra files">
                Options Indexes FollowSymLinks
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>
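
    A likely explanation, offered tentatively: mod_autoindex builds the listing by reading the real directory on disk, and an Alias exists only in URL space, so it can never show up in the index. Short of a symlink, one workaround is to inject a link through the index header file (file name here is illustrative):

        # in the <Directory /var/www/some.url.net> block
        HeaderName /.header.html

        # contents of /var/www/some.url.net/.header.html
        <p><a href="/some_alias/">extra files (USB drive)</a></p>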

  • Apache2 config variable is not defined

    - by Kurt Bourbaki
    I installed apache2 on Ubuntu 13.10. If I try to restart it using sudo /etc/init.d/apache2 restart, I get this message:

        AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    So I read that I should edit my httpd.conf file. But, since I can't find it in the /etc/apache2/ folder, I tried to locate it using this command:

        /usr/sbin/apache2 -V

    But the output I get is this:

        [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    Line 74 of /etc/apache2/apache2.conf is this:

        Mutex file:${APACHE_LOCK_DIR} default

    I gave a look at my /etc/apache2/envvars file, but I don't know what to do with it. What should I do?
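
    On Debian and Ubuntu, the ${APACHE_*} variables are defined in /etc/apache2/envvars, which apache2ctl sources automatically but the bare /usr/sbin/apache2 binary does not; that is all the AH00111 warnings mean. A sketch of the usual approach:

        # run the binary with its environment loaded
        source /etc/apache2/envvars
        /usr/sbin/apache2 -V
        # or simply
        apache2ctl -V

        # silence AH00558 with a global ServerName
        echo "ServerName localhost" | sudo tee /etc/apache2/conf-available/servername.conf
        sudo a2enconf servername
        sudo service apache2 restart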

  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the groups
        file should be in the same directory as the rpm packages
        (i.e. /path/to/rpms/comps.xml).
        createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
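
    One hedged explanation: createrepo was pointed at .../os/Packages/, which creates repodata under Packages/, while the clients read the repodata directory at the level mrepo actually serves, and mrepo regenerates that repodata itself (without -g) on its next run. A sketch of things to verify (paths assume wwwdir = /var/www/mrepo from the config above):

        # does the served repomd.xml reference a group file at all?
        grep -B1 -A3 'type="group"' /var/www/mrepo/SL6-x86_64/os/repodata/repomd.xml

        # regenerate at the directory yum actually uses as its baseurl
        createrepo -g comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/os/

        # then, on the client
        yum clean all && yum grouplist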

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I followed the linked guide, but I have not been successful; the tarball version I downloaded is 6.3.45. As I wanted to test the cluster, the data node and SQL node are the same machines, but the SQL nodes never appear as connected in the management node console, which looks like this:

        [ndbd(NDB)] 2 node(s)
        id=2 @192.168.1.107 (Version: version number, Nodegroup: 0, Master)
        id=3 @192.168.1.108 (Version: version number, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1 @192.168.1.105 (Version: version number)

        [mysqld(API)] 2 node(s)
        id=4 (not connected, accepting connect from 192.168.1.107)
        id=5 (not connected, accepting connect from 192.168.1.108)

    On all three machines mysql-server and mysql-client (apt-get install mysql-server mysql-client) were already installed; I completely stopped them and also removed them from system startup. The mysqld now in use is from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). As a test, I created a test database on both data nodes, but the tables are not syncing to the other node. I checked many links; the configurations are similar in all of them, but somewhere it's going wrong. Is any extra package required? Could anyone help me here? I have been trying this for the past 3 days... Thank you!
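
    Two things commonly produce exactly these symptoms, offered as a hedged checklist: mysqld only registers as an API node if it is started with NDB support enabled and pointed at the management node, and tables only replicate between data nodes if they use the NDB engine (InnoDB/MyISAM tables stay local to each SQL node). A sketch of my.cnf on each SQL node:

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.1.105

        [mysql_cluster]
        ndb-connectstring=192.168.1.105

    and test tables must be created with the cluster engine:

        CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;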

  • Trying to build OpenSimulator, but nant fails

    - by Gary
    Output of nant is:

        Buildfile: file:///root/opensim-0.6.8-release/OpenSim.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build
        [echo] Using 'mono-2.0' Framework
        init:
        Debug: [echo] Platform unix
        build:
        [nant] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build build
        Buildfile: file:///root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build
        build:
        [echo] Build Directory is /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug
        [csc] Compiling 29 files to '/root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug/OpenSim.Framework.Servers.HttpServer.dll'.
        [csc] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/AsynchronousRestObjectRequester.cs(103,41): error CS0246: The type or namespace name `TResponse' could not be found. Are you missing a using directive or an assembly reference?
        [csc] Compilation failed: 1 error(s), 0 warnings

        BUILD FAILED - 0 non-fatal error(s), 1 warning(s)

        /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build(14,6): External Program Failed: /usr/lib/pkgconfig/../../lib/mono/2.0/gmcs.exe (return code was 1)

        Total time: 1.2 seconds.

        BUILD FAILED
        Nested build failed. Refer to build log for exact reason.

        Total time: 1.3 seconds.

    OS is Fedora 7. Any ideas appreciated. :)
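
    A hedged observation: `TResponse' in AsynchronousRestObjectRequester is a generic type parameter, and very old gmcs releases had trouble resolving generics in code like this; Fedora 7-era Mono is far older than what OpenSim 0.6.8 was developed against. Checking the compiler version is the cheapest first step, with a Mono upgrade (or a build on a newer distro) as the likely fix:

        mono --version
        gmcs --version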

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example, something like this would be perfect:

        <VirtualHost *:80>
            AssignUserId $domain webspaces
            ServerName $subdomain.$domain.$tld
            ServerAdmin admin@$domain.$tld
            DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain"
            <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain">
                ....
            </Directory>
            php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain"
        </VirtualHost>

    $subdomain, $domain, and $tld would be extracted from the HTTP_HOST variable using a regex at request time. No more piles of configuration, no more Apache reloading every x minutes, no more convoluted logic. Notice that I use mpm-itk (the AssignUserId directive), so each virtual host runs as a different user; I do not intend to change this part. So far I have tried:
    - mod_vhost_alias, but this allows dynamic configuration of only the document root.
    - mod_macro, but this still requires the arguments of the vhost to be declared explicitly for each vhost.
    - I have read about mod_vhs and other modules which store configuration in a SQL or LDAP server, which is not acceptable, since there is no need for stored configuration: those 3 necessary arguments can be generated at runtime.
    - I have seen some Perl suggestions like this, but as the author states, $s->add_config would add a directive after every request, thus leading to a memory leak, and $r->add_config seems not to be a feasible solution.
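
    For the document-root part alone, mod_vhost_alias can interpolate pieces of the Host header with no per-site config at all. A sketch (it only covers simple three-label hosts and, as noted above, cannot express AssignUserId or open_basedir, which is the hard part here):

        <VirtualHost *:80>
            UseCanonicalName Off
            # %1 = subdomain, %2 = domain, %3 = tld (breaks for multi-part TLDs like co.uk)
            VirtualDocumentRoot /home/webspaces/%2.%3/subdomains/%1
        </VirtualHost>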

  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

        location /app/ {
            # send to app server without the /app qualifier
            rewrite /app/(.*)$ /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001;
            proxy_redirect http://localhost:9001 http://localhost:9000;
        }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine, but whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the URL directly in the browser via GET /app/any/post/url hits the app server as expected. I found other people online with similar problems and added proxy_set_header Host $http_host, but this hasn't resolved my issue. Any insights are appreciated. Thanks. Full config below:

        server {
            listen 9000; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /home/scott/src/ph-dox/html; # root ../html; TODO: how to do relative paths?
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /app/ {
                # rewrite here sends to app server without the /app qualifier
                rewrite /app/(.*)$ /$1 break;
                proxy_set_header Host $http_host;
                proxy_pass http://localhost:9001;
                proxy_redirect http://localhost:9001 http://localhost:9000;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }
        }
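
    Two hedged things to try. First, if the backend is Django, its APPEND_SLASH behavior is a classic culprit: a GET to a URL missing its trailing slash gets redirected transparently, but a POST cannot be, and the failure surfaces as a 404. Second, proxy_pass with a URI part can do the prefix stripping itself, which avoids any interaction between rewrite ... break and the proxied request line. A sketch of both:

        # compare the verbs directly; the /app/any/post/url path is from the question
        curl -i http://localhost:9000/app/any/post/url
        curl -i -X POST -d 'x=1' http://localhost:9000/app/any/post/url
        # if GET answers with a 301 to .../url/, POST to the slash-terminated URL instead

        # alternative location block: the trailing slash on proxy_pass strips /app/
        location /app/ {
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001/;
        }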

  • Linux Mint reset display resolution from console

    - by wullxz
    I have Linux Mint 13 Xfce in a VMware Workstation 8 VM. I changed the resolution from 800x600 to 1280x768, and now I get permanently logged out when I try to log in. I knew how to get back to my old resolution back in the xorg.conf days, but Linux Mint now uses xrandr, which won't list any displays when running

        # xrandr

    because X is not running (of course not; I can't log in through the GUI). I know there are configuration files in /etc/X11/Xsession.d/, because I once configured a Debian-based thin client's resolution in a file called /etc/X11/Xsession.d/91configure_display, but that file doesn't exist in my Linux Mint VM. So, how do I reset my X screen resolution from the console?
    Edit: I forgot to mention that I can't change the resolution from the console either:

        # xrandr -s 800x600
        Can't open display

    This message appears every time I use xrandr or xrandr -s <resolution>.
    Update: I tried what bWowk suggested:

        # export DISPLAY=:0.0
        # xrandr -s 800x600
        No protocol specified
        No protocol specified
        Can't open display :0.0

    So that doesn't work either. Isn't there a configuration file that is executed every time X starts? X is running, by the way: ps aux | grep X shows one process, /usr/bin/X, running.
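
    Two hedged suggestions. "No protocol specified" means the console shell lacks the X authority cookie of the session that owns the display, so XAUTHORITY must be set alongside DISPLAY. Separately, Xfce re-applies its own stored display settings at login, which would explain the immediate logout loop even after a one-off xrandr fix. A sketch (the user name and home path are placeholders):

        # talk to the running X server as the session owner
        export DISPLAY=:0.0
        export XAUTHORITY=/home/youruser/.Xauthority
        xrandr -s 800x600

        # reset Xfce's remembered display settings so the next login starts clean
        mv /home/youruser/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml \
           /home/youruser/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml.bak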

  • I get a segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
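
    Repeated segfaults across unrelated programs (dpkg maintainer scripts, Apache) often point below the package layer: bad RAM, a corrupted filesystem, or damaged shared libraries. A hedged triage sketch before fighting dpkg any further:

        dmesg | tail -50                                   # kernel messages about the faulting processes
        sudo apt-get install debsums && sudo debsums -s    # verify installed files against package checksums
        # and schedule a RAM test at next boot (the memtest86+ entry in the GRUB menu)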

  • Installing mod_mono on Ubuntu: handler doesn't seem to get registered

    - by Trevor Johns
    I'm trying to install mod_mono on Apache 2 (prefork MPM). I'm using Ubuntu Karmic, and just want an auto-hosting setup (so that any .aspx files are executed, similar to how PHP is normally set up). I did the following to install Mono:

        $ apt-get install libapache2-mod-mono mono-apache-server2 mono-devel
        $ a2dismod mod_mono
        $ a2enmod mod_mono_auto

    I've confirmed that mod_mono is getting loaded by Apache. However, any .aspx pages I try to load are returned unprocessed and still have an application/x-asp-net MIME type. It's as if the mod_mono handler never gets registered with Apache. Here's the contents of /etc/mod_mono_auto.load:

        LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so

    And here's /etc/mod_mono_auto.conf:

        MonoAutoApplication enabled
        AddType application/x-asp-net .aspx
        AddType application/x-asp-net .asmx
        AddType application/x-asp-net .ashx
        AddType application/x-asp-net .asax
        AddType application/x-asp-net .ascx
        AddType application/x-asp-net .soap
        AddType application/x-asp-net .rem
        AddType application/x-asp-net .axd
        AddType application/x-asp-net .cs
        AddType application/x-asp-net .config
        AddType application/x-asp-net .dll
        DirectoryIndex index.aspx
        DirectoryIndex Default.aspx
        DirectoryIndex default.aspx

    I've even tried setting the handler explicitly:

        AddHandler mono .aspx .ascx .asax .ashx .config .cs .asmx .asp

    Nothing seems to help. Any ideas how to get this working?
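
    If auto-hosting refuses to kick in, a hedged fallback is to declare the application explicitly rather than rely on MonoAutoApplication, which makes the handler registration unambiguous (the application path here is illustrative):

        MonoServerPath /usr/bin/mod-mono-server2
        MonoApplications "/:/var/www"
        <Location "/">
            SetHandler mono
        </Location>

    Watching the error log right after a request (tail -f /var/log/apache2/error.log) usually shows whether mod-mono-server2 was spawned at all.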

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get updated in the WC (working copy), as opposed to the normal direction of WC → REPO.
    Scenario:
    - My svn repo: /var/www/svn/drupal
    - My checkout dir / working copy: /var/www/html/drupalsite
    So I've done the following:
    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
    2. I won't make any change to the svn WC; I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes in the REPO.
    Query:
    a. Would steps 1-3 above achieve my requirement?
    b. I'd need advice on how to test whether the above setup works or not. I'm at a loss about the success of steps 1-3, which is why query (a) is present; this is a bit more of a concern for me.
    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it's reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir hoping to see a change in the WC and ran steps 1-3, but to no avail; I later learned that this was NOT the way to make a change to a REPO. Please advise. Thanks.
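
    For query (b), the cleanest test keeps the direction of data flow in mind: a repository is only ever changed by a commit (copying files into /var/www/svn/drupal by hand corrupts it, which is why the manual experiment showed nothing). A hedged test sketch using a second, scratch working copy:

        # scratch checkout somewhere else
        svn checkout file:///var/www/svn/drupal /tmp/wc-test
        cd /tmp/wc-test
        echo "hook test $(date)" > hook-test.txt
        svn add hook-test.txt
        svn commit -m "test post-commit hook"

        # if the hook works, the change appears in the live WC without any manual update
        ls -l /var/www/html/drupalsite/hook-test.txt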

  • Monit - script not being executed as expected

    - by Thaís
    I'm working on a Windows 7 machine, with a VM for Ubuntu (disk image: 12.04-desktop-i386.iso). On the VM I installed Monit 5.3.2 and configured some processes and applications. I created a script to run my application, which should display some content on the screen (I'm basically displaying two images, using feh). The thing is: if I call the script from the command line, it runs OK and displays the images. But if I run it through Monit, it seems to run OK but doesn't display the images. If I attach a remote debugger, then I can see the images. So I supposed this could be some kind of configuration issue, but I didn't find out what (even using the -I option didn't work). I'm showing more details below.
    Piece of the Monit config:

        check program runMediaHandler with path "/usr/bin/runMediaHandler.sh"
            if status == 1 then alert

    runMediaHandler.sh:

        #!/bin/bash
        java -jar /home/thais/Desktop/MediaHandler_RC2.jar

    Summarizing:
    1. What works:
    - running java directly: java -jar /home/thais/Desktop/MediaHandler_RC2.jar
    - running the script directly: runMediaHandler.sh
    - remote debugging, with a breakpoint where the image should be displayed
    2. What does not work:
    - putting that "check program" block above into Monit (even when calling monit -I start runMediaHandler)
    Thank you in advance,
    Thaís
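
    The classic cause of "works in a shell, shows nothing under Monit" for GUI programs is the environment: Monit starts the script outside any X session, so DISPLAY (and usually XAUTHORITY) are unset and feh has no screen to draw on. A hedged version of the wrapper, with the display and cookie path assumed rather than known:

        #!/bin/bash
        # give the process access to the logged-in user's X display
        export DISPLAY=:0.0
        export XAUTHORITY=/home/thais/.Xauthority
        java -jar /home/thais/Desktop/MediaHandler_RC2.jar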

  • No disk I/O but iowait is very high

    - by Dan
    There is no disk I/O going on, per the results of iotop:

        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
        TID   PRIO  USER      DISK READ  DISK WRITE  SWAPIN   IO<     COMMAND
        1     be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  init [3]
        1930  be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
        1931  be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
        1932  be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
        1933  be/4  named     0.00 B/s   0.00 B/s    0.00 %   0.00 %  named -u ~d/run-root
        1810  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sh /usr/b~user=mysql
        9795  be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
        8004  be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
        3226  be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  tlsmgr -l -t unix -u
        8154  be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
        9759  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  find -name php.ini
        9249  be/4  apache    0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd
        1756  be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  psa-pc-re~@localhost
        1863  be/4  mysql     0.00 B/s   0.00 B/s    0.00 %   0.00 %  mysqld --~mysql.sock
        3123  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  crond
        1758  be/4  postfix   0.00 B/s   0.00 B/s    0.00 %   0.00 %  psa-pc-re~@localhost
        1865  be/4  mysql     0.00 B/s   0.00 B/s    0.00 %   0.00 %  mysqld --~mysql.sock
        1592  be/4  sw-cp-se  0.00 B/s   0.00 B/s    0.00 %   0.00 %  sw-cp-ser~ver/config
        7612  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sshd: root@pts/0
        7614  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sftp-server
        7615  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  -bash
        1602  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  sshd
        8003  be/4  root      0.00 B/s   0.00 B/s    0.00 %   0.00 %  httpd

    but iowait is very high. iostat reports:

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   0.83   0.00     0.13    13.83    0.00  85.20

        Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn

    The server runs like a snail. What could be wrong here? Thanks.
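
    Worth remembering, as a hedged interpretation: %iowait only means CPUs are idle while at least one task waits on I/O, and iotop only shows local block-device traffic, so waits on network filesystems, a failing disk retrying reads, or a blocked kernel thread will not appear as DISK READ/WRITE. Finding which processes are actually blocked is usually more telling:

        # tasks in uninterruptible sleep (state D) are the ones stuck on I/O
        ps -eo state,pid,user,wchan:30,cmd | awk '$1=="D"'
        # per-device utilization over time
        iostat -x 2 5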

  • Ubuntu Server: networking fails with MODPROBE option in /etc/network/interfaces

    - by neezer
    For some reason (which I haven't been able to determine yet), yesterday morning the networking service on our web server (running Ubuntu 8.04.2 LTS, hardy) wouldn't start, and our website went down. I noticed the following error message when trying to restart it:

        * Reconfiguring network interfaces...
        /etc/network/interfaces:6: option with empty value
        ifup: couldn't read interfaces file "/etc/network/interfaces"
        ...fail!

    Line 6 of the /etc/network/interfaces file concerned a MODPROBE command, which (I believe) loaded the ip_conntrack_ftp module so that I could use PASV on my FTP server (vsftpd). (The breaking modprobe commands are commented out below.)

        # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
        # /usr/share/doc/ifupdown/examples for more information.

        # The loopback network interface
        auto lo
        iface lo inet loopback
        #MODPROBE=/sbin/modprobe
        #$MODPROBE ip_conntrack_ftp
        pre-up iptables-restore < /etc/iptables.up.rules

        # The primary network interface
        # Uncomment this and configure after the system has booted for the first time
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask 255.255.255.0
            gateway xxx.xxx.xxx.1
            dns-nameservers xxx.xxx.xxx.4 xxx.xxx.xxx.5

    I've verified that there is a file in /sbin called modprobe. Like I said earlier, this setup had been working flawlessly until yesterday morning (though my bosses say that the site actually went down the previous night at 11 PM EST). Can anyone shed some light on (A) why this broke, and (B) how I can re-enable the ip_conntrack_ftp module?
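
    For (A), a hedged reading of the error: the interfaces(5) format only allows stanza keywords and options, so the MODPROBE=... variable assignment is exactly the "option with empty value" ifup is rejecting; if it ever worked, a package update tightening the parser could explain the sudden break. For (B), two standard ways to keep the conntrack helper loading:

        # /etc/modules -- modules loaded at boot, one per line
        ip_conntrack_ftp

    or, staying inside /etc/network/interfaces:

        auto lo
        iface lo inet loopback
            pre-up /sbin/modprobe ip_conntrack_ftp
            pre-up iptables-restore < /etc/iptables.up.rules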

  • How to remove all CouchDB versions in Ubuntu 10.04 (server), after multiple installs?

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB using:

        sudo aptitude install couchdb
        sudo apt-get install couchdb

    and, more recently, based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu. May I know how to uninstall or remove all the above installations? Best regards.
    +++++++++++++++++++ UPDATE ++++++++++++++++++++++++
    I've tried running the following commands:

        apt-get remove couchdb
        apt-get purge couchdb

    but received the following errors:

        (Reading database ... 39814 files and directories currently installed.)
        Removing couchdb ...
        invoke-rc.d: initscript couchdb, action "stop" failed.
        dpkg: error processing couchdb (--remove):
         subprocess installed pre-removal script returned error exit status 1
        invoke-rc.d: initscript couchdb, action "start" failed.
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         couchdb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command dpkg -l | grep couchdb, I received the following response:

        rF  couchdb      0.10.0-1ubuntu2  RESTful document oriented database, system D
        iF  couchdb-bin  0.10.0-1ubuntu2  RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
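
    The removal fails because couchdb's own pre-removal script cannot stop the (half-installed) service, and dpkg refuses to continue. A common, hedged escape hatch is to neutralize the package's maintainer scripts so dpkg can finish (back them up first):

        # make the prerm/postrm scripts always succeed
        sudo sed -i '1a exit 0' /var/lib/dpkg/info/couchdb.prerm
        sudo sed -i '1a exit 0' /var/lib/dpkg/info/couchdb.postrm
        sudo dpkg --remove --force-remove-reinstreq couchdb couchdb-bin
        sudo apt-get purge couchdb couchdb-bin

    A source install per the wiki page lands under /usr/local and is untouched by dpkg; its files would need removing by hand.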

  • VMware Virtual Infrastructure Web Access not starting on Fedora 14

    - by FusionHammer
    I am running a Fedora 14 server with VMware Server 2. When I initially installed VMware Server, everything was working fine, including accessing Web Access on ports 8333 (https) and 8222 (http). I had to perform a server restart, and afterwards Web Access does not start when I run:

        /etc/init.d/vmware-mgmt start

    The VMware Server Host Agent starts [OK], but Web Access has no [OK] or [Failed] next to it. I also checked ports 8333 and 8222 by running netstat, but neither is showing up. In addition, I checked the proxy.log and client.log files in the following directory:

        /usr/lib/vmware/webAccess/tomcat/apache-tomcat-6.0.16/logs/

    The proxy.log doesn't have entries past 9-18-2012, which is the date the server restart occurred, and the client.log is empty. I'm not sure what is causing this issue or which log files would help narrow down the cause. I could re-install VMware Server, but I would like to narrow down the cause first, as it could occur again in the re-installed environment.
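
    A hedged first step, since the init script reports only on the host agent: check whether the Web Access Tomcat instance is launched at all, and whether it logged anything at startup (the catalina.out name is the Tomcat default and may differ):

        ps aux | grep -i webAccess
        tail -n 50 /usr/lib/vmware/webAccess/tomcat/apache-tomcat-6.0.16/logs/catalina.out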

  • Issue with InnoDB engine while re-enabling after [skip-innodb]: [ERROR] Plugin 'InnoDB' init function returned error

    - by Ahn
    How do I enable InnoDB after it was previously disabled with the skip-innodb option?
    Case 1: I disabled InnoDB with the skip-innodb option, and show engines gives:

        Engine | Support ...
        InnoDB | NO ......

    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now show engines is not even showing the InnoDB engine in the list. Why?
    MySQL version: 5.1.57-community-log
    OS: CentOS release 5.7 (Final)
    Log:

        120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
        120622 13:06:36 InnoDB: Completed initialization of buffer pool
        InnoDB: No valid checkpoint found.
        InnoDB: If this error appears when you are creating an InnoDB database,
        InnoDB: the problem may be that during an earlier attempt you managed
        InnoDB: to create the InnoDB data files, but log file creation failed.
        InnoDB: If that is the case, please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
        120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
        120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.57-community-log'  socket: '/data/mysqlsnd/mysql.sock1'  port: 3307  MySQL Community Server (GPL)
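
    "No valid checkpoint found" right after re-enabling InnoDB usually points at stale or mismatched redo logs rather than real data loss: the ib_logfile0/1 files no longer line up with the data files. A hedged recovery sketch (take a backup first, and adjust the datadir; given the socket path above it may not be /var/lib/mysql):

        service mysqld stop
        # move the redo logs aside; InnoDB recreates them at startup
        mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
        mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
        service mysqld start
        mysql -e "SHOW ENGINES" | grep -i innodb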

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance stalls at 92-100 rq/s.
    Nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (Emperor: /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini):

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes (applied with sysctl -p):

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535

    Idle server info:

        top - 13:34:58 up 102 days, 18:35, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3983068k total, 2125088k used, 1857980k free, 262528k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 606996k cached

        free -m:
                             total  used  free  shared  buffers  cached
        Mem:                  3889  2075  1814       0      256     592
        -/+ buffers/cache:          1226  2663
        Swap:                 2055     0  2055

    During the test:

        top - 13:45:21 up 102 days, 18:46, 1 user, load average: 3.73, 1.51, 0.58
        Tasks: 122 total, 8 running, 114 sleeping, 0 stopped, 0 zombie
        Cpu(s): 93.5%us, 5.2%sy, 0.0%ni, 0.2%id, 0.0%wa, 0.1%hi, 1.1%si, 0.0%st
        Mem: 3983068k total, 2127564k used, 1855504k free, 262580k buffers
        Swap: 2104504k total, 0k used, 2104504k free, 608760k cached

        free -m:
                             total  used  free  shared  buffers  cached
        Mem:                  3889  2125  1763       0      256     595
        -/+ buffers/cache:          1274  2615
        Swap:                 2055     0  2055

        iotop:
        30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
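
    A hedged reading of the numbers: during the test the CPUs sit at ~93% user with negligible iowait, so the ceiling is almost certainly the Python application, not Nginx or the kernel; two saturated cores producing ~100 rq/s works out to tens of milliseconds of CPU per request, which is typical of an uncached Django view with templates and ORM queries. Benchmarking each layer separately confirms it (the /static/test.txt and /ping/ URLs are illustrative):

        # static file straight from Nginx: should reach thousands of rq/s
        ab -n 1000 -c 100 http://127.0.0.1/static/test.txt

        # a trivial Django view (return HttpResponse("ok")) through the full stack
        ab -n 1000 -c 100 http://127.0.0.1/ping/

    If the trivial view is fast and the real page is not, caching and query reduction will buy far more than kernel or uWSGI tuning.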

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix things, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is
        disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT
        WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating
        the /etc/udev/kernel-upgrade file.
        There is always a safer way to upgrade, do not try this unless you understand what you
        are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!
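
    The insserv lines about vboxadd-x11 are only warnings; the actual failure is udev's preinst enforcing exactly what it prints: the running lenny kernel is too old for udev 151. A hedged sketch of the usual order of operations (the kernel metapackage name varies by release and flavour):

        # install a current kernel first, then reboot into it
        apt-get install linux-image-amd64
        reboot

        # with the new kernel running, the udev upgrade should proceed
        apt-get -f install

        # guest additions were built against the old kernel; rebuild them afterwards
        /etc/init.d/vboxadd setup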

  • check_snmp with SNMPv3 protocol giving "Unknown Report message" error

    - by John
    I'm trying to add a Nagios command to use SNMPv3 for monitoring printer status messages. When using the check_snmp command, I get the following error:

        External command error: snmpget: Unknown Report message

    Here is the command I'm typing in:

        ./check_snmp -P 3 -H <hostname> -L authPriv -U snmpuser -A snmppassword -X snmppassword \
            -o 1.3.6.1.4.1.11.2.4.3.1.2.0 -C public -d "STRING:" -a MD5

    These values for the auth key, private key, username, etc. all work when using snmpwalk. Can someone enlighten me as to what that error message really means?
    EDIT: It looks like check_snmp isn't passing my v3 credentials to snmpget. Here is my input with the verbose option:

        ./check_snmp -H <hostname> -o 1.3.6.1.2.1.2.2.1.10.1 -C public -m ALL -P 3 -L authPriv \
            -U snmpuser -a MD5 -A snmppassword -x DES -X snmppassword -v

    And here is the output:

        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 3 [authpriv] <hostname>:161 1.3.6.1.2.1.2.2.1.10.1
        External command error: snmpget: Unknown Report message

    So I guess now my question would be: why isn't check_snmp passing all the command-line options to snmpget?
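
    "Unknown Report message" is net-snmp's way of saying the agent answered with a Report PDU the library could not classify; in practice this usually means an SNMPv3 handshake problem, such as a mismatched auth/priv protocol, an engine time/boots discrepancy, or credentials not reaching snmpget at all (note the [authpriv] placeholder in the verbose output may simply be the plugin masking them). A hedged check is to replay the exact request by hand with everything explicit:

        snmpget -v 3 -l authPriv -u snmpuser -a MD5 -A snmppassword \
                -x DES -X snmppassword <hostname> 1.3.6.1.4.1.11.2.4.3.1.2.0

    If that works while check_snmp fails, compare it to the command line the plugin printed; also, -C public is a v1/v2c community string and is not used for v3, so it can be dropped.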

  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week the VM stopped unexpectedly, and now MySQL will not start on it. These two events may or may not be related. When I try to connect:

        $ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    Then:

        $ sudo service mysql start
        start: Job failed to start

    And:

        $ dmesg
        [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
        [ 1838.358656] init: mysql main process (18477) terminated with status 1
        [ 1838.358695] init: mysql main process ended, respawning
        [ 1839.269303] init: mysql post-start process (18478) terminated with status 1

    And:

        $ service mysql status
        mysql stop/waiting

    I think this means MySQL is crashing when it starts:

        $ sudo mysqld start
        130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
        InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        02:51:24 UTC - mysqld got signal 6 ;

    Per the manual, I went to the data directory (/var/lib/mysql) and ran:

        myisamchk --silent --force */*.MYI

    Then:

        $ sudo mysqld
        ...
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: for more information.
        ...

    Is my database corrupt? What can I do to recover? Re-install MySQL? Something less drastic? I'm fine with losing the database; I just want a working system.
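
    The assertion in fut0lst.ic during startup is InnoDB tripping over corrupted pages, quite plausibly from the VM dying mid-write; myisamchk only repairs MyISAM tables and cannot help here. Since losing the data is acceptable, the standard ladder (a hedged sketch) is forced recovery for a salvage dump, then reinitialization:

        # /etc/mysql/my.cnf, [mysqld] section; raise 1..6 until mysqld stays up
        innodb_force_recovery = 1

        # if it starts, salvage what is wanted, then rebuild the data directory
        mysqldump --all-databases > /root/salvage.sql
        sudo service mysql stop
        sudo mv /var/lib/mysql /var/lib/mysql.broken
        sudo mysql_install_db --user=mysql
        sudo service mysql start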

  • Running jackd on Ubuntu for my external FireWire sound card

    - by Asaf
    Hello, I'm running Ubuntu 10.04 and I have an external sound card, a Phonic Firefly 302. I connected the device, installed JACK (jackd 0.118.0), and added the lines:

        @audio - rtprio 99
        @audio - memlock 500000
        @audio - nice -10

    to /etc/security/limits.conf. I logged out, logged back in, ran qjackctl (sudo qjackctl, to be exact), chose "firewire" as the driver in the settings, and pressed "Start". This was the output:

        20:10:19.450 Patchbay deactivated.
        20:10:19.578 Statistics reset.
        20:10:19.601 ALSA connection graph change.
        20:10:19.828 ALSA connection change.
        20:10:21.293 Startup script...
        20:10:21.293 artsshell -q terminate
        sh: artsshell: not found
        20:10:21.695 Startup script terminated with exit status=32512.
        20:10:21.695 JACK is starting...
        20:10:21.695 /usr/bin/jackd -dfirewire -r44100 -p1024 -n3
        jackd 0.118.0
        Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
        jackd comes with ABSOLUTELY NO WARRANTY
        This is free software, and you are welcome to redistribute it
        under certain conditions; see the file COPYING for details
        20:10:21.704 JACK was started with PID=22176.
        no message buffer overruns
        JACK compiled with System V SHM support.
        loading driver ..
        libffado 2.0.0 built Mar 31 2010 14:47:42
        firewire ERR: Error creating FFADO streaming device
        cannot load driver module firewire
        no message buffer overruns
        20:10:21.819 JACK was stopped successfully.
        20:10:21.819 Post-shutdown script...
        20:10:21.822 killall jackd
        jackd: no process found
        20:10:22.230 Post-shutdown script terminated with exit status=256.
        20:10:23.865 Could not connect to JACK server as client.
        - Overall operation failed.
        - Unable to connect to server.
        Please check the messages window for more info.
        Error: "/tmp/kde-asaf" is owned by uid 1000 instead of uid 0.
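
    Two hedged leads from that log. Running qjackctl under sudo works against the setup: the @audio limits apply to members of the audio group, and the final "owned by uid 1000 instead of uid 0" error is a direct symptom of mixing root and the normal user. And "Error creating FFADO streaming device" often comes down to permissions on the FireWire device nodes. A sketch:

        # run as the normal user, after adding it to the audio group
        sudo usermod -a -G audio $USER
        # log out and back in, then check the device nodes FFADO will open
        ls -l /dev/raw1394 /dev/fw*
        # test the interface outside JACK (if the FFADO tools are installed)
        ffado-test ListDevices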

  • Web SMTP server (foo.com) will not send mail to Exchange server which is also foo.com

    - by Atom
    I think I understand this problem fully, but I do not know how to approach it or where to go in terms of troubleshooting. I've got a domain, http://foo.com, that runs a Zen Cart installation which needs to be able to send emails to users (order confirmation, password reset). This works fine when sending to any other domain BUT foo.com. I also run a locally hosted Exchange server that is foo.com, and we can send and receive email there just fine. If I run tail -f /usr/local/psa/var/log/maillog, I receive this error:

        Apr 1 10:08:51 foo qmail-local-handlers[25824]: Handlers Filter before-local for qmail started ...
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: from=
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: [email protected]
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: cannot reinject message to '[email protected]'
        Apr 1 10:08:51 foo qmail: 1270141731.583139 delivery 32410: failure: This_address_no_longer_accepts_mail./
        Apr 1 10:08:51 foo qmail: 1270141731.584098 status: local 0/10 remote 0/20

    I understand that the foo.com SMTP service doesn't have any mailboxes except the one used to authenticate mail being sent, so of course I understand why it's saying "this address no longer accepts mail". My question is: how can I get the foo.com (web) SMTP service to hand off emails meant for my Exchange server ([email protected]), or how do I handle the mail that needs to be sent to our Exchange server? Is this something to do with MX records? Thanks in advance, A
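
    A hedged diagnosis from the log: qmail on the web server classifies foo.com as a local domain, so it tries to deliver to a local mailbox (and fails) rather than looking up the MX record, which points at Exchange; MX records are never consulted for domains a mail server believes it hosts. On a Plesk server the usual fix is to stop treating the domain as local, e.g. switch off the mail service for that domain in the panel, and then verify the qmail control files no longer claim it:

        grep -l "foo.com" /var/qmail/control/locals \
                          /var/qmail/control/virtualdomains \
                          /var/qmail/control/rcpthosts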
