Search Results

Search found 30329 results on 1214 pages for 'ubuntu forums'.


  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load the page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on my Apache installation on my laptop. The only things to my knowledge that I've changed are that I recently installed mod_spdy and then, a few weeks later, installed mod_pagespeed, https://developers.google.com/speed/pagespeed/mod. However, I have since turned mod_pagespeed off by setting its pagespeed.conf to mod_pagespeed off. Unfortunately, that didn't solve the problem. The lines below are how the last 10 lines of my error.log look:

        # tail -f /var/log/apache2/error.log
        ...
        [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1
        [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5] [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk

    Any thoughts? Thank you, tim
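
    A note on the disabling step, offered as a sketch rather than a confirmed fix: on a stock Ubuntu install the module can also be switched off at the Apache level, which rules out a half-applied pagespeed.conf (the a2dismod module name is assumed from the standard mod_pagespeed package layout).

        # Assumes the Debian/Ubuntu module layout installed by the mod_pagespeed package
        sudo a2dismod pagespeed      # removes the LoadModule symlink instead of relying on pagespeed.conf
        sudo service apache2 restart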

    Read the article

  • Apache Reverse Proxy not working inside a VirtualHost running a Mono Web Application

    - by Arwen
    I have a Mono web application running with the virtual host below. It is running on Apache 2.2.20 / Ubuntu 11.10. I tried to add a reverse proxy inside this virtual host so I can make asynchronous or AJAX-type calls back to this same domain. My asynchronous requests would have problems in many browsers calling services that are on another domain (the cross-domain request problem). I want to do reverse proxy calls to this other service using http://www.whatever.com/monkey/. So, I added the <Location /monkey/> directive and the top <Proxy *> directive to try to make this work. It is weird though... nothing I do seems to have any effect. I can put the exact same markup in my default website virtual host file and it works great. What is the deal? Are some of these Mono directives causing problems?

        <VirtualHost *:80>
            ServerName www.whatever.com
            ServerAlias whatever.com *.whatever.com
            ServerAdmin [email protected]
            DocumentRoot /home/myuser/web/whatever

            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            <Location /monkey/>
                ProxyPass http://www.google.com/
                ProxyPassReverse http://www.google.com/
            </Location>

            MonoServerPath www.whatever.com "/usr/bin/mod-mono-server2"
            MonoSetEnv www.whatever.com MONO_IOMAP=all
            MonoApplications www.whatever.com "/:/home/myuser/web/whatever"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias www.whatever.com
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>
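
    A sketch of one thing to try, under the assumption that the catch-all <Location "/"> mono handler is what swallows the /monkey/ requests before mod_proxy sees them: reset the handler inside the proxied location.

        <Location /monkey/>
            SetHandler None        # undo the SetHandler mono inherited from <Location "/">
            ProxyPass http://www.google.com/
            ProxyPassReverse http://www.google.com/
        </Location>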

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey
        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
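
    A debugging sketch, under the assumption that the silent run means the pipeline itself produced nothing: run it up to (but not including) the xargs stage and see whether anything is ever handed to cli53. The stray "$" before /etc and the "/user/bin" path in the script above also look suspect.

        # If this prints nothing, the perl matching (or the describe-instances call) is the problem,
        # not Route53 or the firewall.
        /usr/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ and print "$1 $2\n"'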

    Read the article

  • Solr error; JNDI not configured for solr; Anybody know what this means?

    - by Camran
    I am installing Solr on my VPS (Ubuntu 9.10) via PuTTY. First, I thought about installing Solr with Tomcat, but after installing Tomcat I changed my mind and went for the Jetty which comes with Solr. Now that I have set up everything on my server and try to start the "start.jar" file, I get some errors... Here is some text from the log file:

        2010-05-29 00:22:42.074::INFO:  jetty-6.1.3
        2010-05-29 00:22:42.134::INFO:  Extract jar:file:/var/www/webapps/solr.war!/ to /var/www/work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'
        May 29, 2010 12:22:42 AM org.apache.solr.servlet.SolrDispatchFilter init
        INFO: SolrDispatchFilter.init()
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.CoreContainer$Initializer initialize
        INFO: looking for solr.xml: /var/www/solr/solr.xml
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'

    Anybody know what this is? Thanks
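
    For what it's worth, those lines are INFO-level: Solr is only reporting that it found no JNDI entry or system property and fell back to the relative 'solr/' directory. A sketch of making the Solr home explicit (the /var/www path is assumed from the log above):

        # Start Jetty with an explicit Solr home instead of relying on the 'solr/' fallback
        cd /var/www
        java -Dsolr.solr.home=/var/www/solr -jar start.jar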

    Read the article

  • Page reload needed several times before loading normally

    - by tim peterson
    Sorry my question is so vague, I just have no idea where to start in solving it and am quite a novice with servers. Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load the page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on my Apache installation on my laptop. The only thing to my knowledge that I changed is that I installed mod_pagespeed, https://developers.google.com/speed/pagespeed/mod. However, I have since turned it off by setting my pagespeed.conf to mod_pagespeed off. Unfortunately, that didn't solve the problem. I'm looking for general advice on how to troubleshoot this problem. For instance, are there Linux commands to check page loading performance? Also, it looks like I have lots of new error logs in my /var/log/apache2 directory which I believe weren't there a few months ago. Lots of this:

        error.log            RewriteLog.log.24.gz    ssl_access.log.40.gz
        error.log.1          RewriteLog.log.25.gz    ssl_access.log.41.gz
        error.log.10.gz      RewriteLog.log.26.gz    ssl_access.log.42.gz
        error.log.11.gz      RewriteLog.log.27.gz

    Any thoughts? Thank you, tim
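
    On the "commands to check page loading performance" question, a small sketch (the URL is a placeholder): curl can time each phase of a request from the server itself, which separates DNS, TLS and connection stalls from a slow response.

        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s tls:%{time_appconnect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' \
          https://www.example.com/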

    Read the article

  • VSFTP Users and Directories

    - by Mathew
    I'm stuck. I've been working all day on trying to figure out what I'm doing wrong and I've hit wall after wall. What I'm trying to do: set up FTP in such a way that certain users have access only to their directory, but higher-level users have access to all directories. What I've Googled so far: I started with this, but that didn't do what I needed it to. I then used this, but once I created one user, it wouldn't let me create another one. Finally, I decided to follow this, but it wouldn't let me even create one user. I'm using Ubuntu 10. I can log in to FTP as a root user and it takes me to the home directory. If I try to log in using the user I created in the tutorial, it says:

        Status:   Connection established, waiting for welcome message...
        Response: 220 (vsFTPd 2.2.2)
        Command:  USER mathew
        Response: 331 Please specify the password.
        Command:  PASS ****
        Response: 530 Login incorrect.
        Error:    Critical error
        Error:    Could not connect to server
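
    A small sketch of where to look next, assuming the vsftpd defaults on Ubuntu: the 530 is usually explained in the PAM log, and it's worth confirming the account actually exists as whatever kind of user (local or virtual) the tutorial created.

        sudo tail -n 20 /var/log/auth.log     # PAM's reason for rejecting the login, if any
        getent passwd mathew                  # does the user exist as a local account?
        grep -E 'pam_service_name|chroot|local_enable|guest' /etc/vsftpd.conf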

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        anon_upload_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        virtual_use_local_privs=YES
        guest_enable=YES
        guest_username=virtual
        user_sub_token=$USER
        local_root=/var/www/$USER
        hide_ids=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

    /etc/pam.d/vsftpd contains:

        auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
        account required pam_permit.so crypt=hash

    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user 'virtual'. Using this configuration users can log on, but cannot upload files. The error given in /var/log/vsftpd.log is:

        Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"

    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?
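
    Reading that log, the only FAIL line is a CHMOD, which many FTP clients issue automatically after an upload; the upload itself may be a separate question from the failed chmod. A quick check, sketched under the assumption that the guest mapping is to the local user 'virtual':

        ls -ld /var/www/zac                              # should be writable by 'virtual'
        sudo -u virtual touch /var/www/zac/writetest     # can the mapped user create files at all?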

    Read the article

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem=min:512M,max:512M in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:        432472 kB
        MemFree:          54144 kB
        Buffers:          17640 kB
        Cached:          220104 kB
        SwapCached:       30172 kB
        Active:          136500 kB
        Inactive:        167780 kB
        Active(anon):      6156 kB
        Inactive(anon):   60516 kB
        Active(file):    130344 kB
        Inactive(file):  107264 kB
        Unevictable:         52 kB
        Mlocked:             52 kB
        SwapTotal:      1794044 kB
        SwapFree:       1682012 kB
        Dirty:                0 kB
        Writeback:            0 kB
        AnonPages:        39572 kB
        Mapped:            8048 kB
        Shmem:              136 kB
        Slab:             44324 kB
        SReclaimable:     22012 kB
        SUnreclaim:       22312 kB
        KernelStack:       1280 kB
        PageTables:        3840 kB
        NFS_Unstable:         0 kB
        Bounce:               0 kB
        WritebackTmp:         0 kB
        CommitLimit:    2010280 kB
        Committed_AS:    329192 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:     313988 kB
        VmallocChunk:   34359417340 kB
        HardwareCorrupted:    0 kB
        AnonHugePages:        0 kB
        HugePages_Total:      0
        HugePages_Free:       0
        HugePages_Rsvd:       0
        HugePages_Surp:       0
        Hugepagesize:      2048 kB
        DirectMap4k:     524696 kB
        DirectMap2M:          0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because of the onboard graphics borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount that high relative to the amount of RAM available?
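
    A sketch of how to see where the missing ~90 MB went, assuming the stock Xen tools on 12.04: compare what the hypervisor actually allocated to dom0 with the kernel's own boot-time accounting. MemTotal excludes whatever the kernel reserves for its own code and page structures, so it normally sits below the allocation.

        sudo xl list                 # the 'Mem' column is what Xen gave dom0 (xm list on the older toolstack)
        dmesg | grep -i 'Memory:'    # the kernel's free-vs-reserved breakdown at boot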

    Read the article

  • saslauthd authentication error

    - by James
    My server has developed an unexpected problem where I am unable to connect from a mail client. I've looked at the server logs, and the only thing that looks to identify a problem are events like the following:

        Nov 23 18:32:43 hig3 dovecot: imap-login: Login: user=, method=PLAIN, rip=xxxxxxxx, lip=xxxxxxx, TLS
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: connect from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: xxxxxxx.co.uk[xxxxxxxx]: SASL LOGIN authentication failed: generic failure
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: lost connection after AUTH from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: disconnect from xxxxxxx.co.uk[xxxxxxx]

    The problem is unusual, because just half an hour previously at my office, I was not being prompted for a correct username and password in my mail client. I haven't made any changes to the server, so I can't understand what would have happened to make this error occur. Searches for the error messages yield various results, with 'fixes' that I'm uncertain of (I obviously don't want to make it worse or fix something that isn't broken). When I run

        testsaslauthd -u xxxxx -p xxxxxx

    I also get the following result:

        connect() : No such file or directory

    But when I run

        testsaslauthd -u xxxxx -p xxxxxx -f /var/spool/postfix/var/run/saslauthd/mux -s smtp

    I get:

        0: OK "Success."

    I found those commands on another forum and am not entirely sure what they mean, but I'm hoping they might give an indication of where the problem might lie. If it makes any difference, I'm running Ubuntu 10.04.1, Postfix 2.7.0 and Webmin/Virtualmin.
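
    Those two testsaslauthd results already localise it: authentication succeeds against the socket under Postfix's chroot and fails against the default path. A sketch of the usual Ubuntu arrangement, offered as an assumption about this setup rather than a confirmed fix, is to keep saslauthd's socket inside /var/spool/postfix and make sure Postfix can reach it:

        # /etc/default/saslauthd -- run saslauthd with its socket inside the Postfix chroot
        #   OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
        sudo adduser postfix sasl
        sudo service saslauthd restart && sudo service postfix restart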

    Read the article

  • Kate gives out debug messages on the console from which it is started

    - by Elan
    I am new to Linux. I use Ubuntu 11.04. Whenever I open a file with Kate from the command line, with 'kate &' (or without the ampersand), Kate starts giving out messages on the console. It continuously gives them out as I save a file or close one. They look like debug messages to me (sample below). I have used the Synaptic package manager to install Kate. Uninstalling and installing the dev version did not make any change. Soon my console becomes cluttered. Is there a way to suppress these messages? There was nothing explicit in Kate's settings either. Thank you. The messages look like:

        kate(13412)/kate-filetree KateFileTreeModel::handleInsert: BEGIN!
        kate(13412)/kate-filetree KateFileTreeModel::handleInsert: creating a new root
        kate(13412)/kate-filetree ProxyItem::ProxyItem: ProxyItem(0x1796840,0x0,-1,QObject(0x0) ....
        kate(13435)/kate-filetree KateFileTreeModel::documentActivated: adding viewHistory ProxyItem(0x1eb7cf0,0x1eb6830,0,KateDocument(0x1d93ea0) , "Untitled" )
        kate(13435)/kate-filetree KateFileTreeModel::updateBackgrounds: BEGIN!
        kate(13435)/kate-filetree KateFileTreeModel::updateBackgrounds: END!
        kate(13435)/kate-filetree KateFileTreeModel::documentActivated: END!
        kate(13435)/kate-filetree KateFileTreePluginView::viewChanged: END!
        X Error: BadWindow (invalid Window parameter) 3
          Major opcode: 20 (X_GetProperty)
          Resource id:  0x5601b42
        X Error: BadWindow (invalid Window parameter) 3
          Major opcode: 20 (X_GetProperty)
          Resource id:  0x5601b42
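
    Two workarounds worth sketching, neither of which is a Kate setting: discard the chatter for a single invocation, or (if the KDE runtime tools are installed) switch the kate debug areas off globally.

        kate somefile.txt 2>/dev/null &   # stderr, where the debug output goes, is thrown away
        kdebugdialog                      # KDE's per-area debug toggle; untick the kate-* areas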

    Read the article

  • nginx does not use variables set in /etc/environment on system reboot, but does when restarted from shell

    - by Dave Nolan
    I have a Rails app running on nginx/Passenger. It restarts happily in a shell using sudo /etc/init.d/nginx stop|start|restart. But Passenger throws an error when the system is rebooted: "Missing the Rails #{version} gem". But GEM_HOME and GEM_PATH are both set in /etc/environment, so surely they would be available to all processes during reboot?

    /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games"
        GEM_HOME=/var/lib/gems/1.8
        GEM_PATH=/var/lib/gems/1.8

    /etc/init.d/nginx:

        #! /bin/sh

        ### BEGIN INIT INFO
        # Provides:          nginx
        # Required-Start:    $all
        # Required-Stop:     $all
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the nginx web server
        # Description:       starts nginx using start-stop-daemon
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/opt/nginx/sbin/nginx
        NAME=nginx
        DESC=nginx

        test -x $DAEMON || exit 0

        # Include nginx defaults if available
        if [ -f /etc/default/nginx ] ; then
          . /etc/default/nginx
        fi

        set -e

        case "$1" in
          start)
            echo -n "Starting $DESC: "
            start-stop-daemon --start --quiet --pidfile /var/log/nginx/$NAME.pid \
              --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          stop)
            echo -n "Stopping $DESC: "
            start-stop-daemon --stop --quiet --pidfile /var/log/nginx/$NAME.pid \
              --exec $DAEMON
            echo "$NAME."
            ;;
          restart|force-reload)
            echo -n "Restarting $DESC: "
            start-stop-daemon --stop --quiet --pidfile \
              /var/log/nginx/$NAME.pid --exec $DAEMON
            sleep 1
            start-stop-daemon --start --quiet --pidfile \
              /var/log/nginx/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          reload)
            echo -n "Reloading $DESC configuration: "
            start-stop-daemon --stop --signal HUP --quiet --pidfile /var/log/nginx/$NAME.pid \
              --exec $DAEMON
            echo "$NAME."
            ;;
          *)
            N=/etc/init.d/$NAME
            echo "Usage: $N {start|stop|restart|force-reload}" >&2
            exit 1
            ;;
        esac

        exit 0

        $ /opt/nginx/sbin/nginx -v
        nginx version: nginx/0.7.67

    Ubuntu lucid
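
    One detail worth noting as an assumption rather than a diagnosis: /etc/environment is applied by PAM on login sessions, and an init script run at boot never goes through PAM, which would explain the difference between a shell restart and a reboot. Since the script above already sources /etc/default/nginx, a sketch of one fix is to export the gem variables there:

        # /etc/default/nginx (sourced by the init script before start-stop-daemon runs)
        export GEM_HOME=/var/lib/gems/1.8
        export GEM_PATH=/var/lib/gems/1.8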

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by plua
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_TIME = "custom.UTF-8",
                LANG = "en_US.UTF-8"

    Tried the following:

    - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment
    - Run apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct)
    - Run sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it updates all locales including en_US.UTF-8
    - sudo locale-gen updates all locales successfully, including en_US.UTF-8
    - sudo locale-gen un_US en_US.UTF-8 gives no error nor other output
    - In /etc/default/locale it says LANG="en_US.UTF-8"
    - echo $LANG gives en_US.UTF-8
    - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8
    - locale -a gives me:

        C
        en_AG
        en_AU.utf8
        en_BW.utf8
        en_CA.utf8
        en_DK.utf8
        en_GB.utf8
        en_HK.utf8
        en_IE.utf8
        en_IN
        en_NG
        en_NZ.utf8
        en_PH.utf8
        en_SG.utf8
        en_US.utf8
        en_ZA.utf8
        en_ZW.utf8
        POSIX

    So well... I am pretty much out of options I can think of. Anybody any idea?? Thanks!
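
    One observation, offered as an assumption: the only category the warning flags is LC_TIME = "custom.UTF-8", a locale that doesn't appear in the locale -a output. A sketch of clearing it on Ubuntu:

        # Point every category at a locale that actually exists, then re-check in a fresh login
        sudo update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_TIME=en_US.UTF-8
        locale    # LC_TIME should no longer mention "custom.UTF-8"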

    Read the article

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up. When that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB) but I'm a bit unsure of how best to make the transition. For a start, I don't have an external monitor. If I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FW800 2.5" cases, boot from the 64 GB SSD and clone it to the 128 GB SSD. Seems a bit excessive, but it might be my best option. I do have more than one SATA port on my server, but they're all currently being used for storage drives. They don't mount by default, so I could unplug them and just have the two SSDs and do the whole thing via SSH. This is another option I'm considering. My main concern with either is how best to make sure everything goes across. I want a carbon copy of the first one onto the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04. The iMac is on 10.8.1.

    Read the article

  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes. At least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address is known, as well as the port. I haven't actually tried port scanning a specific IP using nmap or another tool yet. So I've tried one solution to remove the contribution of the router entirely (to verify that it works without the influence of the routers). I set up an access point using my iPhone and tethered my computer (with the library) to it. From there, I was able to pair my library and the iPhone Remote application. Control of the library was normal as well. This solution is not ideal, however, because I am actively using bandwidth with my computer and cannot afford to be tethered to my 3G connection. A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server is set up with pptpd as the VPN server; I'm using iptables to perform IP masquerading and forward traffic. However, I still cannot connect the library to the phone. I can, however, see both devices on the VPN subnet (192.168.0.0/24). SSH'ing and such works fine. What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
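
    On the last question, a sketch under the assumption of a stock pptpd/ppp setup: PPTP doesn't see client MAC addresses, but per-client static addresses can be pinned in the last column of /etc/ppp/chap-secrets (names, secrets and addresses below are placeholders within the 192.168.0.0/24 range already in use).

        # /etc/ppp/chap-secrets
        # client         server   secret        IP addresses
        imac-library     pptpd    librarypass   192.168.0.10
        iphone-remote    pptpd    iphonepass    192.168.0.11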

    Read the article

  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySQL, Apache and the Play Framework 2.0 on a 64m initial & max heap. If I increase the max heap of the JVM that Play is running in, even by 16m, I see the 128m of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM will die. I assume this is because the JVM is not asking for memory at the time so gets collected. I notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure Apache to consume less memory? Also, what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM?

    Server running: Ubuntu 12.04
    RAM: 408m
    Swap: 128m
    Apache mods:

        alias.conf alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load
        authz_host.load authz_user.load autoindex.conf autoindex.load cgi.load deflate.conf deflate.load
        dir.conf dir.load env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf
        php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load
        proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf reqtimeout.load
        rewrite.load setenvif.conf setenvif.load status.conf status.load
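
    On the "configure Apache to consume less memory" part, a sketch under the assumption that mod_php (php5.load above) means the prefork MPM is in use: capping the number of children is the usual lever on a ~400 MB box, since each PHP-loaded child can easily take tens of megabytes. The values below are illustrative, not tuned.

        # /etc/apache2/apache2.conf
        <IfModule mpm_prefork_module>
            StartServers            2
            MinSpareServers         2
            MaxSpareServers         4
            MaxClients              8
            MaxRequestsPerChild   500
        </IfModule>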

    Read the article

  • What's hogging my CPU?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously, too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. I try killing lots of things, but it continues at 100%. How can I find out what's hogging the CPU?

    This is an unusual situation on a computer I use daily, that normally only hits 100% CPU when I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. It shouldn't be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading.

    Update: I tried killing any process I saw active again, and killing vino-server finally fixed the problem, even though it never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). How did it manage to use 100% CPU while top only showed it as 5% or so? How do I identify the culprit in the future? Looks like I'm not the only one: still a problem in both jaunty & karmic. Interestingly, both System Monitor & htop do not show the sum of individual processes being anywhere near 100% CPU.
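
    On the "how do I identify the culprit in the future" question, one small sketch: sort by cumulative CPU time rather than the instantaneous percentage, which is how a process like vino-server can dominate the total while never topping the per-second view.

        ps -eo pid,comm,cputime,pcpu --sort=-cputime | head -n 15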

    Read the article

  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there:

        mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid

    However, if I try to mount it via fstab I get an error. The fstab entry is:

        //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0 auto

    The .smbcredentials file looks like this:

        username=username
        password=pass\!word

    Note the ! in my password... which I am escaping in both instances. I also made sure there are no EOL characters in the file, using :set noeol binary from "Mount CIFS Credentials File has Special Character". chmod on the .smbcredentials file is 0600 and chown is root:root; the file is under ~/. Why can I get in the one way and not with fstab?? I am running Ubuntu 12 LTS and mount.cifs -V gives me:

        mount.cifs version: 5.1

    Any help and suggestions would be appreciated ...

    UPDATE: /var/log/syslog shows the following:

        [26630.509396]  Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
        [26630.509407] CIFS VFS: Send error in SessSetup = -13
        [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13

    UPDATE no 2: Debugging with strace. Mount through fstab:

        strace -f -e trace=mount mount -a
        Process 4984 attached
        Process 4983 suspended
        Process 4985 attached
        Process 4984 suspended
        Process 4984 resumed
        Process 4985 detached
        [pid 4984] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied)
        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        Process 4983 resumed
        Process 4984 detached

    Mount through terminal:

        strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid
        Process 4990 attached
        Process 4989 suspended
        Process 4991 attached
        Process 4990 suspended
        Process 4990 resumed
        Process 4991 detached
        [pid 4990] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0
        Process 4989 resumed
        Process 4990 detached
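
    Given the NT_STATUS_LOGON_FAILURE, one assumption worth testing: mount.cifs reads the credentials file literally, so the backslash that protects the ! from the shell on the command line becomes part of the password when it appears in .smbcredentials. A sketch of the file without the escape:

        # /home/username/.smbcredentials -- the password is taken verbatim, no shell escaping
        username=username
        password=pass!word

    After editing, sudo mount -a retests the fstab entry without a reboot.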

    Read the article

  • How can I automatically restart Apache and Varnish if can't fetch a file?

    - by Tyler
    I need to restart Apache and Varnish and email some logs when the script can't fetch robots.txt, but I am getting an error:

        ./healthcheck: 43 [[: not found

    My server is Ubuntu 12.04 64-bit. The script:

        #!/bin/sh
        # Check if can fetch robots.txt if not then restart Apache and Varnish
        # Send last few lines of logs with date via email
        PATH=/bin:/usr/bin
        THEDIR=/tmp/web-server-health
        [email protected]
        mkdir -p $THEDIR
        if ( wget --timeout=30 -q -P $THEDIR http://website.com/robots.txt )
        then
            # we are up
            touch ~/.apache-was-up
        else
            # down! but if it was down already, don't keep spamming
            if [[ -f ~/.apache-was-up ]]
            then
                # write a nice e-mail
                echo -n "Web server down at " > $THEDIR/mail
                date >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "Apache Log:" >> $THEDIR/mail
                tail -n 30 /var/log/apache2/error.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "AUTH Log:" >> $THEDIR/mail
                tail -n 30 /var/log/auth.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                # kick apache
                echo "Now kicking apache..." >> $THEDIR/mail
                /etc/init.d/varnish stop >> $THEDIR/mail 2>&1
                killall -9 varnishd >> $THEDIR/mail 2>&1
                /etc/init.d/varnish start >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 stop >> $THEDIR/mail 2>&1
                killall -9 apache2 >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 start >> $THEDIR/mail 2>&1
                # prepare the mail
                echo >> $THEDIR/mail
                echo "Good luck troubleshooting!" >> $THEDIR/mail
                # send the mail
                sendemail -o message-content-type=html -f [email protected] -t $EMAIL -u ALARM -m < $THEDIR/mail
                rm ~/.apache-was-up
            fi
        fi
        rm -rf $THEDIR
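
    A sketch of what the error message itself points at, offered as the likely cause rather than a confirmed one: the shebang is /bin/sh, which on Ubuntu 12.04 is dash, and dash has no [[ builtin.

        sh -c '[[ -f /etc/hostname ]] && echo yes'     # reproduces "[[: not found" under dash
        bash -c '[[ -f /etc/hostname ]] && echo yes'   # works under bash
        # so either change the shebang to #!/bin/bash or use the POSIX form: [ -f ~/.apache-was-up ]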

    Read the article

  • Acer Aspire One getting extremely hot

    - by ascom
    I have an Acer Aspire One D250-1197. I really better type fast before it overheats again... For some reason, I'm having a problem with heat on my netbook only when I run Joli OS (Ubuntu 9.10 LTS?). When I leave it idle, with nothing running (other than the regular Joli OS desktop and a couple of doing-nothing terminals), heat slowly builds up to the point where the netbook is burning hot to the touch. I have never had this problem when running Windows 7 Starter (even though it gives me plenty of other headaches). It seems that the fan is spinning, but not fast enough to keep up with the heat buildup. Is there something wrong with the fan drivers? The computer doesn't seem to recognize that it is overheating. What can I do to solve this problem (other than shut it off or use Windows)? I'm currently on the wrong side of Earth (I mean, on vacation), so I just need a temporary fix, such as a driver I can install. Also, I have to use Linux, because I have to share out the wired connection in hotels wirelessly to the iPhones. EDIT: I'm switching from Joli OS to a more "proper" and up to date distribution (Xubuntu 13.04). I'll see if it still has the heat problem and try @nod's cpufreq idea.

    Read the article

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes: I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, I get the email returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd into it. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the shared parent domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full name (mail.example.com) and, if it doesn't find it, send to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
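
    A sketch of where to look first, based on the guess in the question: if ex.example.com (or a catch-all entry for example.com) appears in any of the lists below, Postfix treats it as a local or virtual domain and never consults the MX record.

        postconf mydestination virtual_mailbox_domains virtual_alias_domains relay_domains
        dig +short MX ex.example.com    # what Postfix would get from DNS if it did look it up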

    Read the article

  • Deploying a Git server in a AWS linux instance

    - by Leroux
    I'm making a git server on my Linux instance in AWS. I tried doing it using these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. So here are my detailed steps; the client is my Windows machine running msysgit and the server is the AWS Ubuntu instance:

    1) I created user Git with a simple password.
    2) Created the ssh directory in ~/.ssh
    3) On the client I created ssh keys using ssh-keygen -t rsa -b 1024; they got dropped in my /Users/[Name]/.ssh directory, and the id_rsa and id_rsa.pub key pair was created.
    4) Using Notepad I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my Git user. ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
    5) On the server I made the authorized_hosts file using "cat id_rsa.pub authorized_hosts" (while inside the .ssh directory)
    6) Now to test it, on my client machine I did ssh -v git@[ip.address]
    7) Result:

        debug1: Host 'ip.address' is known and matches the RSA host key.
        debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/[Name]/.ssh/identity
        debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
        debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I would appreciate any insight anyone can give me.
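
    For comparison, a sketch of the stock OpenSSH arrangement (nothing here is specific to this setup): the private key stays on the client, the server-side file is called authorized_keys rather than authorized_hosts, and sshd ignores it unless the directory and file permissions are tight.

        # on the server, as the git user
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat id_rsa.pub >> ~/.ssh/authorized_keys     # append the *public* key only
        chmod 600 ~/.ssh/authorized_keys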

    Read the article

  • configuring vsftpd anonymous upload. Creates files but freezes at 0 bytes

    - by Wayne
    vsftpd on Ubuntu, installed with sudo apt-get install vsftpd and then configured as in the /etc/vsftpd.conf shown below. Anonymous FTP allows cd into the upload directory and allows put myfile.txt, which gets created on the server, but then the client hangs and never proceeds. The file on the server remains at 0 bytes. Here are the folders and permissions:

        root@support:/home/ftp# ls -ld .
        drwxr-xr-x 3 root root 4096 Jun 22 00:00 .
        root@support:/home/ftp# ls -ld pub
        drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub
        root@support:/home/ftp# ls -ld pub/upload
        drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload
        root@support:/home/ftp#

    Here's the vsftpd.conf file:

        root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf
        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        anon_root=/home/ftp/pub/
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=ftp
        nopriv_user=ftp
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Here's an example of a file that attempted to upload:

        root@support:/home/ftp/pub/upload# ls -l
        total 0
        -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out

    This is the client attempting to upload... it is frozen at this point:

        $ ftp 173.203.89.78
        Connected to 173.203.89.78.
        220 (vsFTPd 2.0.6)
        User (173.203.89.78:(none)): ftp
        331 Please specify the password.
        Password:
        230 Login successful.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        553 Could not create file.
        ftp> cd upload
        250 Directory successfully changed.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        150 Ok to send data.
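
    A sketch of the usual suspect when the control channel works but the transfer stalls right after "150 Ok to send data": the data connection (here an active-mode PORT transfer) is being blocked somewhere between client and server. Enabling passive mode with a fixed port range, and opening that range in whatever firewall sits in front, is one standard way to test this; the port numbers below are illustrative.

        # /etc/vsftpd.conf
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # then allow port 21 plus 40000-40100 through the firewall and restart vsftpd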

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password

        auth       required     pam_google_authenticator.so

    I've already tried adding an auth sufficient pam_exec.so /etc/pam.d/ip.sh line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
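
    One commonly used pattern, sketched with a placeholder trusted address: let pam_access skip the next module (the authenticator) when the connecting host is whitelisted, and otherwise fall through to it. The access file name is an assumption; any path passed to accessfile= works.

        # In /etc/pam.d/sshd, immediately before the google-authenticator line:
        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-sshd-local.conf
        auth required pam_google_authenticator.so

        # /etc/security/access-sshd-local.conf
        + : ALL : 203.0.113.10
        - : ALL : ALL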

    Read the article

  • How to add another application to apache?

    - by Jader Dias
    I was following the Zabbix installation tutorial for Ubuntu, and it asked me to add a file /etc/apache2/sites-enabled/000-default containing:

        Alias /zabbix /home/zabbix/public_html/
        <Directory /home/zabbix/public_html>
            AllowOverride FileInfo AuthConfig Limit Indexes
            Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
            <Limit GET POST OPTIONS PROPFIND>
                Order allow,deny
                Allow from all
            </Limit>
            <LimitExcept GET POST OPTIONS PROPFIND>
                Order deny,allow
                Deny from all
            </LimitExcept>
        </Directory>

    But I already have /etc/apache2/sites-enabled/railsapp:

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            UseCanonicalName Off
            Include /etc/apache2/conf/railsapp.conf
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/cert.pem
            Include /etc/apache2/conf/railsapp.conf
            RequestHeader set X_FORWARDED_PROTO 'https'
        </VirtualHost>

    and /etc/apache2/sites-enabled/mercurial:

        NameVirtualHost *:8080

        <VirtualHost *:8080>
            UseCanonicalName Off
            ServerAdmin webmaster@localhost
            AddHandler cgi-script .cgi
            ScriptAliasMatch ^(.*) /usr/lib/cgi-bin/hgwebdir.cgi/$1
        </VirtualHost>

    I think it is because of the already existing virtual hosts that I can't access the Zabbix page. How do I circumvent this?
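
    A sketch of one way to make the alias reachable, under the assumption that with NameVirtualHost *:80 the railsapp vhost answers every port-80 request and the 000-default file never matches: put the Zabbix alias inside the vhost that actually serves port 80.

        <VirtualHost *:80>
            UseCanonicalName Off
            Include /etc/apache2/conf/railsapp.conf

            # Zabbix alias moved into the vhost that answers on port 80
            Alias /zabbix /home/zabbix/public_html/
            <Directory /home/zabbix/public_html>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>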

    Read the article

  • Unknown Apache2 + PHP5 FastCGI 500 error .. caused by search engine bots?

    - by rdjurovich
    My Ubuntu server is configured with Apache 2.2.8 and PHP 5.2.4-2ubuntu5.18 in FastCGI mode. Everything works well, except I am seeing 500 errors that only seem to come from bots accessing the server... for example (access.log):

        x.125.71.104 - - [16/Nov/2011:10:27:39 +1100] "GET / HTTP/1.1" 500 41377 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        x.40.103.239 - - [16/Nov/2011:11:05:56 +1100] "GET / HTTP/1.0" 500 14717 "-" "Mozilla/5.0 (compatible; mon.itor.us - free monitoring service; http://mon.itor.us)"
        x.249.67.114 - - [14/Nov/2011:20:57:17 +1100] "GET / HTTP/1.1" 500 101 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        x.55.39.85 - - [14/Nov/2011:19:31:06 +1100] "GET / HTTP/1.1" 500 7032 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)._"

    It is my understanding that a 500 error is thrown when the PHP process fails to respond to Apache, which could be caused by a fatal PHP error or by PHP running out of processes... so my assumption is that either the bots are hitting the server too hard, killing the PHP processes, or something in the request headers from the bots is causing a fatal error in my PHP script? If anyone can offer advice on this it would be greatly appreciated! Ryan
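
    A sketch for narrowing it down, assuming the stock combined log format shown above: pull the timestamps and user agents of the 500s, then see what the error log recorded at those moments, which distinguishes a crashed/exhausted FastCGI backend from a fatal error in the application.

        awk '$9 == 500 {print $4, $1, $12, $13}' /var/log/apache2/access.log | tail -n 20
        grep -iE 'fastcgi|premature|fatal' /var/log/apache2/error.log | tail -n 20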

    Read the article
