Search Results

Search found 1575 results on 63 pages for 'lts'.


  • Upgrade to Xubuntu 13.10 - Saucy Salamander

    As has become customary, it is possible to upgrade an existing installation of Ubuntu or one of its derivatives every six months. Of course, you might opt in for the adventure and keep your system always on the latest version (including alphas and betas), or you might prefer to play it safe and stay on the long-term support (LTS) versions, which are released only every two years. As for me, I like to jump from release to release on my main desktop machine, and on 17th October Saucy Salamander, also known as Ubuntu 13.10, was released for general use. The following paragraphs document the steps I went through to upgrade my system to the latest version. Don't worry about the fact that I'm actually using Xubuntu. It's mainly a flavoured version of Ubuntu running Xfce 4.10 as the default desktop environment. Well, I have Gnome and LXDE on the same system... just out of curiosity. Preparing the system Before you think about upgrading, you have to ensure that your current system is running the latest packages. This can be done easily via a terminal like so: $ sudo apt-get update && sudo apt-get -y dist-upgrade --fix-missing Next, we are going to initiate the upgrade itself: $ sudo update-manager As a result, the graphical Software Updater should inform you that a newer version of Ubuntu is available for installation. Ubuntu's Software Updater informs you whether an upgrade is available Running the upgrade After clicking 'Upgrade...' you will be presented with information about the new version. Details about Ubuntu 13.10 (Saucy Salamander) Simply continue with the procedure and your system will be analysed for the next steps. Analysing the existing system and preparing the actual upgrade to 13.10 Next, we are at the point of no return - the last confirmation dialog before having a coffee break while your machine is busy downloading the necessary packages. Not the best bandwidth at hand after all... yours might be faster. Are you really sure that you want to start the upgrade? Let's go and have fun! Anyway, bye bye Raring Ringtail and welcome Saucy Salamander! In case you added any additional repositories like Medibuntu or PPAs, you will be informed that they are going to be disabled during the upgrade and might require some manual intervention after completion. Ubuntu is playing safe and third-party repositories are disabled during the upgrade Well, depending on your internet bandwidth this might take anything between a couple of minutes and some hours to download all the packages and then trigger the actual installation process. In my case I left my PC unattended during the night. Time to reboot Finally, it's time to restart your system and see what's going to happen... In my case, absolutely nothing unexpected. The system booted the new kernel 3.11.0 as usual and I was greeted by a new login screen. Honestly, the 'same' system as before - which is good, and I love that consistency - and I can continue to work productively. And Software Updater also confirms that we just had a painless upgrade: System is running Ubuntu 13.10 - Saucy Salamander - and up to date See you in six months again... ;-) Post-scriptum In case you would like to upgrade to the latest development version of Ubuntu, run the following command in a console: $ sudo update-manager -d And repeat all steps as described above.
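
    In summary, a sketch of the full command sequence described above (the -d flag is only for jumping to a development release):

        # bring the current release fully up to date first
        sudo apt-get update && sudo apt-get -y dist-upgrade --fix-missing
        # launch the graphical upgrade to the next stable release
        sudo update-manager
        # or, to be offered the latest development release instead
        sudo update-manager -d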

    Read the article

  • Issues with Rails, Amazon S3, and protected URLs

    - by Shpigford
    So I followed this little tutorial about protecting downloads of files that are uploaded to Amazon S3 with Paperclip. While developing locally it worked fine, but since pushing the exact same code to a production server... I now get this error from Amazon when I try to access the files: <Error> <Code>InvalidArgument</Code> <Message>Either the Signature query string parameter or the Authorization header should be specified, not both</Message> <ArgumentValue>Basic dGVjaHVrdWxlbGU6ZWxlbHVrdWhjZXQ=</ArgumentValue> <ArgumentName>Authorization</ArgumentName> <RequestId>F6E455857C54F95A</RequestId> <HostId>X4QA2pw9wpHtJtJ2T8qxCyINjq4PLHQVF4VrlYjpX7Ayh694BgQprh5p8H7NRCAt</HostId> </Error> Example URL: http://s3.amazonaws.com/media.example.com/assets/videos/1/original.mov?AWSAccessKeyId=MY_ACCESS_KEY&Expires=1271972624&Signature=7wWH2WYHPO0o9szwPJbimUMqAig%3D That URL is generated using AWS::S3::S3Object.url_for from the aws-s3 gem. So... I'm not even sure where to start. The fact that it works fine when the app is running locally but not in production really doesn't make sense. The production server is running Ubuntu 8.04.4 LTS (Hardy).
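
    For reference, a minimal sketch of how such a URL is generated with the aws-s3 gem (the key and bucket are taken from the example URL above; the connection setup is assumed, since the post doesn't show it). Worth noting: the <ArgumentValue> in the error is an HTTP Basic credential, which suggests something in the production stack is attaching an Authorization header on top of the signed query string.

        require 'aws/s3'

        # credentials assumed to come from the environment
        AWS::S3::Base.establish_connection!(
          :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
          :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
        )

        # signed, expiring URL for the protected file
        url = AWS::S3::S3Object.url_for(
          'assets/videos/1/original.mov',   # key
          'media.example.com',              # bucket
          :expires_in => 60 * 60            # seconds
        )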

    Read the article

  • change image using JavaScript DOM

    - by user289346
    <html> <head> <script type="text/javascript"> var curimage = "cottage_small.jpg"; var curtext = "View large image"; function changeSrc() { if (curtext == "View large image"||curimage == "cottage_small.jpg") { document.getElementById("boldStuff").innerHTML = "View small image"; curtext="View small image"; document.getElementById("myImage")= "cottage_large.jpg"; curimage = "cottage_large.jpg"; } else { document.getElementById("boldStuff").innerHTML = "View large image"; curtext = "View large image"; document.getElementById("myImage")= "cottage_small.jpg"; curimage = "cottage_small.jpg"; } } </script> </head> <body> <!-- Your page here --> <h1> Pink Knoll Properties</h1> <h2> Single Family Homes</h2> <p> Cottage:<strong>$149,000</strong><br/> 2 bed, 1 bath, 1,189 square feet, 1.11 acres <br/><br/> <a href="#" onclick="changeSrc()"><b id="boldStuff" />View large image</a> </p> <p><img id="myImage" src="cottage_small.jpg" alt="Photo of a cottage" /></p> </body> </html> This is my code. I need to change the image and the text at the same time when I click. I use LTS, and it flags the line document.getElementById("myImage")= "cottage_large.jpg"; with 'wrong number of arguments or invalid property assignment'. Can someone help? Bianca
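
    A likely fix (an assumption, not part of the original post): an image swap has to assign to the element's src property, not to the element itself, which is what the flagged lines attempt:

        // assign to .src, not to the return value of getElementById()
        document.getElementById("myImage").src = "cottage_large.jpg";
        // and correspondingly in the else branch:
        document.getElementById("myImage").src = "cottage_small.jpg";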

    Read the article

  • dig works but dig +trace <domain_name> not working

    - by anoopmathew
    On my local system I can't get a proper result from dig +trace, but plain dig works fine. I'm using Ubuntu 10.04 LTS. I'll attach the results of dig and dig +trace along with this update. dig +trace gmail.com ; <<>> DiG 9.7.0-P1 <<>> +trace gmail.com ;; global options: +cmd ;; Received 12 bytes from 4.2.2.4#53(4.2.2.4) in 291 ms dig gmail.com ; <<>> DiG 9.7.0-P1 <<>> gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59528 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;gmail.com. IN A ;; ANSWER SECTION: gmail.com. 49 IN A 74.125.236.118 gmail.com. 49 IN A 74.125.236.117 ;; Query time: 302 msec ;; SERVER: 4.2.2.4#53(4.2.2.4) ;; WHEN: Sat Oct 13 14:57:56 2012 ;; MSG SIZE rcvd: 59 Can anyone suggest a solution for this issue? I'm quite worried about it.
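
    A hedged diagnostic (not from the post): dig +trace starts by asking the configured resolver for the root name servers, so if 4.2.2.4 refuses that query the trace dies immediately with a short reply like the 12-byte one above. That first step can be checked directly:

        # does the resolver hand back the root NS records +trace needs?
        dig NS . @4.2.2.4
        # compare against a resolver known to answer root queries
        dig NS . @8.8.8.8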

    Read the article

  • Installing Ruby via rbenv fails

    - by Maximus S
    Problem: I installed Ruby, but it is not recognized correctly. I'm following the RailsCasts episode on deploying to a VPS: https://github.com/railscasts/335-deploying-to-a-vps I am setting up my server on Ubuntu 12.04 LTS to deploy my Rails app and trying to install Ruby through rbenv. It seemed everything was installed correctly, but when I tried to check the Ruby version, it gave me errors. The following are the commands that I ran. deployer@max:~$ rbenv install 1.9.3-p125 Downloading yaml-0.1.4.tar.gz... -> http://cloud.github.com/downloads/sstephenson/ruby-build-download-mirror/36c852831d02cf90508c29852361d01b Installing yaml-0.1.4... Installed yaml-0.1.4 to /home/deployer/.rbenv/versions/1.9.3-p125 Downloading ruby-1.9.3-p125.tar.gz... -> http://cloud.github.com/downloads/sstephenson/ruby-build-download-mirror/e3ea86b9d3fc2d3ec867f66969ae3b92 Installing ruby-1.9.3-p125... Installed ruby-1.9.3-p125 to /home/deployer/.rbenv/versions/1.9.3-p125 Downloading rubygems-1.8.23.tar.gz... -> http://cloud.github.com/downloads/sstephenson/ruby-build-download-mirror/178b0ebae78dbb46963c51ad29bb6bd9 Installing rubygems-1.8.23... Installed rubygems-1.8.23 to /home/deployer/.rbenv/versions/1.9.3-p125 deployer@max:~$ rbenv global 1.9.3-p125 deployer@max:~$ ruby -v The program 'ruby' can be found in the following packages: * ruby1.8 * ruby1.9.1 How do I solve this?
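
    A common cause (an assumption - the post doesn't show the shell setup) is that rbenv's shims directory is missing from the PATH, so the shell falls back to the nonexistent system Ruby and apt suggests packages instead. The usual fix looks like:

        # add rbenv and its shims to the deploy user's shell
        echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
        echo 'eval "$(rbenv init -)"' >> ~/.bashrc
        source ~/.bashrc
        rbenv rehash
        ruby -v   # should now report 1.9.3p125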

    Read the article

  • PHP5 getrusage() returning incorrect information?

    - by Andrew
    I'm trying to determine CPU usage of my PHP scripts. I just found this article which details how to find system and user CPU usage time (Section 4). However, when I tried out the examples, I received completely different results. The first example: sleep(3); $data = getrusage(); echo "User time: ". ($data['ru_utime.tv_sec'] + $data['ru_utime.tv_usec'] / 1000000); echo "System time: ". ($data['ru_stime.tv_sec'] + $data['ru_stime.tv_usec'] / 1000000); Results in: User time: 29.53 System time: 2.71 Example 2: for($i=0;$i<10000000;$i++) { } // Same echo statements Results: User time: 16.69 System time: 2.1 Example 3: $start = microtime(true); while(microtime(true) - $start < 3) { } // Same echo statements Results: User time: 34.94 System time: 3.14 Obviously, none of the information is correct except maybe the system time in the third example. So what am I doing wrong? I'd really like to be able to use this information, but it needs to be reliable. I'm using Ubuntu Server 8.04 LTS (32-bit) and this is the output of php -v: PHP 5.2.4-2ubuntu5.10 with Suhosin-Patch 0.9.6.2 (cli) (built: Jan 6 2010 22:01:14) Copyright (c) 1997-2007 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2007 Zend Technologies
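
    For comparison, a minimal sketch that measures wall-clock time alongside getrusage() for the same busy loop; on a correctly reporting system the two should be close for purely CPU-bound work:

        <?php
        $start = microtime(true);
        for ($i = 0; $i < 10000000; $i++) {} // CPU-bound busy loop
        $wall = microtime(true) - $start;

        $data = getrusage();
        $user = $data['ru_utime.tv_sec'] + $data['ru_utime.tv_usec'] / 1000000;
        printf("Wall: %.2fs, User CPU: %.2fs\n", $wall, $user);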

    Read the article

  • Deploying a Rails app on an Ubuntu server using Git

    - by NudeCanalTroll
    I'm completely new to Linux, but today I find myself setting up a server (Ubuntu 10.04 LTS Lucid) from scratch to host a Rails application. Anyway, I managed to get a Rails app up and running on the server itself, but I had to scrap that because I want to use Git. So I set up a git repository on the server, then pushed all the code from my local machine to the repository. Buuuut, of course a bare Git repository doesn't keep a checked-out working copy of the files - all the checked-out code for my Rails app is now only on my local machine. How am I supposed to tell the server to host that? Right now my solution is to have the server use git to pull the code from its own repository. That's the code I'll host for all the world to see. In order to update the code, I guess I'll have to do something like this: Update the code on my local machine. Do some git adds, git commits, and a git push. On the server, do a git pull to update the code. So my question is, am I doing this the right way?
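
    As a sketch, the update cycle described above (paths and branch name are hypothetical):

        # on the local machine
        git add -A
        git commit -m "describe the change"
        git push origin master

        # on the server, in the checked-out copy that is actually served
        cd /var/www/myapp
        git pull origin master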

    Read the article

  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a linux box I have running Ubuntu 10.04 LTS. I can access mysql via SSH mysql -p and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me... SELECT host, user FROM mysql.user Yields: +-----------+------------------+ | host | user | +-----------+------------------+ | % | AddedUser | | 127.0.0.1 | root | | li241-255 | root | | localhost | debian-sys-maint | | localhost | root | +-----------+------------------+ Problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES but that seems to have no effect. I know it's not Node.js because I'm using the same code on another database and it's working in that environment. Edit This is the error node is giving me. node.js:50 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: ECONNREFUSED, Connection refused at Stream._onConnect (net.js:687:18) at IOWatcher.onWritable [as callback] (net.js:284:12) Edit 2 I have the right port & server as best I can tell. My /etc/mysql/my.cnf contains this: port = 3306 socket = /var/run/mysqld/mysqld.sock My MySQL object contains: { host: 'localhost', port: 3306, user: 'removed', password: 'removed', database: '', typeCast: true, flags: 260047, maxPacketSize: 16777216, charsetNumber: 192, debug: false, ending: false, connected: false, _greeting: null, _queue: [], _connection: null, _parser: null, server: 'ExternalIpAddress' } Possibly useful? netstat -ln | grep mysql unix 2 [ ACC ] STREAM LISTENING 1016418 /var/run/mysqld/mysqld.sock
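
    One hedged observation (not a confirmed answer): the filter netstat -ln | grep mysql can only ever match the Unix socket path, because with -n a TCP listener shows up as :3306 rather than "mysql". So the output above doesn't actually prove TCP networking is enabled, and a missing TCP listener would explain a refused connection from Node.js while the command-line client (which uses the socket) works. Checking explicitly:

        # is anything listening on TCP 3306?
        netstat -ltn | grep 3306
        # if not, look for skip-networking or a restrictive bind-address
        grep -E 'skip-networking|bind-address' /etc/mysql/my.cnf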

    Read the article

  • .ico icons not showing up on Windows

    - by Ali
    I followed the Qt Resource System guide and the .ico icons appear on Linux. The icons are not showing up on Windows when I try to run the application from Qt Creator. I suspect a plugin issue based on Qt/C++: Icons not showing up when program is run under windows O.S but I failed to figure out what to do from the guide How to Create Qt Plugins. Is it a plugin issue, or why aren't the icons showing up on Windows? If it is a plugin issue: How do I tell my application where to find the qico.dll? Details of the environment: Works on: Kubuntu 12.04 LTS, Qt Creator 2.4.1 and Qt 4.7.4 (64 bit) Fails on: Windows XP SP2 32 bit, Qt Creator 2.4.1 and Qt 4.7.4 (32 bit) Everything is at its defaults (as installed out of the box); I did not mess with the settings. resources.qrc <!DOCTYPE RCC><RCC version="1.0"> <qresource> <file>images/spreadsheet.ico</file> </qresource> </RCC> Also tried with <qresource prefix="/">. From the application.pro RESOURCES += \ resources.qrc OTHER_FILES += \ images/spreadsheet.ico In the corresponding source file QIcon(":/images/spreadsheet.ico") I repeat: it works on Linux.
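
    If it is the plugin, a hedged sketch of the usual Windows deployment fix (file names assume Qt 4's standard plugin layout; none of this is confirmed by the post): ship the ICO image-format plugin next to the executable,

        application.exe
        imageformats/qico4.dll

    or add the search path explicitly at startup:

        #include <QApplication>

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            // look for an imageformats/ directory next to the executable
            QCoreApplication::addLibraryPath(QCoreApplication::applicationDirPath());
            // ... create widgets, then:
            return app.exec();
        }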

    Read the article

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS, unpacking and running gwan, then pointing wrk at it (using a null file null.html): wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html Running 20s test @ http://127.0.0.1:8080/null.html 2 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 11.65s 5.10s 13.89s 83.91% Req/Sec 3.33k 3.65k 12.33k 75.19% 125067 requests in 20.01s, 32.08MB read Socket errors: connect 0, read 37, write 0, timeout 49 Requests/sec: 6251.46 Transfer/sec: 1.60MB ... very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing at nginx, wrk is 200% busy and nginx is 45% busy: wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html Thread Stats Avg Stdev Max +/- Stdev Latency 371.81us 134.05us 24.04ms 91.26% Req/Sec 72.75k 7.38k 109.22k 68.21% 2740883 requests in 20.00s, 540.95MB read Requests/sec: 137046.70 Transfer/sec: 27.05MB Pointing weighttp at nginx gives even faster results: /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html weighttp - a lightweight and simple webserver benchmarking tool starting benchmark... spawning thread #1: 167 concurrent requests, 666667 total requests spawning thread #2: 167 concurrent requests, 666667 total requests spawning thread #3: 166 concurrent requests, 666666 total requests progress: 9% done progress: 19% done progress: 29% done progress: 39% done progress: 49% done progress: 59% done progress: 69% done progress: 79% done progress: 89% done progress: 99% done finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data The server is a virtual 8-core dedicated server (bare metal), under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way with expanded ephemeral ports, increased ulimits, adjusted TIME_WAIT recycling, etc.

    Read the article

  • Rails 2.3.14 setting expire_after for sessions is ignored

    - by Sergii Shablatovych
    I have the following config in my environment.rb: config.action_controller.session_store = :cookie_store config.action_controller.session = { :expire_after => 14.days, :domain => DOMAIN, :session_key => '_session', :secret => 'some_string' } Setting session_store to active_record_store or mem_cache_store didn't help. Also I've tried just setting a cookie from the controller (with all the expiry options I could find): cookies[:test] = { :value => 'test' , :expires => 3600.to_i.from_now.utc } In both cases all sessions and cookies are deleted after closing the browser window - they last only for the browser session. I've tried almost all the variants I found on the Internet - no luck. My setup is: Ubuntu 10.04 LTS, Rails 2.3.14, Ruby Enterprise Edition 1.8.7, Phusion Passenger 3.0.11 and Nginx compiled by Phusion Passenger. I have a suspicion that Nginx is not allowing some headers to be set, but I didn't find any solution for that either. Any help appreciated! Thanks UPD. I've tried putting all the session configs into config/initializers/session_store.rb - nothing changed. I have a feeling that it's not a Rails problem. Might it be a Phusion + Nginx error? I don't even know how to check where the problem is.
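
    For reference, a minimal sketch of the same settings as a config/initializers/session_store.rb for Rails 2.3 (values are the ones from the post; whether this fixes the expiry is unverified). One detail worth checking: Rails 2.3's Rack-based session code takes :key rather than the older :session_key.

        ActionController::Base.session = {
          :key          => '_session',
          :secret       => 'some_string',
          :domain       => DOMAIN,
          :expire_after => 14.days
        }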

    Read the article

  • tomcat won't start up on linux machine

    - by David
    Hi, I'm new to Linux, but after spending the day on it I have got Linux running and installed Java and Tomcat. My goal is to host my app on this Linux box I've just set up. I know it all works fine on my Windows-based machine, but that is my laptop, so I'm planning this box as my dedicated server. Following many, many forums I've now got Tomcat 7 installed; however, I cannot get it to start. Changing to the Tomcat directory and running "./startup.sh" I get this output: Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local.tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: usr/lib/jvm/java-6-sun/ Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local.tomcat/bin/c\tomcat-juli.jar That's the end of the output; however, localhost:8080 is not up, and in the Tomcat log file is the error "eval: 1: usr/lib/jvm/java-6-sun//bin/java: not found". Hopefully there is some expert here who can help me with this problem. Please note that I'm a novice when it comes to Linux. Thank you. Oh, and my version of Linux is Ubuntu 10.04 LTS - the Lucid Lynx.
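
    The "Using JRE_HOME: usr/lib/jvm/java-6-sun/" line is missing its leading slash, which matches the "not found" error, so the path is being resolved relative to the current directory. A hedged fix (assuming Sun Java 6 really lives under /usr/lib/jvm):

        # give Tomcat an absolute Java path, e.g. in ~/.bashrc or in
        # $CATALINA_HOME/bin/setenv.sh
        export JAVA_HOME=/usr/lib/jvm/java-6-sun
        export JRE_HOME=$JAVA_HOME/jre
        /usr/local/tomcat/bin/startup.sh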

    Read the article

  • Getting Google repositories to work with apt-get on Ubuntu Hardy

    - by Justin
    I've installed Google Chrome on Hardy via the .deb file and would like to configure apt-get for automatic updates. [I have another machine running Ubuntu Karmic where this works fine; apt-get knows the package as 'google-chrome'; I'm now using a Dell Mini 10 with Ubuntu 8.04 LTS installed] As part of the .deb install, two entries have been added to the third-party software sources tab: http://dl.google.com/linux/deb stable main http://dl.google.com/linux/deb stable non-free main However if I check for updates with either of these clicked, I get the following error: Failed to fetch http://dl.google.com/linux/deb/dists/stable/Release Unable to find expected entry main/binary-lpia/Packages in Meta-index file (malformed Release file?) There is a thread here which indicates others have had the same problem: http://www.google.co.uk/support/forum/p/Chrome/thread?tid=097d103f87b49abe&hl=en This references a further thread: http://code.google.com/p/chromium/issues/detail?id=38608 which suggests the problem has been fixed. Despite this I remain unable to get it to work, and none of the suggested workarounds seem to work either. Ideas? Thanks.
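
    A hedged check (not from the post): the error mentions binary-lpia, so apt on this Dell Mini is requesting packages for the lpia architecture, which the Google repository may simply not carry. Confirming which architecture apt asks for:

        dpkg --print-architecture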

    Read the article

  • Which PHP MVC Framework should I use with MongoDB?

    - by Justin Jenkins
    I have a number of smaller-scale PHP projects lined up and would like to take advantage of a framework. I've read these questions ... What PHP framework would you choose for a new application and why? Picking a PHP MVC Framework I found them useful but I need more specific opinions; here are my major requirements ... MongoDB support (the more 'built-in' the better; a full ORM is not needed, however.) MVC (and nice pretty URLs ... if you will ... too!) Must work on Apache (2.2) on Ubuntu (10.04.1 LTS), but nginx is also a nice plus. PHP 5.3 or greater. Nice to haves ... I'd prefer more readable code than lots of "shortcut" shorthand coding (that just ends up confusing me later.) I've used PHP for a number of years, but don't use a lot of its OO (nor do I really care to.) I really love jQuery, so a framework that "thinks" the same way would be nice. Lightweight, I don't need a ton of features ... I just need to make my life easier. I've briefly looked at Lithium, CakePHP, Vork and Symfony ... What would be the best framework for my needs? EDIT: Also, docs are pretty important - documentation with examples! I can't stand wasting time figuring out how to use a framework if it would have taken less time to code it myself.

    Read the article

  • Problem with cyrillic symbols in console

    - by woto
    Hi everyone, sorry for my bad English. It's Ruby code. s = "???????" `touch #{s}` `cat #{s}` `cat < #{s}` Can anybody tell me why this code fails with: sh: cannot open ???????: No such file But this code works fine: s = "????????" `touch #{s}` `cat #{s}` `cat < #{s}` The problem occurs only when the Russian symbol '?' is in the word, and only with the symbol '<'. woto@woto-work:/tmp$ locale LANG=ru_RU.UTF-8 LC_CTYPE="ru_RU.UTF-8" LC_NUMERIC="ru_RU.UTF-8" LC_TIME="ru_RU.UTF-8" LC_COLLATE="ru_RU.UTF-8" LC_MONETARY="ru_RU.UTF-8" LC_MESSAGES="ru_RU.UTF-8" LC_PAPER="ru_RU.UTF-8" LC_NAME="ru_RU.UTF-8" LC_ADDRESS="ru_RU.UTF-8" LC_TELEPHONE="ru_RU.UTF-8" LC_MEASUREMENT="ru_RU.UTF-8" LC_IDENTIFICATION="ru_RU.UTF-8" LC_ALL= woto@woto-work:/tmp$ ruby -v ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux] woto@woto-work:/tmp$ uname -a Linux woto-work 2.6.32-26-generic #48-Ubuntu SMP Wed Nov 24 10:14:11 UTC 2010 x86_64 GNU/Linux woto@woto-work:/tmp$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.1 LTS Release: 10.04 Codename: lucid

    Read the article

  • Lost in Nodester Installation

    - by jslamka
    I am trying to install my own version of Nodester. I have tried on Ubuntu 12.04 LTS and now with CentOS. I am not the most skilled Linux user (~2 months use) so I am at a loss at this point. The instructions are located at https://github.com/nodester/nodester/wiki/Install-nodester#wiki-a. They ask you to "export paths (to make npm work)" with the lines necessary to accomplish this. cd ~ echo -e "root = ~/.node_libraries\nmanroot = ~/local/share/man\nbinroot = ~/bin" > ~/.npmrc echo -e "export PATH=3d9c7cfd35d3628e0aa233dec9ce9a44d2231afcquot;\${PATH}:~/bin3d9c7cfd35d3628e0aa233dec9ce9a44d2231afcquot;;" >> ~/.bashrc source ~/.bashrc I can accomplish all of this until I get to the source ~/.bashrc line. When I run that, I get the following: [root@MYSERVER ~]# source ~/.bashrc -bash: /root/.bashrc: line 13: syntax error near unexpected token ';;' -bash: /root/.bashrc: line 13: 'export PATH=3d9c7cfd35d3628e0aa233dec9ce9a44d2231afcquot;${PATH}:~/bin3d9c7cfd35d3628e0aa233dec9ce9a44d2231afcquot;; I have tried changing the quot; to " and that didn't help. I tried changing quot; to colons and that didn't help. I also removed that and it didn't help (I am sure many of you at this point are probably wondering why I would even try those things). Does anyone have any insight as to what I need to do to get this to run properly?
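
    The "3d9c7cfd...quot;" fragments look like HTML-escaped quote marks that leaked into the wiki page, and the same garbage then ends up verbatim in .bashrc, which is what the syntax error is complaining about. The line was presumably meant to be (an assumption, reconstructed from the surrounding commands):

        echo -e "export PATH=\"\${PATH}:~/bin\";" >> ~/.bashrc
        source ~/.bashrc

    After fixing or deleting the broken line 13 in ~/.bashrc, source should run cleanly.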

    Read the article

  • Nginx Password Protect Directory Downloads Source Code

    - by Pamela
    I'm trying to password protect a WordPress login page on my Nginx server. When I navigate to http://www.example.com/wp-login.php, this brings up the "Authentication Required" prompt (not the WordPress login page) for a username and password. However, when I input the correct credentials, it downloads the PHP source code (wp-login.php) instead of showing the WordPress login page. Permission for my htpasswd file is set to 644. Here are the directives in question within the server block of my website's configuration file: location ^~ /wp-login.php { auth_basic "Restricted Area"; auth_basic_user_file htpasswd; } Alternately, here are the entire contents of my configuration file (including the above four lines): server { listen *:80; server_name domain.com www.domain.com; root /var/www/domain.com/web; index index.html index.htm index.php index.cgi index.pl index.xhtml; error_log /var/log/ispconfig/httpd/domain.com/error.log; access_log /var/log/ispconfig/httpd/domain.com/access.log combine$ location ~ /\. { deny all; access_log off; log_not_found off; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location /stats/ { index index.html index.php; auth_basic "Members Only"; auth_basic_user_file /var/www/web/stats/.htp$ } location ^~ /awstats-icon { alias /usr/share/awstats/icon; } location ~ \.php$ { try_files /b371b8bbf0b595046a2ef9ac5309a1c0.htm @php; } location @php { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web11.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } location / { try_files $uri $uri/ /index.php?$args; client_max_body_size 64M; } location ^~ /wp-login.php { auth_basic "Restricted Area"; auth_basic_user_file htpasswd; } } If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
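
    One hedged reading of the config (not a confirmed answer): the "^~" modifier makes the /wp-login.php block win over the regex "location ~ \.php$" block, so the request is authenticated but never handed to PHP-FPM, and nginx serves the raw file. A sketch that keeps the protection and the PHP handoff together, reusing the socket from the existing config:

        location = /wp-login.php {
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;

            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_index index.php;
            fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
        }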

    Read the article

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my elasticsearch instance 5 days ago and I haven't managed to start it since. I get no output in the log file in /var/log/elasticsearch/, nor does the elasticsearch binary print any information when run with elasticsearch -f. I once managed to get this output. [2012-11-15 22:51:18,427][INFO ][node ] [Piper] {0.19.11}[29584]: initializing ... [2012-11-15 22:51:18,433][INFO ][plugins ] [Piper] loaded [], sites [] Running curl http://localhost:9200 resulted in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any difference. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch results in this output. /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF My current system: ElasticSearch Version: 0.19.11, JVM: 23.2-b09 Ubuntu 12.04 LTS I've tried re-installing elasticsearch and removing old directories. Why can't I get it to start?
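
    Hedged diagnostics (not from the post): since the Tanuki service wrapper is what actually launches the JVM, its status files and log - the paths appear in the ps output above - often hold the failure when /var/log/elasticsearch stays empty:

        cat /usr/local/share/elasticsearch/bin/service/elasticsearch.status
        cat /usr/local/share/elasticsearch/bin/service/elasticsearch.java.status
        tail -n 100 /usr/local/share/elasticsearch/bin/service/wrapper.log   # log path assumed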

    Read the article

  • ubuntu mail server settings and /etc/hosts file

    - by mbrc
    This is my /etc/hosts file: 127.0.0.1 localhost.localdomain localhost 127.0.1.1 ubuntu-server.xx.com ubuntu-server 193.77.xx.xx mail.xx.com mail # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Is this the correct configuration for my mail server? I am behind a router, so I don't know if it is OK to use my public IP for mail.xx.com and 127.0.0.1 for localhost. The problem is that I can receive mail, but when I send it I get: Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: SASL authentication failure: Password verification failed Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL PLAIN authentication failed: authentication failure Oct 17 21:29:34 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL LOGIN authentication failed: authentication failure EDIT: Maybe the problem is some port. I forward these ports: POP3 - port 110 IMAP - port 143 SMTP - port 25 HTTP - port 80 Secure SMTP (SSMTP) - port 465 Secure IMAP (IMAP4-SSL) - port 585 StartTLS - port 587 IMAP4 over SSL (IMAPS) - port 993 Secure POP3 (SSL-POP) - port 995 postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = amavis:[127.0.0.1]:10024 delay_warning_time = 4h disable_vrfy_command = yes inet_interfaces = all inet_protocols = all mailbox_size_limit = 0 maximal_backoff_time = 8000s maximal_queue_lifetime = 7d message_size_limit = 0 minimal_backoff_time = 1000s mydestination = myhostname = mail.xx.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mynetworks_style = host myorigin = /etc/mailname readme_directory = no receive_override_options = no_address_mappings recipient_delimiter = + relayhost = smtp_helo_timeout = 60s smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org smtpd_data_restrictions = reject_unauth_pipelining smtpd_delay_reject = yes smtpd_hard_error_limit = 12 smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_limit = 16 smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymous smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit smtpd_soft_error_limit = 3 smtpd_tls_cert_file = /etc/ssl/private/mail.xx.com.crt smtpd_tls_key_file = /etc/ssl/private/mail.xx.com.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes unknown_local_recipient_reject_code = 450 virtual_alias_maps = mysql:/etc/postfix/maps/alias.cf virtual_gid_maps = static:5000 virtual_mailbox_base = /var/spool/mail/virtual virtual_mailbox_domains = mysql:/etc/postfix/maps/domain.cf virtual_mailbox_limit = 0 virtual_mailbox_maps = mysql:/etc/postfix/maps/user.cf virtual_uid_maps = static:5000 saslfinger -c output: saslfinger - postfix Cyrus sasl configuration version: 1.0.4 mode: client-side SMTP AUTH -- basics -- Postfix: 2.9.3 System: Ubuntu 12.04.1 LTS \n \l -- smtp is linked to -- libsasl2.so.2 => /usr/lib/i386-linux-gnu/libsasl2.so.2 (0x00d3a000) -- active SMTP AUTH and TLS parameters for smtp -- relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes -- listing of /usr/lib/sasl2 -- total 28 drwxr-xr-x 2 root root 4096 okt 14 15:18 . drwxr-xr-x 72 root root 12288 okt 14 15:03 .. -rw-r--r-- 1 root root 1 maj 4 06:17 berkeley_db.txt -rw-r----- 1 root root 701 okt 14 15:18 saslpasswd.conf -rw-r----- 1 smmta smmsp 885 okt 14 15:18 Sendmail.conf -- listing of /etc/postfix/sasl -- total 12 drwxr-xr-x 2 root root 4096 okt 11 18:55 . drwxr-xr-x 4 root root 4096 okt 12 06:59 .. -rwx------ 1 root root 241 okt 11 18:55 smtpd.conf Cannot find the smtp_sasl_password_maps parameter in main.cf. Client-side SMTP AUTH cannot work without this parameter!
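
    Note that the saslfinger warning at the end concerns client-side AUTH (sending through a relayhost) and is unrelated to the inbound failures above. A hedged sketch of the server-side pieces worth checking (the saslauthd mechanism is an assumption; the post doesn't show the contents of /etc/postfix/sasl/smtpd.conf):

        # /etc/postfix/sasl/smtpd.conf - a typical saslauthd-based setup
        pwcheck_method: saslauthd
        mech_list: plain login

        # verify saslauthd itself accepts the credentials
        testsaslauthd -u username -p password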

    Read the article

  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMware vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Paths Down" state for the NFS shares. This first happened once or twice a day, but lately it occurs much more frequently, especially when Acronis Backup jobs are running. Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage Here is an example from the vmkernel.log: 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e] 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state. 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]! 2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3 2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3 2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e] 2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state. 2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]! 2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again 2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again As long as the issue occurred once or twice a day it really wasn't a problem, but now it has an impact on the VMs. The VMs get slow or even hang, resulting in resets through vCenter in the production environment. I searched the web extensively and asked in forums, but till now nobody has been able to help me. Based on blog posts and VMware KB articles I tried the following NFS settings: Net.TcpipHeapSize = 32 Net.TcpipHeapMax = 128 NFS.HeartbeatFrequency = 12 NFS.HeartbeatMaxFailures = 10 NFS.HeartbeatTimeout = 5 NFS.MaxQueueDepth = 64 Instead of NFS.MaxQueueDepth = 64 I have also tried other settings like NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1, unfortunately without any luck. It would be great if someone could help me with this issue; it is really annoying. Thanks in advance for all the help. [UPDATE] As I explained in the comment below, here is the network setup: On the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using an HP 1810 24-port switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs with dynamic LACP. The ESXis both have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each. One pair is connected to the regular LAN and the other one to VLAN 20. And here is a screenshot of the vSwitch: Switch configuration: Port configuration: On the lab setup it's a single Intel NIC on each side, without VLAN but with a different IP subnet.
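
    For reference, a hedged sketch of setting those advanced NFS values from the ESXi 5.1 shell (the option names assume the standard NFS namespace; they can also be set per host in the vSphere Client under Advanced Settings):

        esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
        esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
        esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
        esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64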

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (a TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS. It is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows: dn:cn=config add: olcTLSCipherSuite olcTLSCipherSuite: TLSV1+RSA:!NULL add: olcTLSCRLCheck olcTLSCRLCheck: none add: olcTLSVerifyClient olcTLSVerifyClient: never add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem and I import it using the following command line: ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif I have edited /etc/default/slapd so that it has the following services line: SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///" And every time I make a change, I restart slapd with /etc/init.d/slapd restart The following command line to test out the non-TLS connection works fine: ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*" But when I switch to ldaps using this command line: ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*" this is what I get: ldap_url_parse_ext(ldaps://mydomain.com) ldap_create ldap_url_parse_ext(ldaps://mydomain.com:636/??base) ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP mydomain.com:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 127.0.0.1:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 TLS: can't connect: A TLS packet with unexpected length was received.. ldap_err2string ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) Now if I check netstat -al I can see: tcp 0 0 *:www *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 *:https *:* LISTEN tcp 0 0 *:ldaps *:* LISTEN tcp 0 0 *:ldap *:* LISTEN I'm not sure if this is significant as well ... I suspect it is: openssl s_client -connect mydomain.com:636 -showcerts CONNECTED(00000003) 916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: I think I've made all my certificates etc. OK and here are the results of some checks: If I do this: certtool -e --infile /etc/ssl/certs/ldap_cacert.pem I get: Chain verification output: Verified. certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem gives "certtool: the last certificate is not self signed" but it otherwise seems OK? Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu should be easy and not require a degree in rocket science! Any ideas?
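
    Two hedged notes (assumptions, not from the post): first, ldapmodify normally expects a changetype in the LDIF, i.e. the file would start with "dn: cn=config", then "changetype: modify", then the add: blocks separated by "-" lines; if that was missing, the modify may have silently failed and slapd never saw the TLS attributes, which would match the handshake error. It is worth confirming what slapd actually loaded, and testing STARTTLS on 389, which bypasses ldaps entirely:

        # does cn=config actually contain the TLS attributes?
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base olcTLSCertificateFile
        # STARTTLS over the plain ldap port as an alternative test
        ldapsearch -ZZ -H ldap://mydomain.com -b dc=mydomain,dc=com "cn=roger*"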

    Read the article

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is very decent (around 2s to retrieve the HTML page and 5s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent, given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated. I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to get the HTML, so JS, CSS and images are all excluded. It still took 10 seconds. I have Zpanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quite fast too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800MB used of the 1GB physical memory available, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve the PHP performance, but it didn't have any effect. What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to be able to raise the issue with my provider, but without actual data from some diagnosis I am afraid he will just blame my application. UPDATE with sar output (every second) when I did an HTTP request: 02:31:29 CPU %user %nice %system %iowait %steal %idle 02:31:30 all 0.00 0.00 0.00 0.00 0.00 100.00 02:31:31 all 2.22 0.00 2.22 0.00 0.00 95.56 02:31:32 all 41.67 0.00 6.25 0.00 2.08 50.00 02:31:33 all 86.36 0.00 13.64 0.00 0.00 0.00 02:31:34 all 75.00 0.00 25.00 0.00 0.00 0.00 02:31:35 all 93.18 0.00 6.82 0.00 0.00 0.00 02:31:36 all 90.70 0.00 9.30 0.00 0.00 0.00 02:31:37 all 71.05 0.00 0.00 0.00 0.00 28.95 02:31:38 all 14.89 0.00 10.64 0.00 2.13 72.34 02:31:39 all 2.56 0.00 0.00 0.00 0.00 97.44 02:31:40 all 0.00 0.00 0.00 0.00 0.00 100.00 02:31:41 all 0.00 0.00 0.00 0.00 0.00 100.00 My suspicion that this is an I/O-related issue is also based on the fact that a caching plugin I use to reduce the number of database queries, by precompiling PHP pages, actually makes things worse instead of better - which points at file access rather than the database.
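
    A hedged way to back up the I/O suspicion with data before going to the provider (sysstat is clearly installed, given the sar output above):

        # crude sequential-write latency check; fdatasync forces real disk I/O
        dd if=/dev/zero of=/tmp/ddtest bs=64k count=16k conv=fdatasync && rm /tmp/ddtest

        # per-device utilisation and await while a slow page request runs
        iostat -x 1 10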

    Read the article

  • Abnormal hangs and restarts Ubuntu 8.04

    - by jai-ho
    Hi, I am using Ubuntu 8.04 LTS and seeing the following behaviors: The system hangs after a while and becomes completely unresponsive. The system sometimes restarts itself! Can you please help me identify the problem? Also, please mention where I should look for the possible cause. Thanks. EDIT: Got the following from the dmesg output (the system hung and had to be restarted): [ 15.452015] Driver 'sr' needs updating - please use bus_type methods [ 15.456882] Driver 'sd' needs updating - please use bus_type methods [ 15.457987] sr0: scsi3-mmc drive: 52x/52x writer cd/rw xa/form2 cdda tray [ 15.457993] Uniform CD-ROM driver Revision: 3.20 [ 15.458058] sr 0:0:1:0: Attached scsi CD-ROM sr0 [ 15.463028] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB) [ 15.463051] sd 1:0:0:0: [sda] Write Protect is off [ 15.463055] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 15.463083] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 15.463151] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB) [ 15.463167] sd 1:0:0:0: [sda] Write Protect is off [ 15.463171] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 15.463197] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 15.463202] sda:<5sr 0:0:1:0: Attached scsi generic sg0 type 5 [ 15.464634] sd 1:0:0:0: Attached scsi generic sg1 type 0 [ 15.470120] sda1 sda2 < sda5 [ 15.495536] sd 1:0:0:0: [sda] Attached SCSI disk [ 15.759549] Attempting manual resume [ 15.759554] swsusp: Resume From Partition 8:5 [ 15.759556] PM: Checking swsusp image. [ 15.759742] PM: Resume from disk failed. [ 15.779964] EXT3-fs: INFO: recovery required on readonly filesystem. [ 15.779970] EXT3-fs: write access will be enabled during recovery. [ 19.904204] kjournald starting. Commit interval 5 seconds [ 19.904235] EXT3-fs: sda1: orphan cleanup on readonly fs [ 19.904245] ext3_orphan_cleanup: deleting unreferenced inode 303260 [ 19.904304] ext3_orphan_cleanup: deleting unreferenced inode 303329 [ 19.932763] ext3_orphan_cleanup: deleting unreferenced inode 3801871 [ 19.932785] ext3_orphan_cleanup: deleting unreferenced inode 3801874 [ 19.932798] ext3_orphan_cleanup: deleting unreferenced inode 3801910 [ 19.951253] ext3_orphan_cleanup: deleting unreferenced inode 3801912 [ 19.951266] ext3_orphan_cleanup: deleting unreferenced inode 3801914 [ 19.951278] ext3_orphan_cleanup: deleting unreferenced inode 3959212 [ 19.951299] ext3_orphan_cleanup: deleting unreferenced inode 3959213 [ 19.960335] ext3_orphan_cleanup: deleting unreferenced inode 3959215 [ 19.963531] ext3_orphan_cleanup: deleting unreferenced inode 3801875 [ 19.963545] ext3_orphan_cleanup: deleting unreferenced inode 3663727 [ 19.963565] ext3_orphan_cleanup: deleting unreferenced inode 3663708 [ 19.963577] ext3_orphan_cleanup: deleting unreferenced inode 4072122 [ 19.963597] ext3_orphan_cleanup: deleting unreferenced inode 4072157 [ 19.968616] ext3_orphan_cleanup: deleting unreferenced inode 4072159 [ 19.970252] ext3_orphan_cleanup: deleting unreferenced inode 4072160 [ 19.970264] ext3_orphan_cleanup: deleting unreferenced inode 4072161 [ 19.992889] ext3_orphan_cleanup: deleting unreferenced inode 4072264 [ 19.992903] ext3_orphan_cleanup: deleting unreferenced inode 4072267 [ 19.999585] ext3_orphan_cleanup: deleting unreferenced inode 4072268 [ 20.008329] ext3_orphan_cleanup: deleting unreferenced inode 4072270 [ 20.008343] ext3_orphan_cleanup: deleting unreferenced inode 4072123 [ 20.008360] ext3_orphan_cleanup: deleting unreferenced inode 4072452 [ 20.008374] ext3_orphan_cleanup: deleting unreferenced inode 4072453 [ 20.008385] ext3_orphan_cleanup: deleting unreferenced inode 4072124 [ 20.008398] ext3_orphan_cleanup: deleting unreferenced inode 311574 [ 20.008413] ext3_orphan_cleanup: deleting unreferenced inode 967890 [ 20.008420] EXT3-fs: sda1: 28 orphan inodes deleted [ 20.008423] EXT3-fs: recovery complete. [ 20.082622] EXT3-fs: mounted filesystem with ordered data mode. [ 29.025379] input: PC Speaker as /devices/platform/pcspkr/input/input2 [ 29.187133] Linux agpgart interface v0.102 [ 29.225338] iTCO_vendor_support: vendor-support=0 [ 29.259662] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.02 (26-Jul-2007)
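
    Hedged pointers (not from the post): the dmesg above is just a normal boot after the crash; for hard hangs and spontaneous reboots, the last entries before the event are usually the most telling, and bad RAM or overheating are common culprits:

        # kernel messages leading up to the previous crash (rotated logs)
        less /var/log/kern.log.0
        grep -i -E 'panic|oops|mce|temperature' /var/log/kern.log*

        # rule out memory: boot the memtest86+ entry from the GRUB menu
        # and let it run for a few passes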

    Read the article

  • IPvsadm not equally balancing on wlc scheduler

    - by davidsmalley
    For some reason, ipvsadm does not seem to be equally balancing the connections between my real servers when using the wlc or lc schedulers. One real server gets absolutely hammered with requests while the others receive relatively few connections. My ldirectord.cf file looks like this: quiescent = yes autoreload = yes checktimeout = 10 checkinterval = 10 # *.site.com http virtual = 111.111.111.111:http real = 10.10.10.1:http ipip 10 real = 10.10.10.2:http ipip 10 real = 10.10.10.3:http ipip 10 real = 10.10.10.4:http ipip 10 real = 10.10.10.5:http ipip 10 scheduler = lc protocol = tcp service = http checktype = negotiate request = "/lb" receive = "Up and running" virtualhost = "site.com" fallback = 127.0.0.1:http The weird thing that I think may be causing the problem (but I'm really not sure) is that ipvsadm doesn't seem to be tracking active connections properly; they all appear as inactive connections: IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 111.111.111.111:http lc -> 10.10.10.1:http Tunnel 10 0 10 -> 10.10.10.2:http Tunnel 10 0 18 -> 10.10.10.3:http Tunnel 10 0 3 -> 10.10.10.4:http Tunnel 10 0 10 -> 10.10.10.5:http Tunnel 10 0 5 If I do ipvsadm -Lnc then I see lots of connections, but only ever in ESTABLISHED & FIN_WAIT states. I was using ldirectord previously on a Gentoo-based load balancer and the activeconn used to be accurate; since moving to Ubuntu 10.04 LTS something seems to be different. # ipvsadm -v ipvsadm v1.25 2008/5/15 (compiled with popt and IPVS v1.2.1) So, is ipvsadm not tracking active connections properly, thus making load balancing work incorrectly, and if so, how do I get it to work properly again? Edit: It gets weirder: if I cat /proc/net/ip_vs, it looks like the correct activeconns are there: IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP B86A9732:0050 rr -> 0AB42453:0050 Tunnel 10 1 24 -> 0AB4321D:0050 Tunnel 10 0 23 -> 0AB426B2:0050 Tunnel 10 2 25 -> 0AB4244C:0050 Tunnel 10 2 22 -> 0AB42024:0050 Tunnel 10 2 23
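
    One hedged observation from the two outputs above (not a confirmed diagnosis): ipvsadm reports the scheduler as lc with zero ActiveConn, while /proc/net/ip_vs shows rr with sensible ActiveConn values, so the userspace tool and the kernel appear to disagree - which would fit an ipvsadm build that doesn't match the running kernel's IPVS. Worth comparing side by side:

        # userspace view
        ipvsadm -L -n
        # kernel's own view
        cat /proc/net/ip_vs
        # kernel IPVS module banner
        dmesg | grep -i ipvs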

    Read the article

  • OpenSSH does not accept public key?

    - by Bob
    I've been trying to solve this for a while, but I'm admittedly quite stumped. I just started up a new server and was setting up OpenSSH to use key-based SSH logins, but I've run into quite a dilemma. All the guides are relatively similar, and I was following them closely (despite having done this once before). I triple checked my work to see if I would notice some obvious screw up - but nothing is apparent. As far as I can tell, I haven't done anything wrong (and I've checked very closely). If it's any help, on my end I'm using Cygwin and the server is running Ubuntu 12.04.1 LTS. Anyways, here is the output (I've removed/censored some parts for privacy (primarily anything with my name, website, or its IP address), but I can assure you that nothing is wrong there): $ ssh user@host -v OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Connecting to host [ipaddress] port 22. debug1: Connection established. debug1: identity file /home/user/.ssh/id_rsa type 1 debug1: identity file /home/user/.ssh/id_rsa-cert type -1 debug1: identity file /home/user/.ssh/id_dsa type -1 debug1: identity file /home/user/.ssh/id_dsa-cert type -1 debug1: identity file /home/user/.ssh/id_ecdsa type -1 debug1: identity file /home/user/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 24:68:c3:d8:13:f8:61:94:f2:95:34:d1:e2:6d:e7:d7 debug1: Host 'host' is known and matches the ECDSA host key. debug1: Found key in /home/user/.ssh/known_hosts:2 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/user/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/user/.ssh/id_dsa debug1: Trying private key: /home/user/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). What can I do to resolve my problem?
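
    A hedged server-side checklist (none of this is shown in the post): sshd silently ignores authorized_keys when permissions are too open, and the server's own log usually says why the key was rejected:

        # on the server, as the target user ('user' is a placeholder)
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chown -R user:user ~/.ssh

        # then watch the server log during a failed attempt
        tail -f /var/log/auth.log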

    Read the article
