Search Results

Search found 35296 results on 1412 pages for '12 10'.


  • runit - unable to open supervise/ok: file does not exist

    - by Alexandr Kurilin
    I'm trying to figure out why runit will not start the managed applications or give me their status. I'm running Ubuntu 12.04. I created /service and /etc/sv/myapp (with a run script, a config file, and a log folder with its own run script inside it), then created a symlink from /service/ to /etc/sv/myapp. When I run sudo sv s /service/* I get the following error message: warning: /service/myapp: unable to open supervise/ok: file does not exist. Some of my Googling suggested that restarting the svscan service might fix this, but killing it and running svscanboot didn't make a difference. Any suggestions? Am I missing a step here somewhere?
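    A hedged first check: on stock Ubuntu 12.04 the runit package starts runsvdir against /etc/service rather than a hand-made /service directory, and supervise/ok only appears once runsv is actually supervising the service. Assuming the default package layout:

        # confirm runsvdir is running and note which directory it scans
        ps aux | grep [r]unsvdir
        # link the service where the packaged runsvdir looks for it
        sudo ln -s /etc/sv/myapp /etc/service/myapp
        sudo sv status myapp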

    Read the article

  • With Ubuntu Linux (10+), how do I remote into my machine from Windows

    - by Berlin Brown
    I tried to remote into my Ubuntu machine. I enabled the setting on Ubuntu and that side seems to work, but I get a connection timeout when I use RealVNC on the Windows box. I believe it is a firewall issue. I disabled the firewall for that application on Windows, but I don't know how to check whether the firewall as a whole is enabled on the Windows machine. I am on a local network with a router. Ideally, I would want to block that remote-control port at the Internet/router level but enable it on the Windows box and the Ubuntu box. How do I check those settings?
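    On the Ubuntu side, a hedged check that the VNC server is reachable, assuming the default VNC port 5900 and the ufw front end to iptables:

        sudo ufw status verbose            # is a firewall active on Ubuntu?
        sudo netstat -tlnp | grep 5900     # is the VNC server (e.g. vino) listening?
        sudo ufw allow 5900/tcp            # open the port if ufw is blocking it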

    Read the article

  • Failed upgrade of PHP on Ubuntu 12.04, error: Sub-process /usr/bin/dpkg returned an error code (1)

    - by DanielAttard
    I just tried to upgrade my version of PHP on Ubuntu 12.04 and now I have messed it up. First I did this:

        sudo add-apt-repository ppa:ondrej/php5-oldstable

    Then I did this:

        sudo apt-get update

    Then finally I did this:

        sudo apt-get install php5

    And now I am getting an error message: Sub-process /usr/bin/dpkg returned an error code (1). What have I done wrong? How can I fix this problem? Thanks. Here are the errors received:

        Do you want to continue [Y/n]? Y
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up libapache2-mod-php5 (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libapache2-mod-php5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Setting up php5-cli (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing php5-cli (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-curl:
         php5-curl depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-curl (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        [the same dependency error repeats verbatim for php5-gd, php5-mcrypt and php5-mysql]
        dpkg: dependency problems prevent configuration of php5:
         php5 depends on libapache2-mod-php5 (>= 5.4.28-1+deb.sury.org~precise+1) | libapache2-mod-php5filter (>= 5.4.28-1+deb.sury.org~precise+1) | php5-cgi (>= 5.4.28-1+deb.sury.org~precise+1) | php5-fpm (>= 5.4.28-1+deb.sury.org~precise+1); however:
          Package libapache2-mod-php5 is not configured yet.
          Package libapache2-mod-php5filter is not installed.
          Package php5-cgi is not installed.
          Package php5-fpm is not installed.
        dpkg: error processing php5 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         libapache2-mod-php5 php5-cli php5-curl php5-gd php5-mcrypt php5-mysql php5
        E: Sub-process /usr/bin/dpkg returned an error code (1)
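    A hedged reading of the log: the real failure is the locked debconf database (/var/cache/debconf/config.dat); every dependency error below it is fallout from the two packages that could not be configured. Assuming no installer is still legitimately running, a typical recovery looks like:

        sudo fuser -v /var/cache/debconf/config.dat   # identify the process holding the lock
        # end that process if it is stale, then finish the interrupted configuration:
        sudo dpkg --configure -a
        sudo apt-get install -f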

    Read the article

  • Wireless networking on Gnome on Ubuntu 9 / 10

    - by WaveyDavey
    So here's my problem: I have some netbooks (ASUS Eee and Acer Aspire Ones) that I've been tasked to set up as kiosk machines, locked up tight for normal users. I am a command-line server man, so this GNOME malarkey is all a bit new to me. I found a lovely 9.04 kiosk live CD that installs and runs exactly as I want it to, but I can't get the wireless working. So I dropped on a full 10.04 distro, and wireless works straight out of the box (so the hardware is good): all I needed to do was right-click on the network connection icon, enter my SSID and password (WPA/WPA2), and away it went, perfect. Further investigation on the 10.04 distro shows that /etc/network/interfaces is virtually empty (just auto lo and iface lo inet loopback in it), even after I have set up the wireless through the GNOME taskbar applet (is that the right word?). So where does GNOME/Ubuntu store the network settings to bring the blasted wireless connection up, and what do I need to do on the kiosk version to get wireless running?
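    A hedged pointer: the taskbar applet is NetworkManager, which keeps its settings outside /etc/network/interfaces. On 10.04 (NetworkManager 0.8) the likely locations are:

        ls /etc/NetworkManager/system-connections/    # system-wide connections
        ls ~/.gconf/system/networking/connections/    # per-user connections, stored in GConf

    On the 9.04 kiosk image, an alternative is to bypass NetworkManager entirely and define the WPA network in /etc/network/interfaces with wpa-ssid/wpa-psk lines, which works without any desktop applet.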

    Read the article

  • Ubuntu 12.04 LDAP SSL self-signed cert not accepted

    - by MaddHacker
    I'm working with Ubuntu 12.04, using OpenLDAP server. I've followed the instructions on the Ubuntu help pages and can happily connect without security. To test my connection I'm using ldapsearch; the command looks like:

        ldapsearch -xv -H ldap://ldap.[my host].local -b dc=[my domain],dc=local -d8 -ZZ

    I've also used:

        ldapsearch -xv -H ldaps://ldap.[my host].local -b dc=[my domain],dc=local -d8

    As far as I can tell, I've set up my certificate correctly, but no matter what I try, I can't seem to get ldapsearch to accept my self-signed certificate. So far, I've tried updating my /etc/ldap/ldap.conf file to look like:

        BASE        dc=[my domain],dc=local
        URI         ldaps://ldap.[my host].local
        TLS_CACERT  /etc/ssl/certs/cacert.crt
        TLS_REQCERT allow

    updating my /etc/ldap.conf file to look like:

        base dc=[my domain],dc=local
        uri ldapi:///ldap.[my host].local
        uri ldaps:///ldap.[my host].local
        ldap_version 3
        ssl start_tls
        ssl on
        tls_checkpeer no
        TLS_REQCERT allow

    updating my /etc/default/slapd to include:

        SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    and several hours of Googling, most of which resulted in adding the TLS_REQCERT allow line. The exact error I'm seeing is:

        ldap_initialize( ldap://ldap.[my host].local )
        request done: ld 0x20038710 msgid 1
        TLS certificate verification: Error, self signed certificate in certificate chain
        TLS: can't connect.
        ldap_start_tls: Connect error (-11)
            additional info: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    After several hours of this, I was hoping someone else has seen this issue and/or knows how to fix it. Please do let me know if I should add more information or if you need further data.
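    A hedged suggestion: the error says verification fails because the chain ends in a self-signed certificate, which usually means TLS_CACERT does not point at the CA that actually signed the server certificate. For a self-signed server cert, the trust anchor is the certificate itself, and it must be PEM-encoded. Something to try in /etc/ldap/ldap.conf (the path is a placeholder):

        # point the client at the self-signed cert (PEM format) as its trust anchor
        TLS_CACERT /etc/ssl/certs/[your server cert].pem
        # or, for testing only, skip verification entirely:
        TLS_REQCERT never

    Note that TLS_REQCERT is a client-side libldap option; it belongs in /etc/ldap/ldap.conf (or ~/.ldaprc), not in the nss/pam-style /etc/ldap.conf.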

    Read the article

  • Can’t start MySQL on Ubuntu 12.04 after restoring from innobackupex

    - by RAH
    I can’t start MySQL on Ubuntu 12.04 after restoring a backup with innobackupex. Before I tried to restore the db from backup, I moved the datadir and hit the same problem; with help from Google I fixed it and got MySQL started. Ready to set up my new slave, I restored the backup via innobackupex --copy-back /db/mysql, and now I can’t start MySQL. I am sure of the following: in my.cnf, datadir = /db/mysql; the new datadir is chown mysql:mysql; and /etc/apparmor.d/usr.sbin.mysqld contains:

        #/var/lib/mysql/ r,
        #/var/lib/mysql/** rwk,
        /db/mysql r,
        /db/mysql** rwk,

    and:

        /var/run/mysqld/mysqld.pid w,
        /var/run/mysqld/mysqld.sock w,
        /run/mysqld/mysqld.pid w,
        /run/mysqld/mysqld.sock w,

    /var/log/syslog gives me the following info: http://pastebin.com/1TQGsaBH What am I missing? Thanks.
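    Two hedged things to check, assuming Percona XtraBackup semantics and stock AppArmor syntax: the backup must be prepared with --apply-log before it is copied back, and the AppArmor globs above are missing the trailing slash that the commented /var/lib/mysql lines had (i.e. /db/mysql/ r, and /db/mysql/** rwk,). A sketch, with the backup path as a placeholder:

        sudo service mysql stop
        innobackupex --apply-log /path/to/backup    # prepare the backup first
        innobackupex --copy-back /path/to/backup    # honours datadir from my.cnf
        sudo chown -R mysql:mysql /db/mysql
        sudo service apparmor reload                # pick up the corrected profile
        sudo service mysql start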

    Read the article

  • 'skb rides the rocket' on Xen VM

    - by Kye
    I've just set up an Ubuntu 13.10 server as a VM on my Ubuntu/Xen server, and I'm getting these weird lines in my syslog:

        Nov 12 10:26:32 human kernel: [130782.315333] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:32 human kernel: [130782.362405] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:32 human kernel: [130782.408458] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:32 human kernel: [130782.490260] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:32 human kernel: [130782.541931] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:35 human kernel: [130785.226635] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:35 human kernel: [130785.261026] xennet: skb rides the rocket: 21 slots
        Nov 12 10:26:35 human kernel: [130785.469306] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:36 human kernel: [130786.552730] xennet: skb rides the rocket: 21 slots
        Nov 12 10:26:38 human kernel: [130788.212747] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:38 human kernel: [130788.257544] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:38 human kernel: [130788.903841] xennet: skb rides the rocket: 19 slots

    I'm unsure what they mean, and Google has nothing meaningful. Any help is appreciated.
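    A hedged explanation: the xennet message is logged when a transmit skb needs more slots on the Xen network ring than the frontend allows, typically for large offloaded segments, and the packet is dropped rather than sent. The workaround usually cited is to disable segmentation offloads on the guest's interface:

        # eth0 is an assumption; substitute the guest's actual interface
        sudo ethtool -K eth0 tso off gso off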

    Read the article

  • PowerPoint '10 avoid animation completion on click & advance slide or start new one

    - by ScottS
    Scenario: I have PowerPoint 2010. On the "Transitions" tab, the "Advance Slide On Mouse Click" check box is checked, and I have a long, slow, timed, non-repeating animation working in the background of the slide. I click to advance the slide before the animation is finished, but instead of advancing the slide, the animation jumps to its completed state, forcing a second click to actually advance the slide.

    Additionally: if I have other animations on the slide that are initiated by a click, the long animation also jumps to its finished state before the new animation starts.

    Desired behavior: on click, I want the slide to advance, or the next on-click animation to start, whether the long animation is done or not, and without the long animation first "completing" itself. In the case of another animation, I simply want the long animation to continue while the new animation also runs.

    Ultimate question: is there a way to either (1) set an option somewhere so the animation does not complete on click and simply continues animating when a new animation starts or the slide advances, or (2) create a VBA script that produces the desired behavior for the long animation?

    Read the article

  • Configure mod_wsgi WSGIScriptAlias with mod_rewrite

    - by Lazik
    I want to redirect ex.com to www.ex.com, but I still want www.ex.com/ to point at my app.wsgi without it showing up in the URL. When I use the conf below and go to ex.com, I get a 404 error saying it can't find www.ex.com/app.wsgi/. If I change

        WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

    to

        WSGIScriptAlias /app.wsgi /var/www/vhosts/ex/app.wsgi

    then all my URLs look like www.ex.com/app.wsgi/blabla/... Is it possible to use some kind of rule to redirect ex.com to www.ex.com while still keeping / as the app.wsgi root? My conf file:

        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi
            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
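    A hedged alternative that avoids mixing mod_rewrite and mod_wsgi in one vhost: give the bare domain its own VirtualHost that only issues the canonical redirect (plain mod_alias), and keep the WSGI vhost rewrite-free:

        <VirtualHost *:80>
            ServerName ex.com
            Redirect permanent / http://www.ex.com/
        </VirtualHost>

    With that in place, the www vhost keeps WSGIScriptAlias / as the application root and app.wsgi never appears in the URL.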

    Read the article

  • Ubuntu 12.04 is unresponsive

    - by Andreas Schiemann
    I've been using Ubuntu 12.04 for 3 weeks. Today it became unresponsive and I lost work I had open: the cursor showed the hand, as if I had selected something, but in fact I never did; I was just reading an article online. I finally powered down the machine and then turned it back on. In Windows there is the Task Manager, where one can see what's taking up all your CPU or RAM. Is there such a program in Ubuntu 12.04, and if not, can I install an app that serves the same purpose?
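    A hedged pointer to the usual equivalents on 12.04:

        gnome-system-monitor               # graphical; installed by default, also in the Dash as "System Monitor"
        top                                # terminal-based, always available
        sudo apt-get install htop && htop  # a friendlier terminal-based monitor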

    Read the article

  • Symmetrix gatekeepers on Solaris 10

    - by Milner
    I have some Solaris machines that are connected to an EMC Symmetrix for SAN storage. Apparently the Symm has a gatekeeper device that is used with the Symmetrix CLI. We don't need the CLI, but these gatekeeper devices constantly fill /var/adm/messages and the like with corrupt-label errors. Is there anything I can do (short of deleting the devices on every machine start) to get rid of them? Or should I just ask our SAN guy for the CLI installer? These things are getting annoying, and the devfsadmd daemon keeps rediscovering them on boot.

    Read the article

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide in the Ubuntu docs (https://help.ubuntu.com/community/AWStats). I set it up according to the instructions, and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to open http://server-ip-address:8080/awstats/awstats.pl, I get:

        Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats.
        Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong.
        Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/saad/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/saad/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            Alias /awstatsclasses "/usr/share/awstats/lib/"
            Alias /awstats-icon "/usr/share/awstats/icon/"
            Alias /awstatscss "/usr/share/doc/awstats/examples/css"
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

        LogFile="/var/log/apache2/access.log"
        SiteDomain="server-name.noip.org"
        HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
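    A hedged diagnosis: awstats.pl decides which config to load from the URL; without a ?config= parameter it looks for a file named after the requesting host (awstats.<host>.conf) before falling back, which matches the "SiteDomain parameter not defined" complaint. Two things to try:

        # name a config after the site so the script can find it
        sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.server-name.noip.org.conf
        # or pass the config explicitly in the URL:
        # http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org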

    Read the article

  • Set iDRAC IP Address on Ubuntu 12.04 without Reboot?

    - by BT643
    We have a Dell PowerEdge R720XD server and want to set up iDRAC, but this is a production server, so we need to do it without a reboot if possible. We just need to set the IP address on the iDRAC, but the server does not have a front LCD panel, so we cannot use that. The iDRAC will currently have the default 192.168.0.120 IP address, but we are not on that IP range so cannot use it; we need to change it to 192.168.5.x. I've seen racadm mentioned, but before I start messing around installing it on Ubuntu: is this what we need, and will we be able to set the IP address on the DRAC with it? Thanks
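    A hedged confirmation: local racadm (part of Dell OpenManage, which Dell publishes for Ubuntu) can set the iDRAC NIC from the running OS without a reboot. A sketch, with the address, netmask and gateway as placeholders for your 192.168.5.x scheme:

        sudo racadm setniccfg -s 192.168.5.10 255.255.255.0 192.168.5.1
        sudo racadm getniccfg    # verify the new static configuration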

    Read the article

  • Using virt-install to mount multiple cdrom drives/images

    - by Dana the Sane
    I would like to create a Windows XP guest from the Windows XP upgrade CD I have, along with one of the few full-version CDs I have around. However, when I reach the stage in the installer where I am prompted to insert a full-version CD, the installer can't find it, i.e.:

        Setup could not read the CD you inserted, or the CD is not a valid Windows CD.

    Is there a workaround for this? My Googling didn't uncover anything. I've tried various combinations of mounting .iso files and specifying disks, such as:

        sudo virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 \
            --disk ./vm/winxp_sp1.iso,device=cdrom --disk ./vm/windows.qcow2,size=12 \
            --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2 \
            -c /dev/cdrom --check-cpu

    If I try to specify multiple cdrom drives, I receive an error:

        virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 \
            --disk ./vm/winxp_sp1.iso,device=cdrom --disk /dev/cdrom,device=cdrom \
            --disk ./vm/windows.qcow2,size=12 --vnc --noautoconsole \
            --os-type windows --os-variant winxp --vcpus 2 --check-cpu
        Starting install...
        ERROR    IDE CDROM must use 'hdc', but target in use.
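    A hedged workaround: the error suggests QEMU's IDE bus allows only one CD-ROM target (hdc), which is why the second --disk ...,device=cdrom fails. Instead of attaching two drives, install with a single CD device and swap its media when the installer asks for the full-version disc:

        # while the guest is running, point the existing CD device at the other image
        virsh attach-disk xpsp1 /path/to/full_version.iso hdc --type cdrom --mode readonly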

    Read the article

  • Install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought an HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode, and I made a BIOS-GRUB partition so the install didn't fail; that is what it is for, right? Now I want to upgrade to UEFI mode. How would I install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca? Or is it impossible? My laptop, despite its young age, may not be UEFI 2.0+ capable. If it isn't, how can I install a software UEFI (e.g. a DUET such as the one by TianoCore)? Or is that also impossible? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530

    My laptop should have a UEFI given this link from HP: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. From that link I draw a quote: "That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment." My laptop had Windows 7 Home Premium pre-installed.

    OK. Following the comments so far (NOTE: I am trying to do this on an external drive so I can see if it works), I have: partitioned the drive as GPT using GParted; created a 200 MB partition at the beginning of the drive with a FAT32 file system; given the 200 MB partition a label of "EFI"; and set the boot flag on the 200 MB partition. What should I do next to install Ubuntu 12.04? Given the link https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol — in my first read-through (just to see if I will understand everything before I start) I get to step 2.3, "Install GRUB2 in (U)EFI systems". Its first line is "Boot into Linux (any live ISO) preferably in UEFI mode." Um... how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
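    On that last question, a hedged check from inside a running live session: the kernel exposes EFI variables only when it was booted through UEFI, so

        ls /sys/firmware/efi && echo "booted in UEFI mode" || echo "booted in BIOS/CSM mode"

    As for choosing the mode: on most firmware of that era the one-time boot menu lists the same USB/CD device twice, once plain and once prefixed with "UEFI"; picking the UEFI entry boots the live medium in UEFI mode.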

    Read the article

  • Ubuntu web server cluster checks Ubuntu repository for script updates with cron

    - by StuartTheY
    I have a cluster of Ubuntu 12.04 web servers running a LAMP stack, all connected to a load balancer on Amazon Web Services. What I want is a dedicated Ubuntu server on which I can update the PHP files, with the other web servers checking via cron for updated files to fetch from that repository. They don't have to use cron, but that was the only thing I could think of, unless there is a way to have the repository tell them it has updated files; and then there is the question of how to transfer those files. Also, is there a way for a server to check for updated files when it boots? I am going to be using auto scaling on AWS, so when the load increases and another server gets created, it needs to download the updated files from the repository at launch. I'm not sure how to transfer files from server to server.
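    A hedged sketch of the cron approach with rsync over SSH (the hostname, paths and deploy user are placeholders); the @reboot line covers the auto-scaling case, since cron runs it once at boot:

        # crontab on each web server
        */5 * * * * rsync -az --delete deploy@source-server:/var/www/app/ /var/www/app/
        @reboot     rsync -az --delete deploy@source-server:/var/www/app/ /var/www/app/

    This assumes key-based SSH from the web servers to the source server; with auto scaling the key has to be baked into the AMI or delivered via instance user data.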

    Read the article

  • Problems with freetype on OSX 10.7.4

    - by eythor
    I'm trying to install MPlayer with OSD using Homebrew. I've added both --enable-menu and --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config to the brew recipe:

        ==> Downloading http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.1.tar.xz
        Already downloaded: /Library/Caches/Homebrew/mplayer-1.1.tar.xz
        xz -dc "/Library/Caches/Homebrew/mplayer-1.1.tar.xz" | /usr/bin/tar xf -
        ==> ./configure --prefix=/usr/local/Cellar/mplayer/1.1 --cc=cc --host-cc=cc --disable-cdparanoia --disable-libopenjpeg --enable-menu --disable-x11 --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config
        Checking for cc version ... clang 4.2.1 (experimental support only)
        Checking for working compiler ... yes
        Detected operating system: Darwin
        Detected host architecture: x86_64
        Checking for cross compilation ... no
        Checking for host cc ... cc
        Checking for CPU vendor ... GenuineIntel (6:15:10)
        Checking for CPU type ... Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz

    For freetype-config I've tried three separate paths: /usr/X11R6/bin/freetype-config, /usr/X11/bin/freetype-config and the one in Cellar. Checking for freetype always fails:

        Checking for freetype >= 2.0.9 ... no
        Checking for fontconfig ... no (FreeType support needed)

    Although freetype itself seems to be installed:

        mufasa:bin eythor$ freetype-config --version
        15.0.9
        mufasa:bin eythor$ freetype-config --ftversion
        2.4.10
        mufasa:bin eythor$ freetype-config --libs
        -L/usr/local/Cellar/freetype/2.4.10/lib -lfreetype -lz -lbz2
        mufasa:bin eythor$ freetype-config --cflags
        -I/usr/local/Cellar/freetype/2.4.10/include/freetype2 -I/usr/local/Cellar/freetype/2.4.10/include

    I'm not sure what to try next or how to figure out why freetype isn't recognized. Can anyone point me in a sensible direction?
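    A hedged direction: MPlayer's configure writes the actual test failure to its log file, which usually names the missing header or linker flag, and builds often locate Homebrew libraries through the environment rather than a --with flag. Two things worth trying, with the log filename an assumption:

        # see why the freetype test really failed (the log may be config.log or configure.log)
        grep -A5 -i freetype config.log
        # let the build locate Homebrew's freetype itself
        export PKG_CONFIG_PATH=/usr/local/Cellar/freetype/2.4.10/lib/pkgconfig:$PKG_CONFIG_PATH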

    Read the article

  • How Do I Enable My Ubuntu Server To Host Various SSL-Enabled Websites?

    - by Andy Ibanez
    Actually, I have looked around for a few hours now, but I can't get this to work. The main problem I'm having is that only one out of two sites works. I have my website, which will mostly be used for an app, called atajosapp.com. atajosapp.com will have three main sites:

        www.atajosapp.com   <- Homepage for the app.
        auth.atajosapp.com  <- Login endpoint for my API (needs SSL)
        api.atajosapp.com   <- Main endpoint for my API (needs SSL).

    If you attempt to access api.atajosapp.com, it works. It will throw a 403 error and a JSON output, but that's fully intentional. If you try to access auth.atajosapp.com, however, the site simply doesn't load. Chrome complains with:

        The webpage at https://auth.atajosapp.com/ might be temporarily down or it may have moved permanently to a new web address.
        Error code: ERR_TUNNEL_CONNECTION_FAILED

    But the website IS there. If you try to access www.atajosapp.com or any other HTTP site, it connects fine. It just doesn't like dealing with more than one HTTPS website, it seems. The VirtualHost for api.atajosapp.com looks like this:

        <VirtualHost *:443>
            DocumentRoot /var/www/api.atajosapp.com
            ServerName api.atajosapp.com
            SSLEngine on
            SSLCertificateFile /certificates/STAR_atajosapp_com.crt
            SSLCertificateKeyFile /certificates/star_atajosapp_com.key
            SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
        </VirtualHost>

    auth.atajosapp.com looks very similar:

        <VirtualHost *:443>
            DocumentRoot /var/www/auth.atajosapp.com
            ServerName auth.atajosapp.com
            SSLEngine on
            SSLCertificateFile /certificates/STAR_atajosapp_com.crt
            SSLCertificateKeyFile /certificates/star_atajosapp_com.key
            SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
        </VirtualHost>

    Now I have found many websites that talk about possible solutions. At first, I was getting a message like this:

        _default_ VirtualHost overlap on port 443, the first has precedence

    But after googling for hours, I managed to solve it by editing both apache2.conf and ports.conf. This is the last thing I added to ports.conf:

        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            NameVirtualHost *:443
            Listen 443
        </IfModule>

    Still, right now only api.atajosapp.com and www.atajosapp.com are working; I still can't access auth.atajosapp.com. When I check the error log, I see this:

        Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I don't know what else to do to make both sites work fine on this. I purchased a wildcard SSL certificate from Comodo that supposedly secures *.atajosapp.com, so after hours of trying and googling, I don't know what's wrong anymore. Any help will be really appreciated. EDIT: I just ran the apachectl -t -D DUMP_VHOSTS command and this is the output. Can't make much sense of it:

        root@atajosapp:/# apachectl -t -D DUMP_VHOSTS
        apache2: Could not reliably determine the server's fully qualified domain name, using atajosapp.com for ServerName
        [Thu Nov 07 02:01:24 2013] [warn] NameVirtualHost *:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443                  is a NameVirtualHost
                 default server api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
                 port 443 namevhost api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
                 port 443 namevhost auth.atajosapp.com (/etc/apache2/sites-enabled/auth.atajosapp.com:1)
        *:80                   is a NameVirtualHost
                 default server atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
                 port 80 namevhost atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
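    Two hedged observations from the output above: ports.conf now declares NameVirtualHost *:443 twice, which explains the "NameVirtualHost *:443 has no VirtualHosts" warning (the duplicate declaration has nothing bound to it), and the DUMP_VHOSTS listing actually shows both SSL vhosts registered, so Apache's side may be fine. ERR_TUNNEL_CONNECTION_FAILED in Chrome typically points at a proxy failing the CONNECT rather than at the web server. A sketch of the cleanup and a proxy-free test:

        # ports.conf: declare the name-based SSL host exactly once
        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            Listen 443
        </IfModule>

        # test from a shell, bypassing any browser proxy (-k tolerates the chain for the test)
        curl -vk https://auth.atajosapp.com/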

    Read the article

  • Backup server (OS X) like Time Machine to back up remote Ubuntu 12.04 server [on hold]

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that's in a datacenter. Locally we have an OS X server with some external drives attached to it; this handles Time Machine for the local workstations. What I'd like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup of it. The one problem is that if my OS X server crashes, I can't just put the system back, because the backup would contain not only the OS X server but also the Ubuntu server from the datacenter. I've used Back In Time on Ubuntu to do the exact same thing, but that was from Ubuntu (local) to Ubuntu (datacenter). So does anybody have a solution? Here are my requirements: set time intervals for backups (needs to be backed up nightly); set retention intervals (hourly, weekly, monthly, etc.); able to back up all computers and servers, including the offsite one, to the local OS X (10.9) server; manageable from that one location, logging in with ssh to do rsync or rsnapshot; has a GUI (OS X); acts like Time Machine, backing up only the files that have changed; and can restore to a point back in time.
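    A hedged rsnapshot sketch for the pull-from-datacenter part (it is command-line rather than GUI, so it only covers some of the requirements above; paths and the hostname are placeholders, rsnapshot.conf fields must be TAB-separated, and older rsnapshot versions spell retain as interval):

        # /etc/rsnapshot.conf excerpt on the OS X server
        snapshot_root   /Volumes/Backups/rsnapshot/
        retain          daily   7
        retain          weekly  4
        retain          monthly 6
        backup          root@datacenter-host:/    ubuntu-server/

    rsnapshot keeps rotated, hard-linked snapshots over SSH, so like Time Machine it stores only changed files per run and lets you restore from any retained point.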

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives a reboot. My main motivation was differentiating between instances at the prompt, using the \h escape in PS1. EDIT: I also changed permissions on my home directory; I made my home directory group-writeable. END EDIT. Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
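    A hedged diagnosis: this matches the EDIT rather than the hostname change. sshd's StrictModes setting (on by default) refuses public-key authentication when the user's home directory is group- or world-writable. Something like the following, run from a still-open session or after attaching the volume to another instance:

        chmod g-w /home/ubuntu                      # "ubuntu" is an assumption; use the login user
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys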

    Read the article

  • Ubuntu 12.04 on VMware Player loses network configuration

    - by d4ryl3
    I've been having this issue for two weeks now with my VMware Player-hosted Ubuntu 12.04, which I only use for my LAMP stack. I'd had no issues with it until about two weeks ago; now it almost always (at least once per day) loses its network configuration. On boot it shows:

        Waiting for network configuration...
        Waiting up to 60 more seconds for network configuration...
        Booting system without full network configuration...

    Then when I do ifconfig -a it doesn't show an IP address, and the guest can't get online. The only resolutions I've found so far are either to reinstall VMware Tools or to run the VMware Player installer and choose Repair. This is frustrating because even when the issue is resolved by one of those steps, the IP address changes, and then I have to update the remote configuration of my IDE (NetBeans) and my database manager. What could possibly cause this? Please help. Thank you. Additional details: I'm using a laptop with Windows 7, connected to the office WiFi, which is unrestricted as far as I know. Thanks again.
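    This doesn't address the root cause, but a hedged mitigation for the changing address: pin a static IP inside the guest so NetBeans and the database manager keep a stable target (the addresses below are placeholders for your VMware network):

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.72.50
            netmask 255.255.255.0
            gateway 192.168.72.2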

    Read the article

  • Ubuntu not connecting to network in Hyper-V

    - by soandos
    I am unable to connect the Ubuntu guest (both 12.10 and 12.04) to the internet via Hyper-V. Here is what I have done so far (with much thanks due to @Kronos's blog post on the topic): I created a switch in the switch manager with the connection set to external and selected my WiFi card (Intel Centrino Ultimate-N 6300 AGN); if it matters, the Microsoft Filtering Platform is checked under extensions. I added this switch to my Ubuntu guest. I also tried a different wireless card (Atheros 9285): same issue. Connecting through my wired card works just fine (assuming I select that card, and I am wired in, of course). Making it a legacy network adapter does not fix the issue. Ubuntu can see this connection but is unable to connect to it. What follows is what I attempted in order to get Ubuntu to connect: started and restarted the network manager; restarted the machine; verified that it could in fact see the adapter (this resulted in "device not ready" a few times). How can I get this to work properly?
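    A hedged check inside the guest: the synthetic (non-legacy) Hyper-V NIC depends on the hv_netvsc driver, which the 12.04/12.10 kernels ship but may not have loaded:

        lsmod | grep hv_netvsc      # is the synthetic-NIC driver loaded?
        sudo modprobe hv_netvsc     # load it if not
        ip link                     # the synthetic adapter should now be listed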

    Read the article

  • How to mount /tmp in /mnt on EC2?

    - by Claudio Poli
    I was wondering what the best way is to mount the /tmp endpoint on the ephemeral storage /mnt on an EC2 instance, and to give the ubuntu user default write permissions. Some suggest editing /etc/rc.local this way:

        mkdir -p /mnt/tmp && mount --bind -o nobootwait /mnt/tmp /tmp

    However, that doesn't work for me (the files differ). I tried editing the default fstab entry:

        /dev/xvdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0  2

    replacing /mnt with /tmp and giving it a umask=0777; however, it doesn't work because of cloudconfig. I'm using Ubuntu 12.04. Thanks.
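    A hedged sketch of the fstab route, mounting the ephemeral disk straight onto /tmp and then restoring the sticky world-writable mode that /tmp requires (umask= is an option for filesystems like vfat, not ext4, which may be why it had no effect):

        # /etc/fstab (the device name varies by instance type)
        /dev/xvdb  /tmp  auto  defaults,nobootwait,comment=cloudconfig  0  2

        # /etc/rc.local, after mounts are done
        chmod 1777 /tmp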

    Read the article

  • How to find the cause of the main file system going into read-only mode

    - by user606521
    Ubuntu 12.04: the file system goes into read-only mode frequently. First of all, I have already read the question "file system is going into read only mode frequently", but I need to know whether this is caused by something other than a dying hard drive. The server is provided by my client, and I am just running some node.js workers plus one node.js server on it, using MongoDB. From time to time (every 20-50 h) the system suddenly remounts the filesystem read-only; the mongodb process fails (due to the read-only fs) and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg; I can see some errors and messages that the FS is going read-only, and there is also a JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    edit:

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:
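    A hedged follow-up on the SMART failure: "device lacks SMART capability" often just means smartctl guessed the wrong transport, which is common behind virtualization or a RAID controller, and the filesystem itself keeps error counters worth reading. Assuming an ext filesystem on sda1:

        sudo smartctl -a -d sat /dev/sda     # try an explicit device type
        sudo smartctl -a -d ata /dev/sda
        sudo tune2fs -l /dev/sda1 | grep -i 'state\|error'   # filesystem state and error count
        grep -i 'remount\|journal\|error' /var/log/kern.log  # events around each read-only flip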

    Read the article
