Search Results

Search found 4830 results on 194 pages for 'conf'.

Page 132/194 | < Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >

  • How to make your Apache application accessible within the network

    - by guest
    I have a Windows XP machine where I have installed WAMP and made a PHP-based web application. I can access the web application from within this machine by using the browser and pointing to http://localhost/myApp/, and the page loads fine. Now I want this site (http://localhost/myApp) to be accessible to all machines within the network (and maybe later, to the general public as well). I am quite new to this; how do I make my site accessible to all machines within the network and to the general public on the internet? I tried modifying the httpd.conf file in Apache (WAMP) by changing Listen 80 to Listen 10.10.10.10:80 (where I replaced 10.10.10.10 with the actual IP of this Windows XP machine). I also tried the "Put Online" feature in WAMP. Neither seems to work, though. How do I make it accessible?
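
    A minimal sketch of the httpd.conf changes that are usually involved, assuming WAMP's default layout and Apache 2.2-style access control (the path below is the WAMP default and may differ):

        Listen 80
        <Directory "c:/wamp/www/">
            Order Allow,Deny
            Allow from all
        </Directory>

    With Listen 80 bound to all interfaces, the Windows firewall allowing inbound TCP 80, and the directory open to more than 127.0.0.1, other machines on the network should reach the site at http://<machine-ip>/myApp/.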

    Read the article

  • Application-specific environment on the same server in Nginx/Passenger

    - by dexter
    I have two Rails applications (say app1 and app2) deployed using Nginx/Passenger. The server definition in nginx.conf looks like this:

        server {
            rails_env demo;
            client_max_body_size 50M;
            listen 80;
            server_name localhost;
            root /data/apps;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;
        }

    You can see that both are configured to use demo as the RAILS_ENV. How should I change my configuration to run the two apps in different environments? Let's assume app2 is supposed to run with RAILS_ENV=qa and app1 with RAILS_ENV=demo.
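
    One approach that is often suggested is to scope rails_env per application by giving each base URI its own location block; this is a sketch, and whether Passenger honors a per-location rails_env depends on the Passenger version (the fallback is a separate server block per app, each with its own rails_env):

        server {
            listen 80;
            server_name localhost;
            root /data/apps;
            client_max_body_size 50M;
            passenger_enabled on;

            location /app1 {
                passenger_enabled on;
                passenger_base_uri /app1;
                rails_env demo;
            }

            location /app2 {
                passenger_enabled on;
                passenger_base_uri /app2;
                rails_env qa;
            }
        }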

    Read the article

  • LVM incorrectly reported missing after power failure

    - by mensi
    We have had a major power failure in the data center. We are using a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm. The resulting /dev/mdX are LVM physical volumes and belong to a big volume group with all our data. After the power loss, we had the problem that one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes due to the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs, but one LV still does not come up. vgdisplay shows:

        [...]
        Cur PV: 3
        Act PV: 2
        [...]

    Neither vgscan nor pvscan shows any missing devices. What went wrong? How can we force LVM to activate all PVs?
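
    A few commands commonly used to bring a volume group back up once its missing PV has reappeared; the VG and LV names below are placeholders:

        sudo pvscan                           # rescan physical volumes
        sudo vgscan                           # rescan volume groups
        sudo lvs -a -o +devices               # see which LVs are inactive and which devices they map to
        sudo vgchange -ay <vgname>            # activate every LV in the volume group
        sudo lvchange -ay <vgname>/<lvname>   # or activate a single LV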

    Read the article

  • How to serve PHP files on an Apache server (localhost) running ColdFusion/MySQL?

    - by frequent
    I'm still learning my way around on my localhost server, which is running Apache 2.2, ColdFusion 8 and MySQL Server 5.5 (on Windows XP). I need to work on a site I inherited, which also ran some PHP scripts under the same setup. I have installed PHP 5 on my localhost, but when I open a dummy page with <?php phpinfo();?> I only get plain text returned, so I guess I haven't configured Apache correctly to also serve PHP (while defaulting to ColdFusion). Question: where do I need to get started if I want PHP to work on my current setup, too? Is there something I need to add to the httpd.conf file? If possible I don't want to uninstall/reinstall everything, because it took forever to get everything to work (excluding PHP). Thanks for any pointers!
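
    What usually has to be added to httpd.conf for Apache 2.2 on Windows is a LoadModule line for the PHP module plus a handler for .php; a sketch, assuming PHP 5 was installed to C:/php (adjust the path and DLL name to the actual install):

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"

    After restarting Apache, phpinfo() should render instead of coming back as plain text; the ColdFusion connector is unaffected because it only handles .cfm requests.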

    Read the article

  • How to make a backup VPN server?

    - by akalenuk
    I have a small VPN network with a bunch of clients working mostly with each other and a VPN server. Everything works fine, except that, obviously, I can't shut the VPN server down without breaking the network. I have a spare machine which worked as a VPN server for the same network before, so it is signed with the same SA as the first one and basically configured just the same. Technically I can make my clients work with it with a little adjustment (by setting remote in /etc/openvpn/clientx.conf), but it would be great to make this switch automated. So basically I want two VPN servers running in the same network to work completely interchangeably, without the clients even knowing it. Can I do this with VPN, or should I dig deeper into the physical network layer?
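
    For what it's worth, OpenVPN clients accept several remote lines and move on to the next one when a connection attempt fails, which gives basic failover without touching the network layer. A sketch of the relevant client.conf lines, assuming both servers use the same CA, port and protocol (the hostnames are placeholders):

        remote vpn1.example.com 1194
        remote vpn2.example.com 1194
        ;remote-random          # optionally pick a server at random instead of top-down
        resolv-retry infinite   # keep retrying rather than giving up
        keepalive 10 60         # detect a dead server and trigger reconnection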

    Read the article

  • How can I set clean URLs (enable rewrite) if I don't have a domain?

    - by Patrick
    In order to enable clean URLs in Drupal, I add the lines below to the lighttpd configuration file. However, I'm now working on a local server and I don't have a domain available, so I need to work with this address: http://local.ip/Sites/mywebsite. I've tried to replace ["host"] with ["socket"] and to replace the domain with the IP and subfolders (see the address above), but without success. How can I set up the configuration file for clean URLs even if I don't have a domain? Thanks.

        $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
            server.name = "mywebsite.com"
            accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
            include_shell "./drupal-lua-conf.sh mywebsite.com"
            url.access-deny += ( "~", ".inc", ".engine", ".install", ".info", ".module", ".sh", "sql", ".theme", ".tpl.php", ".xtmpl", "Entries", "Repository", "Root" )

            # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
            # Only serve .php files of the drupal base directory
            $HTTP["url"] =~ "^/.*/.*\.php$" {
                fastcgi.server = ()
                url.access-deny = ("")
            }
            magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
        }
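
    A note on the host condition: $HTTP["host"] matches whatever the browser sends in the Host header, so it can match a bare IP just as well as a domain. A minimal sketch, with 192.168.1.10 standing in for the machine's real address:

        $HTTP["host"] == "192.168.1.10" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog      = "/var/log/lighttpd/mywebsite/error.log"
            accesslog.filename   = "/var/log/lighttpd/mywebsite/access.log"
        }

    Since the site lives under /Sites/mywebsite rather than at the document root, the Drupal clean-URL rewrite rules and the $HTTP["url"] patterns would also need that prefix taken into account.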

    Read the article

  • Filename case issue over WebDAV

    - by user98365
    We are accessing a Samba shared directory from a Windows client with the WebDAV client WebDrive, but we are having the issue that it shows the same contents in both directories (data/ and Data/) even though they are entirely different. I know this is because the Windows filesystem is case-insensitive and Linux is case-sensitive. Is there any solution for this? We had the same issue when the directory was viewed through the Samba mount, but we solved it by editing smb.conf as described in the following link: Does Samba work well with Windows when case-sensitive names are enabled? Please help us solve this when the share is accessed over WebDAV.
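
    For reference, the smb.conf share options that control case handling are usually quoted as follows; this is a sketch of the Samba side only, since the WebDAV path does not go through Samba and would need an equivalent case-sensitivity setting in the WebDAV server or in WebDrive itself:

        [share]
            path = /srv/share
            case sensitive = yes
            preserve case = yes
            short preserve case = yes
            default case = lower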

    Read the article

  • Trying to make mod_rewrite work on Windows

    - by Psyche
    Hello guys, I'm having some trouble configuring Apache mod_rewrite on Windows. I'm using the latest version of XAMPP on Windows Vista. Here's my httpd.conf file:

        LoadModule rewrite_module modules/mod_rewrite.so
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory "D:/Server">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    My .htaccess file looks like this:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /wcc/
        RewriteRule ^red-wines/$ /red-wines.php [L]

    When I try to access http://localhost/wcc/red-wines/ I get a 404 Not Found error. Any idea why? Thanks.
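
    One detail worth checking, offered as a guess rather than a confirmed fix: with RewriteBase /wcc/ in effect, a substitution that starts with a slash is resolved from the document root, so /red-wines.php points at htdocs/red-wines.php instead of the file inside /wcc/. A relative target keeps the result under the rewrite base:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /wcc/
        RewriteRule ^red-wines/$ red-wines.php [L]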

    Read the article

  • nginx: handling 404 with error_page

    - by ytw
    Originally, I have something like this in the nginx.conf file.

        location ^~ /test_api {
            types { application/json json; }
            root /usr/local/www/data;
            rewrite "/test_api/(.*)" /api_response/test_api_$1.json break;
            error_page 404 /api_response/unknown_request.json;
        }

    When a requested resource is not found locally, unknown_request.json (default response) is returned correctly. Then I had to change the rewrite to point to a remote server as follows:

        rewrite "/test_api/(.*)" $scheme://www.somedomain.com/test_api_$1 break;

    It doesn't return unknown_request.json (default response) anymore even though the remote server returns a 404. Is there a way to continue to return unknown_request.json to the client when the remote server returns a 404, assuming the remote server can't be changed to return unknown_request.json? Thanks very much.
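
    A rewrite to an absolute URL makes nginx answer with a 302 redirect, so the eventual 404 happens between the client and the remote host and the local error_page never sees it. A sketch of an alternative that proxies the request instead of redirecting, assuming the remote server is reachable from nginx (the directives are standard, but the exact layout here is a guess):

        location ^~ /test_api {
            error_page 404 /api_response/unknown_request.json;
            proxy_intercept_errors on;     # let error_page handle upstream 404s
            rewrite "/test_api/(.*)" /test_api_$1 break;
            proxy_pass http://www.somedomain.com;
        }

        location /api_response/ {
            root /usr/local/www/data;
        }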

    Read the article

  • mod_security: How to allow ssh/http access for admin?

    - by mattesque
    I am going to be installing mod_security on my AWS EC2 Linux instance tonight and need a little help/reassurance. The only thing I am truly worried about right now is making sure my (admin) access to the instance and web server is maintained without compromising security. I use SSH (port 22) and HTTP (80) to access it, and I've read horror stories from other EC2 users claiming they were locked out of their sites once they put up a firewall. So my question boils down to: what settings should I put in the mod_security conf file to make sure I can get in on those ports? My IP at home is not static (hence the issue). Thanks so, so, so much.
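
    Some background that may be reassuring: ModSecurity is an HTTP-layer filter inside Apache, so it never touches SSH on port 22, and it does not block port 80 by itself; lock-outs usually come from iptables or EC2 security-group rules rather than from the WAF. If a known address range should bypass the rules entirely, a whitelist rule along these lines is a common pattern (sketched for ModSecurity 2.7+ syntax, with a placeholder network; a dynamic home IP makes an IP whitelist of limited use):

        # allow a trusted range to bypass the rule engine entirely
        SecRule REMOTE_ADDR "@ipMatch 203.0.113.0/24" \
            "id:1000001,phase:1,t:none,nolog,allow,ctl:ruleEngine=Off"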

    Read the article

  • .htaccess rules not working, but the file seems to be loaded

    - by user221877
    I am trying to remove the .php at the end of the URL for any page that's loaded:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php

    It's running on my own server, which has WHM/cPanel, so I can change settings at the server level; I'm just not really sure what I'm looking for. I found the httpd.conf file, but it said it was auto-generated by WHM, so I tried looking in WHM for the correct settings, but it had barely any settings related to .htaccess. If I fill .htaccess with gibberish it stops the site from loading, which I assume means the .htaccess file is being loaded, so I'm not sure what the issue is.
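
    A variant of the same rules that is often suggested for cPanel-style document roots, offered as a sketch rather than a confirmed fix; it skips requests that already match a real file or directory and stops processing after the rewrite:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]

    Since the gibberish test shows .htaccess is being parsed, AllowOverride is probably not the issue; the extra !-f condition mainly prevents the rule from touching requests for files that already exist.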

    Read the article

  • Apache mod_rewrite not working properly on Mac OS X 10.6 (Snow Leopard)

    - by DashRantic
    Hello all, I'm trying to create a PHP website with clean URLs using Apache's mod_rewrite and a .htaccess file. mod_rewrite seems to be working; however, it claims it cannot find files on my server that do exist. Just as a basic test, this is what my .htaccess file looks like at the moment; going to [mysite]/page should redirect to the index.php file:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^page$ index.php

    AFAIK, I have set up the .conf file appropriately as well:

        <Directory "/Users/myuser/Sites/">
            Options Indexes MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    However, when I try accessing the URL set up via mod_rewrite (localhost/~myuser/mysite/page), I get this:

        Not Found
        The requested URL /Users/myuser/Sites/mysite/index.php was not found on this server.

    However, that file does exist, and that is the proper location! The site works fine otherwise: if I go to localhost/~myuser/mysite/index.php, everything works fine, minus any sort of clean URLs, of course. Has anyone seen this before, or have any ideas as to what I'm doing wrong?
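
    A frequent cause of this particular "Not Found" message under a userdir setup is that the rewritten path gets resolved against the wrong base; adding an explicit RewriteBase for the site's URL prefix is the usual first thing to try. A sketch, with the prefix taken from the URLs in the question:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteBase /~myuser/mysite/
        RewriteRule ^page$ index.php [L]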

    Read the article

  • 2nd Apache server fails to start

    - by ito3
    Hi, I have determined that my 2nd server fails to start because of this entry in the conf. Once I remove this entry, the server starts up as normal.

        Alias /Reports/ "//abc/filedir/a/"
        <Directory "//abc/filedir/a/">
            Order allow,deny
            Allow from all
        </Directory>

    I have a primary Apache server which also points to the folder with the same setting. I would like to know why the 2nd server failed to start; is it because the first server has locked the folder? //abc is my NAS server running on Windows 2003. Thanks.
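
    A way to see the actual startup failure, sketched for a typical Windows Apache 2.x install (the install path and service name below are placeholders): run a config test and then start the second instance from a console so errors are printed instead of being swallowed by the service manager.

        REM check the config syntax of the second instance
        C:\Apache2\bin\httpd.exe -n "Apache2nd" -t
        REM run it in a console window so startup errors are shown
        C:\Apache2\bin\httpd.exe -n "Apache2nd" -e debug

    The second instance's error.log usually names the failing directive; a frequent culprit with UNC paths is that the account the second service runs under has no rights on the share, which is independent of the first server keeping the folder open.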

    Read the article

  • Why is this server redirecting to another page?

    - by Mike L.
    I am building a site for a client. For a reason unknown to me, www.domain.com forwards to www.domain.com/directory/home.html. If I type www.domain.com/index.php it works correctly. I have checked .htaccess and there was nothing there, so I set the index to index.php, which works fine in every directory other than the root directory. I have root access and have checked httpd.conf (did a search in vi for the document that I was being redirected to) and anything else I could think of. Where should I look next? The server is a VPS running CentOS 5.5 with multiple domains, and has cPanel WHM 11 for root access and cPanel X installed for each domain.
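
    A brute-force way to find where the redirect comes from, sketched for a cPanel/WHM box (the paths are the usual cPanel defaults and may differ): grep every place a Redirect, RewriteRule, meta refresh or header() call could hide, using the target page from the question as the needle.

        grep -ri "directory/home.html" /usr/local/apache/conf/ 2>/dev/null
        grep -ri "directory/home.html" /home/*/public_html/.htaccess 2>/dev/null
        grep -ril "home.html" /home/*/public_html/index.* 2>/dev/null

    cPanel's own "Redirects" feature writes into the account's .htaccess, and the DirectoryIndex setting in the vhost include files is another common place for this kind of behaviour to hide.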

    Read the article

  • Apache 2.2: how can I set up a VirtualHost inside the RootDirectory?

    - by redraw
    I want to set up a VirtualHost inside the root directory. For example, my project is in C:/myproject and I want to access it with http://localhost/myproject. EDIT: I've made an alias inside httpd-vhosts.conf; however, I don't have permissions.

        <VirtualHost *:80>
            DocumentRoot "C:/apache-2.2/htdocs"
            ServerName localhost
            Alias /test "D:\arbol\documentos\test"
        </VirtualHost>

    Is the code below the proper way to give permissions?

        <VirtualHost *:80>
            DocumentRoot "C:/apache-2.2/htdocs"
            ServerName localhost
            Alias /test "D:\arbol\documentos\test"
            <Directory "D:\arbol\documentos\test">
                allow from all
                order allow,deny
                AllowOverride All
            </Directory>
        </VirtualHost>

    Read the article

  • xf86OpenConsole: Cannot find a free VT: Invalid argument

    - by Oliver Seeliger
    I've set up an Ubuntu 12.04 container from the precreated OpenVZ template. The host system is configured as follows:

        # $ cat /etc/issue
        Debian GNU/Linux 6.0
        # $ uname -a
        Linux openvz-02 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64 GNU/Linux
        # $ apt-cache showpkg proxmox-ve-2.6.32
        Package: proxmox-ve-2.6.32
        # $ tail -n 3 /etc/apt/sources.list
        # PVE packages provided by proxmox.com
        deb http://download.proxmox.com/debian squeeze pve

    For a software project I need a minimal X server and followed the instructions at https://help.ubuntu.com/community/ServerGUI. I simply installed the package xorg (xorg 1:7.6+7ubuntu7.1). Now when I run startx I get an error message:

        Fatal server error:
        xf86OpenConsole: Cannot find a free VT: Invalid argument

    The complete output of startx:

        # startx
        X.Org X Server 1.11.3
        Release Date: 2011-12-16
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu
        Current Operating System: Linux www 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64
        Kernel command line: quiet
        Build Date: 29 August 2012 12:12:33AM
        xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.24.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 20 08:46:04 2012
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        Fatal server error:
        xf86OpenConsole: Cannot find a free VT: Invalid argument
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        ddxSigGiveUp: Closing log
        Server terminated with error (1). Closing log file.
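
    An OpenVZ container has no kernel virtual terminals, which is why xf86OpenConsole cannot find a free VT. When only a minimal X server is needed for an application, the usual workaround is a headless X server; a sketch using Xvfb on Ubuntu 12.04 (my-program is a placeholder for the actual application):

        sudo apt-get install xvfb
        # one-shot: run a program against a temporary virtual display
        xvfb-run ./my-program
        # or start the virtual display manually and point clients at it
        Xvfb :1 -screen 0 1024x768x24 &
        DISPLAY=:1 ./my-program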

    Read the article

  • (Solved) ERROR: Packet source 'wlan0' failed to set channel 2: mac80211_setchannel() in Kismet and Ubuntu 12.10

    - by M. Cunille
    I have installed Ubuntu 12.10 on my computer with an Atheros AR5007 wireless card. I want to use Kismet, but when I run it it starts displaying the message:

        ERROR: Packet source 'wlan0' failed to set channel X: mac80211_setchannel()

    It keeps displaying the same for every channel except channel 1. I have installed the compat-wireless-3.6.6-1 drivers and patched them with the following patch in order to use them with aircrack-ng. I have installed the latest version of Kismet from the git repository, and I even tried the svn version, but it keeps displaying the same error. I have also set the kismet.conf file with nsource=wlan0, as that is the name of my wireless interface according to iwconfig:

        lo     no wireless extensions.
        wlan0  IEEE 802.11bg  ESSID:"XXXX"
               Mode:Managed  Frequency:2.412 GHz  Access Point: XX:XX:XX:XX:XX:XX
               Bit Rate=18 Mb/s  Tx-Power=20 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off
               Link Quality=28/70  Signal level=-82 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:0  Invalid misc:282  Missed beacon:0

    I haven't found any answer, since similar errors are supposed to be fixed in the latest Kismet release, but that isn't the case for me. Any help will be appreciated. Thank you!
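
    Since the title says solved but no fix is quoted here, one commonly reported resolution for mac80211_setchannel() failures is to hand Kismet an interface that is already in monitor mode instead of letting it retune wlan0; a sketch using aircrack-ng's airmon-ng (the monitor interface name depends on the driver):

        sudo airmon-ng check kill       # stop the network manager processes holding the card
        sudo airmon-ng start wlan0      # creates a monitor-mode interface, typically mon0
        # then point Kismet at it in /etc/kismet/kismet.conf:
        # ncsource=mon0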

    Read the article

  • Samba smbtree: no output; failed to retrieve share list

    - by TomKat
    I'm using Ubuntu 12.10 on two machines, one laptop and one desktop, and I faced the same problem when I used 12.04. When I try connecting to the other machine using 'Connect to Server', I give the correct user and workgroup details, but the system displays 'failed to retrieve server list'. I've tried editing the /etc/samba/smb.conf file to set:

        name resolve order = lmhosts host wins bcast

    I also added the other machine to /etc/hosts, but nothing worked. The output of smbtree is:

        naveen@tomkat:~$ smbtree
        Enter naveen's password:
        naveen@tomkat:~$

    Sources I used while trying to solve the problem myself:
        "Failed to retrieve share list from server" error when browsing a share with Nautilus
        http://ubuntuforums.org/showthread.php?t=1114038

    All help will be appreciated. Thanks.

    UPDATE: After purging Samba and all its components (including all config files) and reinstalling, sharing works in one direction (from laptop to desktop), but when I attempt to use the desktop as the server, the same problem remains.

    UPDATE 2: dpkg -l | grep samba output:

        naveen@tomkat:~$ sudo dpkg -l|grep samba
        [sudo] password for naveen:
        ii  libcrypt-smbhash-perl  0.12-3            all   generate LM/NT hash of a password for samba
        ii  samba                  2:3.6.6-3ubuntu5  i386  SMB/CIFS file, print, and login server for Unix
        ii  samba-common           2:3.6.6-3ubuntu5  all   common files used by both the Samba server and client
        ii  samba-common-bin       2:3.6.6-3ubuntu5  i386  common files used by both the Samba server and client
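
    A couple of checks that often narrow this down, sketched for Samba 3.6 on the machine that refuses to act as the server (run them on the desktop):

        testparm -s                          # is smb.conf syntactically sane, and which shares are defined?
        smbclient -L localhost -U%           # can the local smbd enumerate its own share list?
        sudo status smbd; sudo status nmbd   # are both daemons actually running (upstart jobs on 12.10)?

    If smbclient already fails locally, the problem sits on that machine's server side rather than in name resolution between the two computers.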

    Read the article

  • Permissions problem running Apache ActiveMQ

    - by Edd
    I want to use Apache ActiveMQ on Ubuntu 12.04 LTS, but I am running into what looks like a permissions problem when I try to run it as follows:

        edd:~$ sudo activemq --version
        INFO: Loading '/usr/share/activemq/activemq-options'
        INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
        INFO: changing to user 'activemq' to invoke java
        Java Runtime: Sun Microsystems Inc. 1.6.0_24 /usr/lib/jvm/java-6-openjdk-amd64/jre
        Heap sizes: current=502464k free=499842k max=502464k
        JVM args: -Xms512M -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Dactivemq.classpath=/var/lib/activemq//conf;; -Dactivemq.home=/usr/share/activemq -Dactivemq.base=/var/lib/activemq/
        ACTIVEMQ_HOME: /usr/share/activemq
        ACTIVEMQ_BASE: /var/lib/activemq
        ActiveMQ 5.5.0
        For help or more information please see: http://activemq.apache.org

        edd:~$ sudo activemq start
        INFO: Loading '/usr/share/activemq/activemq-options'
        INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
        INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
        INFO: changing to user 'activemq' to invoke java
        -su: line 2: /var/run/activemq.pid: Permission denied
        INFO: pidfile created : '/var/run/activemq.pid' (pid '7811')

        edd:~$ sudo activemq status
        INFO: Loading '/usr/share/activemq/activemq-options'
        INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java'
        ActiveMQ not running

        edd:~$ ps ax | grep 'activemq'
        8040 pts/0 S+ 0:00 grep --color=auto activemq

    I installed ActiveMQ using sudo apt-get install activemq. Apologies if there's any additional information missing; I'm fairly new to Linux, as you may well have guessed!
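
    The tell-tale line is "-su: line 2: /var/run/activemq.pid: Permission denied": the wrapper switches to the activemq user before writing its pid file, and that user cannot create files directly in /var/run. A minimal sketch of a workaround using the path from the error message (note that /var/run is cleared at boot, so a lasting fix usually means pointing the pid file somewhere the activemq user owns, via /etc/default/activemq or the init script):

        sudo touch /var/run/activemq.pid
        sudo chown activemq /var/run/activemq.pid
        sudo service activemq restart    # or: sudo activemq restart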

    Read the article

  • PostgreSQL fails to start on Ubuntu 10.04.4 LTS

    - by cancerballs
    I installed PostgreSQL 9.2 from add-apt-repository ppa:pitti/postgresql using apt-get install postgresql-9.2. At the end of the install, and every time I try to launch PostgreSQL with

        /etc/init.d/postgresql start

    or

        service postgresql start

    I get this error:

        Error: could not exec /usr/lib/postgresql/9.2/bin/pg_ctl /usr/lib/postgresql/9.2/bin/pg_ctl start -D /var/lib/postgresql/9.2/main -l /var/log/postgresql/postgresql-9.2-main.log -s -o -c config_file="/etc/postgresql/9.2/main/postgresql.conf" : [fail]
        invoke-rc.d: initscript postgresql, action "start" failed.
        dpkg: error processing postgresql-9.2 (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        postgresql-9.2
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have tried everything found here: How to thoroughly purge and reinstall postgresql on ubuntu, and here: Eliminating non working postgresql installations on ubuntu 10-04 and starting afresh. I have also done

        dpkg -P --force-remove-reinstreq postgresql-client-9.2

    in my attempt to remove everything Postgres-related from my server. After removing PostgreSQL I have used

        dpkg --get-selections | grep postg

    to be sure there is nothing left so I can do a clean install. I have also made sure that the files and folders mentioned in the error message have the right permissions. The /var/log/postgresql/postgresql-9.2-main.log file is empty. I have tried installing every PostgreSQL version from 8.3 to 9.2 and I get the same error every time. I once managed to compile PostgreSQL from the source provided on their website, but then I encountered weird errors with psycopg2, so I figured I'd install PostgreSQL this way and avoid those errors. Also, when I type apt-get install postgresql it tries to install the 8.3 version by default, even though I can find the package by typing apt-get install postgresql-9.2.
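
    When the Debian/Ubuntu wrapper fails like this, the cluster tooling from postgresql-common usually says more than the init script does; a few commands worth running, assuming the 9.2 packages from that PPA are installed:

        pg_lsclusters                          # list clusters and their status
        sudo pg_createcluster 9.2 main         # recreate the default cluster if none was created
        sudo pg_ctlcluster 9.2 main start      # start it directly and surface the real error
        tail -n 50 /var/log/postgresql/postgresql-9.2-main.log

    A missing or broken cluster (often caused by locale problems during initdb) is a frequent reason the packaged init script fails while the binaries themselves are fine.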

    Read the article

  • sshfs with fstab: connection reset by peer

    - by user171348
    I am trying to allow my laptop (Ubuntu 13.04) to access my PC (Lubuntu 13.04) hard drive through SSHFS. I'm using RSA keys to connect. It works perfectly fine if I type this in the terminal:

        sshfs my-PC:/a_folder /media/a_folder

    But I would like it to be mounted automatically when I boot my laptop. So I added myself to the fuse group:

        sudo adduser mynickname fuse

    And I added the following line to my fstab file:

        sshfs#mynickname@my-PC:/a_folder /media/a_folder fuse defaults,idmap=user,_netdev 0 0

    When I boot the laptop, a_folder appears in the list of devices, but is not mounted. When I try to access it through Nautilus, it displays the following error:

        mount: only root can mount sshfs#mynickname@my-PC:/a_folder on /media/a_folder

    I get the same error if I try mount /media/a_folder in a terminal. If I try sudo mount /media/a_folder I get:

        read: Connection reset by peer

    I tried to add "allow_other" as an option in the fstab entry, and uncommented the related line in /etc/fuse.conf, but it didn't change anything. The user "mynickname" is the owner of the folder /media/a_folder and has rwx permissions. I looked at many threads on the internet about people with quite similar issues, but nothing worked so far. Usually, people can't even do sshfs my-PC:/a_folder /media/a_folder without getting an error, whereas this works fine on my laptop. Any insight and tips will be greatly appreciated! Thanks.
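
    Two things are commonly suggested for exactly these symptoms, offered as a sketch rather than a confirmed fix: the fstab entry needs the user option before a non-root account may mount it, and a boot-time or sudo mount runs as root, so root's own SSH key (not mynickname's) must be authorized on the PC or the connection gets dropped. An fstab line along these lines, reusing the paths from the question and pointing sshfs at an explicit identity file:

        sshfs#mynickname@my-PC:/a_folder /media/a_folder fuse user,idmap=user,allow_other,_netdev,IdentityFile=/home/mynickname/.ssh/id_rsa 0 0

    allow_other still requires user_allow_other to be uncommented in /etc/fuse.conf, which the question says has already been done.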

    Read the article

  • APT: Hold packages back from updates without APT Pin

    - by David
    I know about pinning packages with APT; that's not what I want to do. Other questions have been answered either with pinning or with using pins temporarily. I don't want to do this... What I want to do is keep packages back the same way the kernel has been:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
        The following packages will be upgraded:

    I want to add tomcat-*, mysql-* and sun-* to this list. In the past, there was a configuration parameter to do this - I've always thought it was something like Apt::Get::HoldPkgs or Apt::HoldPkgs, but I can't find it. I want to have these packages held back from updates until I specifically request them with an "apt-get install". I found the apt-get configuration option Apt::NeverAutoRemove; will this do what I want? Added question: I notice that Apt::NeverAutoRemove and Apt::Never-MarkAuto-Sections (among others) are not documented as far as I can see; they're not in the manpages. Neither are aptitude::Keep-Unused-Pattern and aptitude::Get-Root-Command. Is there any comprehensive and complete documentation for apt.conf?
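
    For reference, the "kept back" effect for specific packages is normally achieved with package holds rather than an apt.conf parameter; a sketch of the two standard ways to set one (the package names are only examples matching the patterns in the question):

        # classic dpkg-level hold, honored by apt-get upgrade
        echo "tomcat6 hold" | sudo dpkg --set-selections
        # apt-mark, in APT versions that have the hold subcommand
        sudo apt-mark hold mysql-server
        # list current holds
        dpkg --get-selections | grep hold

    Holds are set per package name, so wildcard families like tomcat-* need one hold per installed package.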

    Read the article

  • Step by Step Install of MAAS and JUJU

    - by John S
    I am working on understanding the pieces that I am missing in being able to deploy Juju across the other MAAS nodes. I don't know if I have a step out of place or am missing a few. The server owns the router, which handles the DHCP and DNS. Any assistance is greatly appreciated. At the end I will either get a 409 error or an "arbitrary pick tools 1.16.0" error. It is worth mentioning that local and aws work fine. Hopefully, with all of these steps spelled out, it will help someone else along the way too.

    Steps for setting up MAAS and Juju on 12.04 LTS (clean install, SSH only from the package selection during install):

        sudo apt-get install software-properties-common
        sudo apt-get install python-software-properties
        sudo add-apt-repository ppa:maas-maintainers/stable
        sudo add-apt-repository ppa:juju/stable
        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo reboot
        sudo apt-get install maas maas-dns maas-dhcp
        sudo ufw disable
        sudo reboot

    Edit /etc/dhcp/dhcpd.conf:

        authoritive
        subnet 10.0.0.0 netmask 255.255.255.0 {
            next-server 10.0.0.2;
            filename "pxelinux.0";
        }

    Then:

        sudo maas createsuperuser
        sudo maas-import-pxe-files

    Log in to MAAS at http://10.x.x.x/MAAS. Cluster controller configuration for eth0: manage DHCP and DNS, IP 10.0.0.2, subnet 255.255.255.0, broadcast 10.0.0.0, router IP 10.0.0.1, IP low 10.0.0.5, IP high 10.0.0.180. The commissioning default and distro are set to 12.04; the default domain is local.

        sudo maas-cli login maas http://10.x.x.x/MAAS/api/1.0 api-key
        ssh-keygen -t rsa -b 2048    (enter, no password; cat id_rsa.pub and enter the key into MAAS ssh)
        sudo maas-cli maas nodes accept-all

    (Interestingly enough, I only get back [] when executing the last command.) PXE one machine, accept and commission, start and deploy.

        sudo apt-get install juju-core juju-local

    MAAS config:

        maas:
            type: maas
            maas-server: '://10.x.x.x:80/MAAS'
            maas-oauth: 'MAAS_API_KEY'
            admin-secret: 'nothing'
            default-series: 'precise'

        juju switch maas
        sudo juju bootstrap --show-log

    Read the article

  • Apache2 on Raspbian: Multiviews is enabled but not working

    - by Christian L
    I recently moved web servers, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it seems to be enabled out of the box there as well. What I am trying to do is get it to find my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same available-site file as I did before; the file looks like this:

        <VirtualHost *:80>
            ServerName my.doma.in
            ServerAdmin [email protected]
            DocumentRoot /home/christian/www/do
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/christian/www/do>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    From what I have read in various places while googling this issue, I found that the negotiation module had to be enabled, so I tried to enable it:

        sudo a2enmod negotiation

    giving me this result:

        Module negotiation already enabled

    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed helpful there, but please do ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?
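
    If MultiViews keeps refusing to pick up the .php variant, a mod_rewrite fallback in the site's document root gives the same effect and sidesteps content negotiation entirely; a sketch for a .htaccess file, assuming mod_rewrite is enabled (sudo a2enmod rewrite) and relying on the AllowOverride All already present in the vhost above:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]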

    Read the article

  • error 503: service unavailable when using apt-get update behind proxy

    - by ubuntu2man
    Hi, I am using a transparent proxy (another box). When I try to do an 'apt-get update' I get these warnings (translated from German):

        ...
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/restricted/source/Sources.gz  503 Service Unavailable
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/universe/source/Sources.gz  503 Service Unavailable
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/multiverse/source/Sources.gz  503 Service Unavailable
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/main/binary-i386/Packages.gz  503 Service Unavailable
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/restricted/binary-i386/Packages.gz  503 Service Unavailable
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/universe/binary-i386/Packages.gz  503 Service Unavailable
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I changed ~/.bashrc:

        http_proxy=http://192.168.120.199:8080
        https_proxy=https://192.168.120:8080
        export http_proxy
        export https_proxy

    I wrote on the command line:

        export http_proxy=http://proxyusername:proxypassword@proxyaddress:proxyport
        sudo apt-get update

    And I edited /etc/apt/apt.conf:

        Acquire::http::proxy "http://192.168.120.199:8080/";
        Acquire::ftp::proxy "http://192.168.120.199:8080/";

    Nothing has worked. Does anyone know how to make apt-get work through a transparent proxy? Regards, ubuntu2man
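
    Two details often bite here, noted as hints rather than a confirmed diagnosis: a genuinely transparent proxy needs no client-side configuration at all (a 503 then points at the proxy box itself), and sudo does not inherit the shell's http_proxy. For the explicit-proxy route, apt reads its own setting; a sketch reusing the address from the question:

        // /etc/apt/apt.conf.d/95proxy  (any file under apt.conf.d works, as does /etc/apt/apt.conf)
        Acquire::http::Proxy  "http://192.168.120.199:8080/";
        Acquire::https::Proxy "http://192.168.120.199:8080/";

    For a one-off run the proxy can also be passed through sudo explicitly, e.g. sudo env http_proxy=http://192.168.120.199:8080 apt-get update.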

    Read the article
