Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using Django, nginx and Apache. When I access my site with a URL (e.g., http://www.foo.com/), what appears in my browser's address bar is the IP address with admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by Django's urls.py (e.g., http://123.45.67.890/ - http://123.45.67.890/accounts/login/?next=/). I would like the named URL to act the same way as the IP. That is, if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files:

        ; cpa.com (apache)
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm
            DocumentRoot /path/to/root
            ServerName www.foo.com
            <IfModule mod_rpaf.c>
                RPAFenable On
                RPAFsethostname On
                RPAFproxy_ips 127.0.0.1
            </IfModule>
            <Directory /public/static>
                AllowOverride None
                AddHandler mod_python .py
                PythonHandler mod_python.publisher
            </Directory>
            Alias / /dj
            <Location />
                SetHandler python-program
                PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path"
                PythonHandler django.core.handlers.modpython
                SetEnv DJANGO_SETTINGS_MODULE dj.settings
                PythonDebug On
            </Location>
        </VirtualHost>

        ; ports.conf (apache)
        Listen 127.0.0.1:8080

        ; cpa.conf (nginx)
        server {
            listen 80;
            server_name www.foo.com;
            location /static {
                root /var/public;
                index index.html;
            }
            location /cpa/js {
                root /var/public/js;
            }
            location /cpa/css {
                root /var/public/css;
            }
            location /djmedia {
                alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/";
            }
            location / {
                include /etc/nginx/proxy.conf;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:8080;
            }
        }

        ; proxy.conf (nginx)
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 500;
        proxy_buffers 32 4k;
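
    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: proxy.conf sets proxy_redirect off, so nginx passes the backend's Location headers to the browser untouched, and if the Apache/Django side builds its redirects from the IP address, that is exactly what the browser will show. A minimal sketch of a rewrite that would map such redirects back onto the public name (the IP below is just the placeholder from the question):

        # inside the "location /" block that proxies to Apache
        proxy_redirect http://123.45.67.890/ http://www.foo.com/;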


  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I am currently using Munin to capture this and other data for the entire server, however I can't seem to find a way to do this by vhost. This link describes using a module called mod_watch which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ The file that is listed as being compatible with Apache 2.x is reported to have problems with missing vhosts and reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this it should be easy enough to write a new Munin plugin. Edit: What I'd really like to see is something similar to the Apache server-status scoreboard page with the number of connections / requests separated by virtual host. This would give me the ability to check which vhost may be experiencing a spike in traffic in real time, and would also provide the data needed for a Munin module (or some alternative performance monitoring / analysis system).
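
    Not a finished answer, but as a starting point: if Apache is already writing a combined per-vhost log (the stock vhost_combined format puts %v:%p in the first field), a one-liner can produce the kind of per-vhost request counts a Munin plugin would emit. The log path and field layout are assumptions about the setup:

        # Rough sketch, not a finished Munin plugin; field names would still need
        # sanitising before Munin would accept them.
        awk -F'[: ]' '{count[$1]++} END {for (v in count) printf "%s.value %d\n", v, count[v]}' \
            /var/log/apache2/other_vhosts_access.log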


  • KVM online disk resize?

    - by Eil
    We're evaluating KVM for Linux virtualization on a few projects. All is going well so far. But one of our requirements is the ability to add disk space to a running guest without rebooting or taking it offline. Is this possible with KVM? The only thing I've found so far (but have not tested yet) is the ability to hotplug disks into the machine. If I go this route, then I could always add the new disk to an LVM volume group on the guest and then extend the chosen logical volume. The biggest downside to this approach is that over time we might end up with guests having variable numbers of virtual disks. The "real" disk space would be provided to the host over a SAN, so we can always add more space to the host whenever needed.
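
    For what it's worth, depending on how new the libvirt/QEMU stack is, growing a running guest's block device in place may already be possible without hotplugging extra disks. The commands below are a hedged sketch only: the domain name, target device, sizes and volume-group names are made up, and the guest-side steps assume the whole virtual disk is used directly as an LVM physical volume:

        # On the host: grow the virtual disk of a running guest (needs a reasonably recent libvirt/QEMU)
        virsh blockresize myguest vda 60G

        # Inside the guest, once the kernel sees the new size: grow LVM and the filesystem
        pvresize /dev/vda
        lvextend -l +100%FREE /dev/vg0/root
        resize2fs /dev/vg0/root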


  • Wrong DNS query in Active directory network with NetBIOS enabled client

    - by koankoder
    The setup:

    - Active Directory is enabled on the network (abcd.com)
    - We have a single-character host name (1.abcd.com)
    - One of the desktops has an old XP install with NetBIOS enabled

    The problem: Whenever we query for any host name from the XP machine, the first character alone is taken for the DNS query (one.abcd.com will query for o.abcd.com, two.abcd.com will query for t.abcd.com). Even if we give some IP, the application queries with the numeric prefix (10.x.x.x will query for 1.abcd.com). Since we already have 1.abcd.com, all queries and traffic end up at 1.abcd.com. After discussing it with the network guys, it seems NetBIOS alters DNS queries by adding some prefix or similar, but none of them is actually sure what is happening. Are there any docs which can explain this behavior? Is this valid behavior in a NetBIOS environment?


  • Enable basic auth sitewide and disabling it for subpages?

    - by piquadrat
    I have a relatively straightforward config:

        upstream appserver-1 {
            server unix:/var/www/example.com/app/tmp/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://appserver-1;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                auth_basic "Restricted";
                auth_basic_user_file /path/to/htpasswd;
            }

            location /api/ {
                auth_basic off;
            }
        }

    The goal is to use basic auth on the whole website, except on the /api/ subtree. While it does work with respect to basic auth, other directives like proxy_pass are no longer in effect for /api/. Is it possible to just disable basic auth while retaining the other directives, without copy & pasting everything?
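
    One approach, sketched under the assumption that a shared snippet file is acceptable (the file name below is made up for illustration): nginx locations do not inherit proxy_pass from a sibling location, so /api/ needs its own copy of the proxy directives, but they can live in a single include rather than being pasted twice:

        # /etc/nginx/proxy_app.conf -- hypothetical snippet holding the shared directives
        proxy_pass http://appserver-1;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # in the server block
        location / {
            include /etc/nginx/proxy_app.conf;
            auth_basic "Restricted";
            auth_basic_user_file /path/to/htpasswd;
        }
        location /api/ {
            include /etc/nginx/proxy_app.conf;
            auth_basic off;
        }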


  • Audio doesn't work on Windows XP guest (WS 7.0)

    - by Mads
    I can't get audio to work on a Windows XP guest running on VMware Workstation 7.0 with an Ubuntu 9.10 host. Windows fails to produce any audio output and the Windows Device Manager says the Multimedia Audio Controller is not working properly. Audio is working fine in the host OS. When I open the Multimedia Audio Controller properties it says:

        Device status: The drivers for this device are not installed (Code 28)

    If I try to reinstall the driver I get the following error message:

        Cannot Install this Hardware
        There was a problem installing this hardware: Multimedia Audio Controller
        An error occurred during the installation of the device
        Driver is not intended for this platform

    Has anyone else experienced this problem?
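
    Not a confirmed fix, but one thing that may be worth checking is which virtual sound device the VM presents to the guest; XP ships an in-box driver for the emulated Ensoniq/Creative ES1371 device, so forcing that device in the .vmx file is a common workaround. A hedged sketch only (whether this applies to this particular VM is an assumption):

        sound.present = "TRUE"
        sound.virtualDev = "es1371"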


  • ForwardAgent in Jenkins

    - by r_2
    I'm trying to enable ForwardAgent in the "Publish over SSH" Jenkins plugin. This would allow Jenkins to execute deployments, rsyncs and svn+ssh checkouts on remote servers. But there's no option for this in the GUI. ForwardAgent is set to yes in /etc/ssh/ssh_config and in /var/lib/jenkins/.ssh/config, but when Jenkins jobs log in over ssh, the remote session does not have the key loaded in the agent ("Could not open a connection to your authentication agent."). Is there a way to force ForwardAgent, or a better way to do this (via a Jenkins slave)? Thanks for any ideas, much appreciated!
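
    One workaround to consider, sketched rather than tested: bypass the plugin for the steps that actually need agent forwarding and call ssh directly from an "Execute shell" build step, forcing forwarding with -A. Host names and paths below are placeholders, and this assumes the jenkins user has a running ssh-agent with the key loaded:

        # run from an "Execute shell" build step
        ssh -A deploy@remote.example.com 'svn checkout svn+ssh://repo.example.com/trunk /srv/app'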


  • Make mysqldump output USE statements or full table names when dumping a single table with where clause

    - by tobyodavies
    Is it possible to get mysqldump to output USE statements for a single (partial) table dump? I've already got some scripts that I'd like to reuse which run mysqldump with some arguments and apply them to a remote server. However, since I haven't bothered to parse all the arguments to mysqldump, and there is no USE in the dump, the remote server is saying no database selected. I'm a programmer more than anything else, so I can easily use sed to modify the dump before applying it in the worst case, but those scripts won't allow me to do this as I don't have access to the dump between creation and application. EDIT: the ability to output fully qualified table names may also solve my problem
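
    A hedged possibility that avoids editing the dump on disk: since the scripts pipe the output somewhere anyway, the USE statement can be prepended in the pipeline itself. Database, table, host and user names below are placeholders:

        { echo 'USE mydb;'; mysqldump mydb mytable --where="id > 1000"; } | mysql -h remote.example.com -u someuser -p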


  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server that is running Windows 2008 64 bit Hyper-V, with 8 gigs of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three virtual machines, all running Windows 2008 32 bit:

    - Build server, running TeamCity
    - Staging server
    - SQL Server, running SQL Server 2005

    I have some trouble with the setup in that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.) So the question here is: What is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am being told that I cannot reliably use the task manager to monitor CPU usage in a VM.
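
    For what it's worth, the usual advice is to read the Hyper-V hypervisor counters from the parent partition rather than Task Manager; a hedged sketch using the built-in typeperf tool (the counter name is from memory and may differ slightly between Hyper-V versions):

        rem sample the total load on the physical logical processors every 5 seconds
        typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -si 5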


  • 3 monitors windows xp

    - by c0mrade
    I have a laptop mounted to a dock. When I try to use 2 monitors using DualView everything works fine and I get my image across the 2 monitors, but when I connect the third one I can't use it, although it's recognized. Here's what I'd like to do: I'd like to get DualView from my laptop across two monitors, and I have a remote machine connecting to it via Remote Desktop which I'd like to have displayed on the third monitor. Is that possible, or some other combination?


  • Apache2 mod_proxy and post-multipart size

    - by Pietro
    Hi, I have Apache2 configured to proxy all traffic directed to a specific virtual host to a local Tomcat instance. All is good and fine but for multipart posts larger than ~100kb. Such posts fail on the Tomcat end with an exception like SocketTimeoutException. If I connect directly to Tomcat (which listens on a port != 80) then all posts are handled just fine. The Apache virtual host config goes like this:

        NameVirtualHost *
        SetOutputFilter DEFLATE
        <VirtualHost *>
            ServerName foo.bar.com
            ErrorLog c:/wamp/logs/foo_error.log
            CustomLog c:/wamp/logs/foo_access.log combined
            ProxyTimeout 60
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
            ProxyPassReverseCookieDomain localhost bar.com
            ProxyPassReverseCookiePath /foo /
        </VirtualHost>

    I tried browsing the Apache2 and mod_proxy docs but found nothing useful. Any idea why Apache2 refuses to proxy requests bigger than X bytes? Thanks!


  • Directory listing through FTPS (TLS) is not working

    - by Aron Rotteveel
    We recently switched our server to require TLS for every connection. This is working flawlessly so far, but one of our clients is having problems. Some facts:

    - Server uses Pure-FTPd
    - Server has a passive port range configured
    - Server has no firewall limitations regarding the FTP
    - Client uses WS FTP
    - Client is behind a router
    - Client connects to the same IP as every other, using PASSIVE mode
    - All other clients have no trouble connecting

    Because of the TLS requirement, connecting using ACTIVE mode is almost not possible, but PASSIVE is working fine for everyone except this specific client. It seems that he is able to connect, but once a LIST command is performed, things go wrong. Log:

        Finding Host <clienthost> ...
        Connecting to <serverip:21>
        Connected to <serverip:21> in 0.020000 seconds, Waiting for Server Response
        Initializing SSL Session ...
        220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 5 of 50 allowed.
        220-Local time is now 22:14. Server port: 21.
        220-This is a private system - No anonymous login
        220-IPv6 connections are also welcome on this server.
        220 You will be disconnected after 15 minutes of inactivity.
        AUTH TLS
        234 AUTH TLS OK.
        SSL session NOT set for reuse
        SSL Session Started.
        Host type (1): Automatic Detect
        USER <user>
        331 User <user> OK. Password required
        PASS (hidden)
        230-User <user> has group access to: <user>
        230 OK. Current restricted directory is /
        SYST
        215 UNIX Type: L8
        Host type (2): Unix (Standard)
        PBSZ 0
        200 PBSZ=0
        PROT P
        200 Data protection level set to "private"
        PWD
        257 "/" is your current location
        CWD /public_html
        250 OK. Current directory is /public_html
        PWD
        257 "/public_html" is your current location
        TYPE A
        200 TYPE is now ASCII
        PASV
        227 Entering Passive Mode (<serverip>,132,100)
        connecting data channel to <serverip>:132,100(33892)
        Substituting connection address <serverip> for private address <serverip> from PASV
        Using external address <customer ext. ip> instead of local address <customer int. ip> for PORT command
        PORT 82,161,56,225,195,181
        200 PORT command successful
        LIST
        Error reading response from server. It appears that the connection is dead.
        Attempting reconnect...

    Any help is appreciated.


  • NTFS file size, how do you guys refresh it to view its current correct size

    - by Michael Goldshteyn
    I work in a command prompt quite often and around many large (remote) log files. Unfortunately, the sizes of these files do not update as the logs grow unless, it would appear, the files are touched. I usually use hacks like the following from Cygwin to "touch" a file so that its file size updates:

        stat file.txt

    or

        head -c0 file.txt

    Are there any native Windows constructs that can refresh the file size from the command prompt, as unintrusively as possible and preferably without transferring any (remote) data, since I often need to refresh the sizes of very large files remotely to see how large they have grown?


  • opennebula VM submission failure

    - by user61175
    I am new to OpenNebula; the cloud is up and running but a VM fails to be submitted to a node. I got the following error from the log file:

        ERROR: Command "scp ubuntu:/opt/nebula/images/ttylinux.img node01:/var/lib/one/8/images/disk.0" failed.
        ERROR: Host key verification failed.
        Error excuting image transfer script: Host key verification failed.

    The key verification keeps failing. I need to know what is going wrong ... thanks :)
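
    A hedged pointer rather than a verified fix: the scp in that error runs non-interactively as the OpenNebula admin account, so the node's SSH host key has to already be in that account's known_hosts. A minimal sketch, run as whichever user owns the transfer scripts (user name and home directory depend on the install; node01 is taken from the error above):

        # accept node01's host key once, then confirm a non-interactive login works
        ssh-keyscan node01 >> ~/.ssh/known_hosts
        ssh node01 true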


  • VPN only connects to its server!

    - by Eddie
    Hi guys. Previously I bought a Windows 2003 VPS and enabled Routing and Remote Access so that users can make a VPN connection. I turned the firewall off and everything was working fine. But since 2 days ago, whenever I try to connect to the VPN it connects without any problem and I can see the connection status; however, it only connects to the server. I mean, all I can do with this VPN is connect to the server via Remote Desktop, and I can ping only the server's IP; I can neither open any webpages in browsers nor ping other IP addresses besides the server's. I've also rebuilt the server and configured it for routing access and VPN connections from the beginning, but it doesn't work either. It seems that the server fails to route the traffic properly. As I'm sure that the firewall has been turned off, I can't figure out the reason. Any idea what's going on? Thanks in advance


  • Setting up Dynamic DNS for Wireless Cameras And Accessing them Remotely

    - by Mike Szp.
    I've been trying to set up two TP-LINK wireless N cameras that I bought so that I can see them remotely. I've set it up so that each has its own IP address (192...105/192...106) and I can access them if I type that into the browser of a local computer. The thing is that I don't know how to access them from another, remote PC. My current setup is each camera connected to the router, which then connects to the modem. When I set up the Dynamic DNS and I access the "webpage" for my IP through a remote computer, it just goes to the configuration page of the modem. I have no idea how to make it go to the router or to the cameras. The router has its own IP range of 192.168.1.x while the modem has 192.168.2.x. To access the cameras I type into the web browser: 192.168.1.114:100 on the local computer, but I have no idea how to get there through the webpage of my Dynamic DNS remotely.


  • Mysql stopped working

    - by tonymarschall
    MySQL is up and running on my system but I cannot log in with any user. I also cannot start/stop/status the server. All I get is:

        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)

    From the logs:

        Mar 24 08:30:13 debian /etc/mysql/debian-start[1074]: Upgrading MySQL tables if necessary.
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: Looking for 'mysql' as: /usr/bin/mysql
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: /usr/bin/mysqlcheck: Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) when trying to connect
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1078]: FATAL ERROR: Upgrade failed
        Mar 24 08:30:13 debian /etc/mysql/debian-start[1111]: Checking for insecure root accounts.
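
    A hedged first step (Debian/Ubuntu specifics assumed): the password that the init scripts use for debian-sys-maint is stored in /etc/mysql/debian.cnf, so checking whether those stored credentials still work narrows the problem down to either a changed password or damaged grant tables:

        sudo cat /etc/mysql/debian.cnf
        mysql --defaults-file=/etc/mysql/debian.cnf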


  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing. However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server started refusing connections from localhost. The same happened with a page (PhpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by

        $ mysql --protocol=tcp -u root -p

    looks like

        ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server

    However, the funny thing is, I configured the mysql server to allow root access from localhost (only). Moreover, the mysql server listens only on 127.0.0.1:3306, as shown by:

        sudo netstat -npa | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1029/mysqld

    which means that the connection could have only come from 127.0.0.1 (Note that MySQL is working because I can still connect to it via unix domain sockets). In effect, it seems that all tcp connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, apache2 allowed me to access PhpMyAdmin after I added allow <eth0 IP address>. The following are my network configurations (redacted):

        /etc/hosts:
        127.0.0.1 localhost
        211.x.x.x <host name of eth0> <server name>
        #IPv6 Defaults follows ....

        /etc/network/interfaces:
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
            address 211.x.x.x
            netmask 255.255.255.0
            gateway 211.x.x.x
            dns-nameservers 8.8.8.8
            # dns-* options are implemented by the resolvconf package, if installed
            dns-search xxxxxxx.com
            hwaddress ether xx:xx:xx:xx:xx:xx
        auto eth0:0
        iface eth0:0 inet static
            address 192.168.57.254
            netmask 255.255.254.0
            broadcast 192.168.57.255
            network 192.168.57.0

        /etc/ufw/sysctl.conf:
        #Uncommented the following lines
        net/ipv4/ip_forward=1
        net/ipv6/conf/default/forwarding=1

        /etc/default/ufw:
        DEFAULT_FORWARD_POLICY="ACCEPT"   #Changed DROP to ACCEPT

        /etc/init/internet-sharing.conf (upstart script I wrote), section pre-start script:
        iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

    Note again that my problem here is that programs cannot access localhost tcp services, from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem connecting (as in TCP connections) to services via tcp, even if the services listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.
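
    A hedged guess at the cause, based only on the rules shown above: the MASQUERADE rule has no outgoing-interface match, so it rewrites the source address of loopback connections too, which would make them appear to local daemons to come from the eth0 address. Restricting the rule to traffic actually leaving through eth0 (and dropping the unrestricted one) should leave 127.0.0.1 traffic alone; treat this as a sketch to test, not a verified fix:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE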


  • Redmine install not working and displaying directory contents - Ubuntu 10.04

    - by Casey Flynn
    I've gone through the steps to set up and install the Redmine project tracking web app on my VPS with Apache2, but I'm running into a situation where instead of displaying the Redmine app, I just see the directory contents. Does anyone know what could be the problem? I'm not sure what other files might be of use to diagnose what's going on. Thanks! My apache2.conf:

        #
        # Based upon the NCSA server configuration files originally by Rob McCool.
        #
        # This is the main Apache server configuration file. It contains the
        # configuration directives that give the server its instructions.
        # See http://httpd.apache.org/docs/2.2/ for detailed information about
        # the directives.
        #
        # Do NOT simply read the instructions in here without understanding
        # what they do. They're here only as hints or reminders. If you are unsure
        # consult the online docs. You have been warned.
        #
        # The configuration directives are grouped into three basic sections:
        #  1. Directives that control the operation of the Apache server process as a
        #     whole (the 'global environment').
        #  2. Directives that define the parameters of the 'main' or 'default' server,
        #     which responds to requests that aren't handled by a virtual host.
        #     These directives also provide default values for the settings
        #     of all virtual hosts.
        #  3. Settings for virtual hosts, which allow Web requests to be sent to
        #     different IP addresses or hostnames and have them handled by the
        #     same Apache server process.
        #
        # Configuration and logfile names: If the filenames you specify for many
        # of the server's control files begin with "/" (or "drive:/" for Win32), the
        # server will use that explicit path. If the filenames do *not* begin
        # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
        # with ServerRoot set to "" will be interpreted by the
        # server as "//var/log/apache2/foo.log".
        #
        ### Section 1: Global Environment
        #
        # The directives in this section affect the overall operation of Apache,
        # such as the number of concurrent requests it can handle or where it
        # can find its configuration files.
        #
        #
        # ServerRoot: The top of the directory tree under which the server's
        # configuration, error, and log files are kept.
        #
        # NOTE! If you intend to place this on an NFS (or otherwise network)
        # mounted filesystem then please read the LockFile documentation (available
        # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
        # you will save yourself a lot of trouble.
        #
        # Do NOT add a slash at the end of the directory path.
        #
        ServerRoot "/etc/apache2"

        #
        # The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
        #
        #<IfModule !mpm_winnt.c>
        #<IfModule !mpm_netware.c>
        LockFile /var/lock/apache2/accept.lock
        #</IfModule>
        #</IfModule>

        #
        # PidFile: The file in which the server should record its process
        # identification number when it starts.
        # This needs to be set in /etc/apache2/envvars
        #
        PidFile ${APACHE_PID_FILE}

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 300

        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

        #
        # MaxKeepAliveRequests: The maximum number of requests to allow
        # during a persistent connection. Set to 0 to allow an unlimited amount.
        # We recommend you leave this number high, for maximum performance.
        #
        MaxKeepAliveRequests 100

        #
        # KeepAliveTimeout: Number of seconds to wait for the next request from the
        # same client on the same connection.
        #
        KeepAliveTimeout 15

        ##
        ## Server-Pool Size Regulation (MPM specific)
        ##

        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers 2
            MaxClients 150
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxRequestsPerChild 0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        #
        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride
        # directive.
        #
        AccessFileName .htaccess

        #
        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        #
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        #
        # DefaultType is the default MIME type the server will use for a document
        # if it cannot otherwise determine one, such as from filename extensions.
        # If your server contains mostly text or HTML documents, "text/plain" is
        # a good value. If most of your content is binary, such as applications
        # or images, you may want to use "application/octet-stream" instead to
        # keep browsers from trying to display binary files as though they are
        # text.
        #
        DefaultType text/plain

        #
        # HostnameLookups: Log the names of clients or just their IP addresses
        # e.g., www.apache.org (on) or 204.62.129.132 (off).
        # The default is off because it'd be overall better for the net if people
        # had to knowingly turn this feature on, since enabling it means that
        # each client request will result in AT LEAST one lookup request to the
        # nameserver.
        #
        HostnameLookups Off

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog /var/log/apache2/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel warn

        # Include module configuration:
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf

        # Include all the user configurations:
        Include /etc/apache2/httpd.conf

        # Include ports listing
        Include /etc/apache2/ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        #
        # Define an access log for VirtualHosts that don't define their own logfile
        CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined

        # Include of directories ignores editors' and dpkg's backup files,
        # see README.Debian for details.

        # Include generic snippets of statements
        Include /etc/apache2/conf.d/

        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

        # Enable fastcgi for .fcgi files
        # (If you're using a distro package for mod_fcgi, something like
        # this is probably already present)
        #<IfModule mod_fcgid.c>
        #    AddHandler fastcgi-script .fcgi
        #    FastCgiIpcDir /var/lib/apache2/fastcgi
        #</IfModule>
        LoadModule fcgid_module /usr/lib/apache2/modules/mod_fcgid.so
        LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so
        PassengerRoot /var/lib/gems/1.8/gems/passenger-3.0.7
        PassengerRuby /usr/bin/ruby1.8
        ServerName demo

    and my vhosts file:

        #No DNS server, default ip address v-host
        #domain: none
        #public: /home/casey/public_html/app/
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
        #   ScriptAlias /redmine /home/casey/public_html/app/redmine/dispatch.fcgi
            DirectoryIndex index.html
            DocumentRoot /home/casey/public_html/app/public

            <Directory "/home/casey/trac/htdocs">
                Order allow,deny
                Allow from all
            </Directory>

            <Directory /var/www/redmine>
                RailsBaseURI /redmine
                PassengerResolveSymlinksInDocumentRoot on
            </Directory>

        #   <Directory />
        #       Options FollowSymLinks
        #       AllowOverride None
        #   </Directory>
        #   <Directory /var/www/>
        #       Options Indexes FollowSymLinks MultiViews
        #       AllowOverride None
        #       Order allow,deny
        #       allow from all
        #   </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /home/casey/public_html/app/log/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug

            CustomLog /home/casey/public_html/app/log/access.log combined

        #   Alias /doc/ "/usr/share/doc/"
        #   <Directory "/usr/share/doc/">
        #       Options Indexes MultiViews FollowSymLinks
        #       AllowOverride None
        #       Order deny,allow
        #       Deny from all
        #       Allow from 127.0.0.0/255.0.0.0 ::1/128
        #   </Directory>
        </VirtualHost>
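
    For reference, and only as a hedged sketch of the standard Passenger sub-URI recipe (it assumes Redmine is checked out under /var/www/redmine, which the vhost's <Directory> block suggests): Passenger serves an app under a sub-URI via a symlink inside the DocumentRoot, with RailsBaseURI pointing at that symlink, e.g.

        ln -s /var/www/redmine/public /home/casey/public_html/app/public/redmine

    together with RailsBaseURI /redmine at the virtual-host level. Without such a symlink (or with a DocumentRoot Passenger does not recognise as a Rails public directory), Apache tends to fall back to a plain directory listing, which matches the symptom described above.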


  • Apache 2 with Weblogic Plug-in Redirection, original location still requested to backend

    - by Edo
    We're trying to set up an SSL server in front of a WebLogic server using Apache as the SSL provider. Here's what's inside our httpd.conf:

        <Location /original>
            SetHandler weblogic-handler
            WebLogicHost 10.11.1.1
            WebLogicPort 8700
            PathTrim /original
            PathPrepend /destination
            ConnectTimeoutSecs 60
        </Location>

        <Location /destination>
            SetHandler weblogic-handler
            WebLogicHost 10.11.1.1
            WebLogicPort 8700
            ConnectTimeoutSecs 60
        </Location>

    This setup mostly works, but the ssl_error_log file contains entries like these:

        [Wed Aug 11 14:59:00 2010] [error] [client xxx.xxx.xxx.xxx] ap_proxy: trying GET /original at backend host '10.11.1.1/8700; got exception 'CONNECTION_REFUSED [os error=0, line 1739 of ../nsapi/URL.cpp]: Error connecting to host 10.11.1.1:8700'

    The weird thing is, the redirection still works, but these annoying entries keep showing up. Can anyone point out where we went wrong? Thanks.


  • loadbalancing with difference nginx location context and backend server context

    - by robinmag
    Hi, I used nginx and the upstream module for load balancing with the following config:

        upstream lb {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }

        server {
            listen 88;
            server_name localhost;

            location /cas/ {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_connect_timeout 2;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The problem is that the "location /context/" has to match the context of the backend server, so when I request localhost/context/index.html, nginx routes it to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a different backend context and nginx location? For example, with "location /" nginx would route the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Thank you.
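
    If it helps, nginx itself can do this mapping: when proxy_pass is given a URI part, the portion of the request URI that matched the location prefix is replaced by that URI before the request is sent upstream. A minimal sketch, with "context" standing in for whatever the backend application is deployed as:

        location / {
            proxy_pass http://lb/context/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }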


  • Route from Cisco ASA over site to site VPN

    - by Wookie321
    I want to be able to push f/w logging traffic to a server at a remote site. This server is accepting syslog traffic on port 514. In the ASA I've configured it to use this server as a syslog server. The Cisco f/w's inside interface address is 10.0.0.1 and I want to route over the link to an address of 192.168.1.1. The vpn is up and working between sites, and local clients at each site can access resources etc. How would I go about setting up the route from the f/w to this remote server only?
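
    Offered only as a hedged sketch of the pieces that usually matter here (the ACL name and entries are illustrative, not taken from this ASA's config): syslog traffic the ASA sources from its inside address has to be matched by the site-to-site crypto ACL on both ends, the syslog destination is tied to the inside interface, and management-style traffic over the tunnel generally needs management-access enabled:

        management-access inside
        logging enable
        logging host inside 192.168.1.1
        ! the crypto ACL (and its mirror at the far end) must also cover traffic
        ! from the firewall's inside address to the remote syslog server
        access-list VPN-TRAFFIC extended permit ip host 10.0.0.1 host 192.168.1.1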


  • Is it possible to rate-limit an scp/sftp/rsync/etc transfer from the command-line? ie, manual QoS on

    - by warren
    Specifically, I am looking to rate-limit an scp or sftp session (or other arbitrary network call) in the call itself. For example, let's say I want to copy 100MB to one server, and 1GB to another. I'd like to be able to run both of these at the same time, but maintain a QoS for "normal" computer usage - somewhat similar to how you can rate-limit bittorrent. Is there a way to do this without touching the networking hardware? I'm envisioning something akin to:

        magic-qos-tool 'scp file user@host:/path/to/file'

    Or:

        scp -rate 40kbps file user@host:/path/to/file
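
    A few real knobs cover this without touching the network hardware; treat the numbers as examples. scp's -l option takes a limit in Kbit/s, rsync's --bwlimit takes KB/s, and trickle is a userspace shaper that can wrap arbitrary commands:

        # cap the transfer at roughly 800 Kbit/s
        scp -l 800 file user@host:/path/to/file
        # cap rsync-over-ssh at 100 KB/s
        rsync --bwlimit=100 -e ssh file user@host:/path/to/file
        # wrap any command; -u/-d are upload/download limits in KB/s
        trickle -u 100 -d 100 scp file user@host:/path/to/file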


  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our Exchange server has suddenly stopped serving mobile devices. Attempts to connect result in our ActiveSync server returning HTTP 500. It is serving Outlook clients fine. Our server is Windows 2003 SBS 6.5 SP2. There are no abnormal events in the system log. I ran the "Exchange ActiveSync with AutoDiscover" test at https://www.testexchangeconnectivity.com/ I've noticed an abnormality in the Exchange properties; Log File Directory shows (as in the following image):

        Access denied.
        Facility: Win32
        ID no: 80070005
        Exchange System Manager

    I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356 I've run the "exchange activesync" connectivity test at testexchangeconnectivity.com:

        Attempting to Resolve the host name mail.immersive.com.au in DNS.
        Host successfully Resolved
        Additional Details
        IP(s) returned: 221.133.203.229

        Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open.
        The port was opened successfully.

        Testing SSL Certificate for validity.
        The certificate passed all validation requirements.
        Test Steps
        Validating certificate name
        Successfully validated the certificate name
        Additional Details
        Found hostname mail.immersive.com.au in Certificate Subject Common name
        Validating certificate trust for Windows Mobile Devices
        Certificate is trusted and all certificates are present in chain
        Additional Details
        Certificate is trusted for Windows Mobile 5 and Later platforms. Root = [email protected], CN=Thawte Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, S=Western Cape, C=ZA
        Testing certificate date to ensure validity
        Date Validation passed. The certificate is not expired.
        Additional Details
        Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM, NotAfter = 1/11/2010 3:59:59 PM

        Testing Http Authentication Methods for URL https://mail.immersive.com.au/Microsoft-Server-Activesync/
        Http Authentication Methods are correct
        Additional Details
        Found all expected authentication methods and no disallowed methods. Methods Found: Basic

        Attempting an Activesync session with server
        Errors were encountered while testing the ActiveSync session
        Test Steps
        Attempting to send OPTIONS command to server
        OPTIONS response was successfully received and is valid
        Additional Details
        Headers received:
        MicrosoftOfficeWebServer: 5.0_Pub
        Pragma: no-cache
        Public: OPTIONS, POST
        Allow: OPTIONS, POST
        MS-Server-ActiveSync: 6.5.7638.1
        MS-ASProtocolVersions: 1.0,2.0,2.1,2.5
        MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping
        Content-Length: 0
        Date: Thu, 16 Jul 2009 01:07:27 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Attempting FolderSync command on ActiveSync session
        FolderSync command test failed
        Tell me more about this issue and how to resolve it
        Additional Details
        Exchange


  • Troubleshooting server performance with a hosted server?

    - by ProfessionalAmateur
    We are in a tough spot. We have a hosted server with the following specs:

    - OS: Windows Server 2008 R2 Enterprise SP1 64bit
    - Processor: Intel Xeon X7550 @ 2GHz (8 processors)
    - RAM: 16GB

    The file system is on a SAN or NAS (not sure). We are seeing very odd issues where a user will open a 25MB .xslb file and it sometimes takes literally 60-120 seconds. The server is just dog slow for Excel. Resources are not being pegged, CPU never jumps up, plenty of RAM... it's just oddly slow. Our host has been looking at the issue for several weeks with not much to show for it. Is there a utility I can run myself that will help track down our issue? I have found Server Performance Advisor V1.0. Any experience in using it? Our host is ultimately responsible for fixing this, but we are going on 1 month and our users are losing patience. Any tips would be helpful.

