Search Results

Search found 10695 results on 428 pages for 'none none'.


  • Can't get simple Apache VHost up and running

    - by TK Kocheran
    Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to all: <VirtualHost *:80>, but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (ie my dev server) and one for incoming requests via my domain name. Here's my new VHost: NameVirtualHost domain1.com <VirtualHost domain1.com:80> DocumentRoot /var/www ServerName domain1.com </VirtualHost> <VirtualHost domain2.com:80> DocumentRoot /var/www ServerName domain2.com </VirtualHost> After I restart my server, I see the following errors in my log: [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs What am I doing wrong? EDIT As per the answer give below, I have modified my configuration. Here are my configuration files: /etc/apache2/ports.conf: Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> Here are my actual defined sites: /etc/apache2/sites-enabled/000-localhost: NameVirtualHost 127.0.0.1:80 <VirtualHost 127.0.0.1:80> ServerAdmin ######### DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> RewriteEngine On RewriteLog "/var/log/apache2/mod_rewrite.log" RewriteLogLevel 9 <Location /> <Limit GET POST PUT> order allow,deny allow from all deny from 65.34.248.110 deny from 69.122.239.3 deny from 58.218.199.147 deny from 65.34.248.110 </Limit> </Location> </VirtualHost> /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org: NameVirtualHost rfkrocktk.dyndns.org:80 <VirtualHost rfkrocktk.dyndns.org:80> DocumentRoot /var/www ServerName rfkrocktk.dyndns.org </VirtualHost> And, just for kicks, my main file: /etc/apache2/apache2.conf: # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. 
Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ what else do I need to do to fix it? Should I be telling apache to listen on 127.0.0.1:80, or isn't it already listening there?
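     For reference, name-based virtual hosting on Apache 2.2 is normally declared against the wildcard address rather than a hostname, and apache2ctl -S prints which vhost will answer for which name. A minimal sketch, assuming the Debian/Ubuntu layout already in use here; the site file name and the second domain are placeholders:
        # sites-available/example (sketch): declare both hosts on *:80 and let ServerName select between them
        #   NameVirtualHost *:80
        #   <VirtualHost *:80>
        #       ServerName localhost
        #       DocumentRoot /var/www
        #   </VirtualHost>
        #   <VirtualHost *:80>
        #       ServerName example.dyndns.org
        #       DocumentRoot /var/www
        #   </VirtualHost>
        sudo a2ensite example && sudo apache2ctl configtest && sudo apache2ctl -S   # -S lists the parsed vhost table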

    Read the article

  • Server 2008 NAT Internet Not Working

    - by Jack
     I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection whose internet access I want to share with another private network. The server has two NICs, configured as follows:
     External NIC (dynamically assigned by ISP): IP 10.175.4.150, Subnet 255.255.192.0, Gateway 10.175.0.1, DNS 10.175.0.1
     Internal NIC: IP 172.16.254.1, Subnet 255.255.255.0, Gateway none, DNS none
     I have set the external NIC as the public interface with NAT enabled on it in the RRAS MMC, and the internal NIC as a private interface. I have also set up the DNS forwarding (or whatever it is called) in the NAT section. From a client (IP 172.16.254.2) I can ping the server and access files on it, but when I try to browse the web with the default gateway set to the internal NIC's IP, I end up getting a 404 page returned from the ISP's default gateway. I'm guessing it's possibly something to do with the double NAT. Pinging the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file: <Location /svn> DAV svn SVNParentPath /mnt/svn/new_repos SVNListParentPath on AuthName "VegiBanc Source Repository" AuthType basic AuthzLDAPAuthoritative off AuthBasicProvider ldap AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com" AuthLDAPBindPassword "swordfish" </Location> This works great. Any user in our Active Directory can access our Subversion repository. Now, I want to limit this to only people in the Active Directory group Development: <Location /svn> DAV svn SVNParentPath /mnt/svn/new_repos SVNListParentPath on AuthName "VegiBanc Source Repository" AuthType basic AuthzLDAPAuthoritative off AuthBasicProvider ldap AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com" AuthLDAPBindPassword "swordfish" Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com </Location> I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (Single line broken up for easier reading): [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752] vauth_ldap authenticate: user dweintraub authentication failed; URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter] And, I get this in my access_log: 10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401 10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535 Yes, I am in that group. (Or, at least how can I confirm that just to make sure that's not the issue. I have the SysinternalsSuite ADExplorer. It's where I'm getting all of my info.)
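     The "Bad search filter" error usually points at the filter/attribute part of the LDAP URL or at a group DN that is not written as a single comma-separated DN. As a rough sketch (the DNs are taken from the config above, the bind password is whatever the real one is), the group membership can be checked from the shell before touching httpd.conf:
        # verify the user's memberOf with the same bind account (OpenLDAP client tools)
        ldapsearch -x -H ldap://ldap.vegibanc.com \
          -D "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com" -W \
          -b "dc=vegibanc,dc=com" "(sAMAccountName=dweintraub)" memberOf
        # in httpd.conf, mod_authnz_ldap expects one DN with commas (and no stray '"' after sAMAccountName):
        #   Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com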

    Read the article

  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 for hosting owncloud. I configured a vhost file for owncloud, but every time I go on the site my browser downloads a ruby file. Here is my vhost configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName http://rsserver.fritz.box DocumentRoot /var/www/owncloud/ <Directory /var/www/owncloud/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Apache error log tells me: [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php mod_rewrite is enabled. Where is the problem?
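     Two things worth checking, sketched below for a Debian/Ubuntu apache2: whether PHP is actually being handled (a browser that downloads the file instead of rendering it usually means the PHP module is off), and whether ownCloud's .htaccess can use mod_rewrite, which needs FollowSymLinks or SymLinksIfOwnerMatch in effect for that directory:
        apache2ctl -M | grep -i -E 'php|rewrite'      # both modules should appear in the loaded list
        sudo a2enmod php5 rewrite && sudo service apache2 restart   # enable them if missing
        # if the 403 persists, check what the .htaccess overrides set for Options:
        grep -n "Options" /var/www/owncloud/.htaccess
        # adding "Options +FollowSymLinks" there (or in the <Directory> block) is the usual fix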

    Read the article

  • LSI MegaRAID on Linux returns to Optimal after degradation but shows a strange POST message

    - by kesrut
    Linux server box with LSI MegaRAID controller got degraded. But after some time RAID status changed to Optimal. Adapter 0 -- Virtual Drive Information: Virtual Drive: 0 (Target Id: 0) Name : RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 Size : 2.727 TB Mirror Data : 2.727 TB State : Optimal Strip Size : 256 KB Number Of Drives per span:2 Span Depth : 3 Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Disk's Default Encryption Type : None Is VD Cached: No But now I'm getting RAID BIOS POST message: Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs willl actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing. (Image: http://cl.ly/image/1h1O093b1i2d) So may it be battery issue caused problem ? I get information about battery: BatteryType: iBBU Voltage: 4001 mV Current: 0 mA Temperature: 22 C Battery State : Operational BBU Firmware Status: Charging Status : None Voltage : OK Temperature : OK Learn Cycle Requested : No Learn Cycle Active : No Learn Cycle Status : OK Learn Cycle Timeout : No I2c Errors Detected : No Battery Pack Missing : No Battery Replacement required : No Remaining Capacity Low : No Periodic Learn Required : No Transparent Learn : No No space to cache offload : No Pack is about to fail & should be replaced : No Cache Offload premium feature required : No Module microcode update required : No Where can be problem ? I'm disabled alarms, but get them if enabled. But don't know how find root of problem.
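     The POST message matches a battery-backup unit that is still charging or has a pending relearn, and the cache dropping from WriteBack to WriteThrough (visible in "Current Cache Policy" above) is the expected side effect. A few MegaCli commands that help narrow it down; the MegaCli64 binary name and adapter 0 are assumptions, adjust for the actual install:
        MegaCli64 -AdpBbuCmd -GetBbuStatus -a0          # charge state, learn-cycle status, "Battery Replacement required"
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -a0    # remaining vs. design capacity of the pack
        MegaCli64 -AdpEventLog -GetEvents -f events.log -a0   # controller event log, look for BBU entries
        MegaCli64 -AdpBbuCmd -BbuLearn -a0              # start a manual learn cycle (cache stays write-through while it runs)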

    Read the article

  • VMware Data Recovery error -3960 and Event ID 8193 on Windows Server 2003

    - by flooooo
    I've been trying to solve this problem since a few days now without any success. What I'm trying is to make a backup of a virtual machine running Windows Server 2003 SP 2 using VMware Data Recovery 2.0.0.1861. When starting the backup task it tries to make a snapshot of the virtual machine using VSS which fails with error: Event Type: Error Event Source: VSS Event Category: None Event ID: 8193 Date: 05.06.2012 Time: 12:12:01 User: N/A Computer: LEGOLAS Description: Volume Shadow Copy Service error: Unexpected error calling routine RegSaveKeyExW. hr = 0x800703f8. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 2d 20 43 6f 64 65 3a 20 - Code: 0008: 57 52 54 52 45 47 52 43 WRTREGRC 0010: 30 30 30 30 30 33 39 36 00000396 0018: 2d 20 43 61 6c 6c 3a 20 - Call: 0020: 57 52 54 52 45 47 52 43 WRTREGRC 0028: 30 30 30 30 30 33 31 38 00000318 0030: 2d 20 50 49 44 3a 20 20 - PID: 0038: 30 30 30 30 36 34 38 38 00006488 0040: 2d 20 54 49 44 3a 20 20 - TID: 0048: 30 30 30 30 34 33 38 34 00004384 0050: 2d 20 43 4d 44 3a 20 20 - CMD: 0058: 43 3a 5c 57 49 4e 44 4f C:\WINDO 0060: 57 53 5c 53 79 73 74 65 WS\Syste 0068: 6d 33 32 5c 76 73 73 76 m32\vssv 0070: 63 2e 65 78 65 20 20 20 c.exe 0078: 2d 20 55 73 65 72 3a 20 - User: 0080: 4e 54 20 41 55 54 48 4f NT AUTHO 0088: 52 49 54 59 5c 53 59 53 RITY\SYS 0090: 54 45 4d 20 20 20 20 20 TEM 0098: 2d 20 53 69 64 3a 20 20 - Sid: 00a0: 53 2d 31 2d 35 2d 31 38 S-1-5-18 This machine was converted p2v. I have no idea where to search for the problem and what to do. Google showed a few result but none of them were useful for me. Please help me. If you need further information I'll tell you - just ask!

    Read the article

  • Errors to do with modules when using net-snmp utils

    - by bob
    I was using the net-snmp packages that come with my linux distro (version 5.3.2.2), but wanted to do some work with the latest version of net-snmp (5.7), so tried compiling and installing the new source. It seemed to work ok but now I'm getting a load of errors when use net-snmp utils (snmpget, snmpset snmpwalk etc..) for example: $ snmptranslate -On SNMPv2-MIB::system.sysDescr MIB search path: /home/me/.snmp/mibs:/usr/local/share/snmp/mibs Cannot find module (SNMPv2-SMI) At line 6 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt Cannot find module (SNMPv2-TC): At line 9 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt Cannot find module (SNMPv2-MIB): At line 9 in (none) : <a lot of similar lines> : Cannot find module (NET-SNMP-VACM-MIB): At line 9 in (none) .1.3.6.1.2.1.1.1 From this I assumed perhaps that I was missing mibs from the 'MIB search path', so I looked at the first error 'Cannot find module (SNMPv2-SMI)', however it seems to be in the right directory: $ ls /usr/local/share/snmp/mibs/*SNMPv2-SMI* /usr/local/share/snmp/mibs/SNMPv2-SMI.txt And the same result for the other in the list.. so I'm wondering if anybody knows why it might not be finding the modules even though they seem to be in the search path?
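     When a freshly compiled net-snmp cannot find modules that are clearly on disk, it is often a leftover mibs/mibdirs setting from the old distro package rather than the files themselves. A quick way to rule out the search path, using the paths shown above:
        # force the directory and load all modules explicitly
        snmptranslate -M /usr/local/share/snmp/mibs -m ALL -On SNMPv2-MIB::sysDescr
        # show what the MIB parser is doing
        snmptranslate -Dparse-mibs -On SNMPv2-MIB::sysDescr 2>&1 | head
        # look for 'mibs'/'mibdirs' directives left over from the 5.3.2.2 install
        grep -R -n -i "mib" /etc/snmp/snmp.conf ~/.snmp/snmp.conf 2>/dev/null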

    Read the article

  • Mac OS X and root login enabled

    - by reza
    All I am running OSX 10.6.8 I have enabled root login through Directory Utility. I have assigned a password. I get an error when I try to ssh root@localhost. ssh -v root@localhost OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /Users/rrazavipour-lp/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: Connecting to localhost [127.0.0.1] port 22. debug1: Connection established. debug1: identity file /Users/rrazavipour-lp/.ssh/identity type -1 debug1: identity file /Users/rrazavipour-lp/.ssh/id_rsa type 1 debug1: identity file /Users/rrazavipour-lp/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2 debug1: match: OpenSSH_5.2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'localhost' is known and matches the RSA host key. debug1: Found key in /Users/rrazavipour-lp/.ssh/known_hosts:47 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_dsa debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /Users/rrazavipour-lp/.ssh/identity debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_rsa debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Next authentication method: keyboard-interactive Password: debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Authentications that can continue: publickey,keyboard-interactive debug1: No more authentication methods to try. Permission denied (publickey,keyboard-interactive). What I am doing wrong? I know I have the password correct.
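     Since the server only offers publickey and keyboard-interactive and then rejects the password, the sshd policy for root is the first thing to look at. A short sketch for OS X 10.6, where the daemon reads /etc/sshd_config and is launched on demand by launchd:
        sudo grep -E -n 'PermitRootLogin|PasswordAuthentication|ChallengeResponseAuthentication' /etc/sshd_config
        # PermitRootLogin must be 'yes' for a root password login; an edited config takes effect on the next connection
        dscl . -read /Users/root AuthenticationAuthority   # shows whether root actually has a usable password/auth authority set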

    Read the article

  • ImportError: No module named _socket? WSGI Deployment into Apache

    - by Sxkaur
    I am using WSGI 3.3 for python 2.7.3 (32bit) for Apache 2.2. I got the binary WSGI from http://code.google.com/p/modwsgi/downloads/detail?name=mod_wsgi-win32-ap22py27-3.3.so. I have been trying to deploy an application but keep on receiving the ImportError: no module named _socket. I have included my wsgi and error logs. APACHE config: #LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule wsgi_module modules/mod_wsgi.so <Directory C:/Users/xxxxd/Documents/cahd> AllowOverride None Options None Order deny,allow Allow from all </Directory> WSGIScriptAlias / C:/Users/xxxxd/Documents/cahd/cahd/django.wsgi import os, sys sys.path.append('C:/Users/xxxxd/Documents) sys.path.append('C:/Users/xxxxd/Documents/cahd/') os.environ['DJANGO_SETTINGS_MODULE'] = 'cahd.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() The error was: [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] Traceback (most recent call last): [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1 ]File "C:/Users/xxxxd/Documents/cahd/django.wsgi", line 10, in <module> [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import django.core.handlers.wsgi [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\core\\handlers\\wsgi.py", line 8, in <module> [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from django import http [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\http\\__init__.py", line 11, in <module> [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from urllib import urlencode, quote [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\urllib.py", line 26, in <module> [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import socket [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\socket.py", line 47, in <module> [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import _socket [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] ImportError: No module named _socket

    Read the article

  • Running two different website domains on one IP address

    - by Akshar Prabhu Desai
    Here is my apache configuration file. I have two domain names running on same ip but i want them to point to different webapps. But in this case both point to the one intended for e-yantra.org. If I copy paste akshar.co.in part before E-yantra.org both start pointing to akshar.co.in I have two A DNS entries (one per domain name) pointing to the same IP. NameVirtualHost *:80 <VirtualHost *:80> ServerName www.e-yantra.org ServerAdmin [email protected] DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> <Directory /var/www/ci/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> <Directory /var/www/db2/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:80> ServerName www.akshar.co.in ServerAdmin [email protected] DocumentRoot /var/akshar.co.in <Directory /var/akshar.co.in/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost>
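     Because the second vhost only declares ServerName www.akshar.co.in, any request that does not match that name exactly (plain akshar.co.in, for example) falls through to the first-listed vhost, which acts as the default. A quick check plus the usual fix; the ServerAlias lines are illustrative:
        apache2ctl -S          # prints the parsed NameVirtualHost/VirtualHost table and which host is the default
        # in each <VirtualHost *:80>, cover the bare domain as well as the www form:
        #   ServerName  www.akshar.co.in
        #   ServerAlias akshar.co.in
        sudo apache2ctl graceful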

    Read the article

  • Git push over HTTP (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push for git working through the "smart-http" mode using git-http-backend. However after many hours of testing and troubleshooting, I am still left with error: Cannot access URL http://localhost/git/hello.git/, return code 22 fatal: git-http-push failed` I am using latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this: <VirtualHost *:80> SetEnv GIT_PROJECT_ROOT /var/www/git SetEnv GIT_HTTP_EXPORT_ALL SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER DocumentRoot /var/www/git ScriptAliasMatch \ "(?x)^/(.*?)\.git/(HEAD | \ info/refs | \ objects/info/[^/]+ | \ git-(upload|receive)-pack)$" \ /usr/lib/git-core/git-http-backend/$1/$2 <Directory /var/www/git> Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have changed the ownership of the /var/www/git folder to root.www-data and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users but with the same outcome. The repositories were created using: sudo git init --bare --shared [repo-name] While looking at the apache2 access.log, it appears to me that WebDAV is trying to be used, and that git-http-backend is never fired: 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5" 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5" 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5" What am I doing wrong? Is it an issue with the version of git and/or apache that I am using perhaps? BTW: I have read all the git http related questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as duplicate.
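     The PROPFIND in the access log means the client fell back to the dumb WebDAV protocol, which in turn suggests the ScriptAliasMatch never handed the request to git-http-backend. A sketch of how to verify that the smart protocol is answering, using the repository name above:
        # a working smart-HTTP endpoint answers with this specific content type
        curl -si "http://localhost/git/hello.git/info/refs?service=git-receive-pack" | head -n 5
        #   expected: Content-Type: application/x-git-receive-pack-advertisement
        #   a plain text/plain answer means the static file was served and the CGI never ran
        sudo a2enmod cgi alias env        # modules git-http-backend needs (Apache 2.2, prefork assumed)
        sudo service apache2 restart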

    Read the article

  • Apt pin and self-hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages where we want to control the version. Crucially this includes puppet, which can be sensitive to versions being different. I want our desktops to only get puppet from our repository, but also for people to be able to add their own PPAs, enable backports etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list: deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse deb http://deb.example.org/apt/ubuntu/lucid/ binary/ And in /etc/apt/preferences.d/puppet: Package: puppet puppet-common Pin: release a=binary Pin-Priority: 800 Package: puppet puppet-common Pin: release a=lucid-backports Pin-Priority: -10 Currently policy says: $ sudo apt-cache policy puppet puppet: Installed: (none) Candidate: (none) Package pin: 2.7.1-1ubuntu3.6~lucid1 Version table: 2.7.1-1ubuntu3.6~lucid1 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages 100 /var/lib/dpkg/status 2.6.14-1puppetlabs1 -10 500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages 0.25.4-2ubuntu6.8 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages 500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages 0.25.4-2ubuntu6 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages If I use n= instead of a= then I get Package pin: (not found) I'm just plain confused at this point as to what I should use. Any help appreciated.
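     Pinning by archive name is fragile for a small flat repository; pinning by origin (the hostname in the deb line) is usually more reliable. A sketch of a preferences file using the hostname from the sources.list above, followed by the check:
        printf 'Package: puppet puppet-common\nPin: origin "deb.example.org"\nPin-Priority: 1001\n' | sudo tee /etc/apt/preferences.d/puppet
        sudo apt-get update
        apt-cache policy puppet    # the 2.6.14-1puppetlabs1 build should now carry priority 1001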

    Read the article

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    We have online shopping site. When I am going to checkout page i am getting a error like this "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)" From the apache error log i can see some attempts to connect to api.paypal.com. Here is the part of my apache error log * About to connect() to api.paypal.com port 443 (#0) * Trying 66.211.168.123... * connected * Connected to api.paypal.com (66.211.168.123) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure * Closing connection #0 When i tried to connect to api.paypal.com using curl i am getting a error like this curl -iv https://api.paypal.com/ * About to connect() to api.paypal.com port 443 (#0) * Trying 66.211.168.91... connected * Connected to api.paypal.com (66.211.168.91) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Request CERT (13): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS alert, Server hello (2): * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure * Closing connection #0 curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure Can anyone help me to figure out this. Thanks in Advance. Arun S
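     The handshake dies right after the server's certificate request, which is the pattern you get when api.paypal.com expects a client certificate and none is sent; the usual endpoint for signature-based (non-certificate) API credentials is api-3t.paypal.com. A hedged way to test both ideas from the shell (the certificate file names are placeholders):
        # does the handshake complete once a client certificate is presented?
        openssl s_client -connect api.paypal.com:443 -cert paypal_cert.pem -key paypal_key.pem </dev/null
        # or, if the account uses API signature credentials instead of a certificate:
        curl -iv https://api-3t.paypal.com/nvp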

    Read the article

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using dante for testing purposes. However, I can't even get it to work with a web browser, after looking at several tutorials on how to do that. I've tried in both IE and Firefox, in both cases, using "Manual proxy configuration", leave everything blank except for SOCKS host, and then put in the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page" and don't see anything in danted, even running in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in danted debug output, so I know that much is working. Here's my config: logoutput: stdout /var/log/danted/danted.log internal: eth0 port = 1080 external: eth0 method: username none #rfc931 user.privileged: proxy user.notprivileged: nobody user.libwrap: nobody connecttimeout: 30 # on a lan, this should be enough if method is "none". client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 } client pass { from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0 } client block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error } block { from: 0.0.0.0/0 to: 127.0.0.0/8 log: connect error } pass { from: 10.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp } pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp } block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error } I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late 90's.
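     Before touching the browsers it is worth confirming the proxy with a client that reports errors verbosely; curl can speak SOCKS5 with either local or proxy-side DNS, which also shows whether name resolution is the part that fails (proxy address as above):
        curl -v --socks5 10.0.0.40:1080 http://example.com/             # DNS resolved on the client
        curl -v --socks5-hostname 10.0.0.40:1080 http://example.com/    # DNS resolved by danted
        # "Server not found" in Firefox with an otherwise silent danted often just means the browser
        # is still resolving names locally and failing before it ever opens the SOCKS connection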

    Read the article

  • Failed to connect from slave to master with error "error connecting to master (1045)"

    - by Victor Lin
    I try to setup replication from slave to the master. CHANGE MASTER TO MASTER_HOST = 'master', MASTER_PORT = 3306, MASTER_USER = 'repl', MASTER_PASSWORD = 'xxx'; And I did grant privileges to the user on master. I can connect with mysql command from slave machine to the master mysql -h master -u repl -p mysql> show grants; GRANT RELOAD, SUPER, REPLICATION SLAVE, CREATE USER ON *.* TO 'repl'@'xxx' IDENTIFIED BY PASSWORD 'xxx' mysql> select 1; +---+ | 1 | +---+ | 1 | +---+ 1 row in set (0.04 sec) As you can see, privileges are correct, connection works fine, but however, the connection for replication to master always failed. mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: Read_Master_Log_Pos: 4 Relay_Log_File: slave-replay-bin.000002 Relay_Log_Pos: 4 Relay_Master_Log_File: Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 0 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master:3306' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec) Is this caused by different version of MySQL server? The version of master is 5.0.77, and the slave is 5.5.13. But all articles I could find tell me that it's okay to replicate from a newer slave to old master. How to solve this problem? -- Update -- I even try to upgrade the old MySQL, but still, the problem is not solved. mysql> show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: master Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: master-bin.000007 Read_Master_Log_Pos: 107 Relay_Log_File: slave-replay-bin.000001 Relay_Log_Pos: 4 Relay_Master_Log_File: master-bin.000007 Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 107 Relay_Log_Space: 107 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repl@master' - retry-time: 60 retries: 86400 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 0 1 row in set (0.00 sec)
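     One frequently hit detail worth checking: the replication password stored by CHANGE MASTER TO is limited to 32 characters in these MySQL versions, so a longer password works for the mysql client but fails the slave's IO thread with error 1045. A short sketch of re-checking and re-entering the credentials on the slave:
        # from the slave, prove host/user/password outside of replication
        mysql -h master -u repl -p -e 'SELECT CURRENT_USER();'
        # then, at the slave's mysql prompt, re-issue the credentials (keep the password at 32 characters or fewer):
        #   STOP SLAVE;
        #   CHANGE MASTER TO MASTER_HOST='master', MASTER_PORT=3306,
        #                    MASTER_USER='repl', MASTER_PASSWORD='xxx';
        #   START SLAVE;
        #   SHOW SLAVE STATUS\G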

    Read the article

  • Tell postfix to merge three Authentication-Results: lines into one?

    - by Peter
    I am running a postfix mta with debian wheezy. I am using postfix-policyd-spf-python, openkdim and opendmarc. When receiving e-mails from google (google apps with own domain) for example, the header looks like this: [...] Authentication-Results: mail.xx.de; dkim=pass reason="1024-bit key; insecure key" header.d=yyy.com [email protected] header.b=OswLe0N+; dkim-adsp=pass; dkim-atps=neutral<br> [...] Authentication-Results: mail.xx.de; spf=pass (sender SPF authorized) smtp.mailfrom=yyy.com (client-ip=2a00:1450:400c:c00::242; helo=mail-wg0-x242.google.com; [email protected]; [email protected]) [...] Authentication-Results: mail.xx.de; dmarc=pass header.from=yyy.com<br> [...] This means any of these programs creates it's own Authentication-Results:-Line. Is it possible to tell postfix to merge this into one single Authentication-Results:-Line? When I send an e-mail to google, it says: [...] Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates xxx.xxx.xxx.xxx as permitted sender) [email protected]; dkim=pass [email protected]; dmarc=pass (p=NONE dis=NONE) header.from=xxx.com [...] And this is exactly what I want. Just one Authentication-Results-Header. How can I do this? Thanks. Regards, Peter

    Read the article

  • Dual boot Windows 8 Pro and Windows 7 on XPS 8500 Special Edition

    - by Jesse
    I am trying to install a dual boot with windows 7 premium and windows 8 Pro on an XPS 8500 special edition. I created a new primary partition on my C: drive, inserted the windows 8 install disk, and rebooted my computer from DVD. I select custom install and the dialog box saying "Where do you want to install windows at?" pops up but none of my drives are listed. Please help me determine what is going on. I don't understand why none of my drives are showing up on this menu. Not even the original drive. When I go to load driver and click on the partition I created it tells me "No signed device drivers were found. Make sure the installation media contains the correct drivers, and then click OK." resolved above issue by running setup from the source folder on the install disk instead of booting from DVD. Was able to locate my new partition and start install. It completes the first step of "Copying windows files" just fine but then on the next step "Getting files ready for installation" my computer restarts and attempts to load windows 8 but keeps telling me my pc needs to restart. This keeps going on in an infinite boot loop. Please help, this has been a nightmare!

    Read the article

  • Online resizing of KVM guest root filesystem?

    - by Bittrance
    I have a Linux guest that uses an LVM volume directly as root file system (that is, there is no partition table). libvirt config looks thus: <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <kernel>/boot/vmlinuz-X.Y.Z.el6.x86_64</kernel> <initrd>/boot/initramfs-X.Y.Z.el6.x86_64.img</initrd> <cmdline>console=ttyS0 root=/dev/vda</cmdline> <boot dev='hd'/> </os> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' io='native'/> <source dev='/dev/vg/guest'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> From inside the guest: $ mount /dev/vda on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Is it possible to resize the guest's root partition without rebooting the guest? Just doing lvextend on the host and resize2fs from the guest does not seem to be enough.
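     With a raw LVM-backed virtio disk this can be done live, assuming a libvirt/qemu new enough to support blockresize and a guest kernel that picks up the virtio capacity change; resize2fs can then grow the mounted ext4 root. A sketch using the names from the XML above (the domain name 'guest' and the +10G figure are assumptions):
        # on the host: grow the LV, then tell qemu the new size
        lvextend -L +10G /dev/vg/guest
        virsh blockresize guest vda 30G       # new total size of the vda disk
        # in the guest: confirm the block layer sees it, then grow ext4 online
        cat /sys/block/vda/size               # size in 512-byte sectors
        resize2fs /dev/vda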

    Read the article

  • Strange Apache WebDAV situation (OS X will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 webserver running on Linux on another box, and I have it configured to serve up webdav. Now here's the weird part, I can access the server just fine on my Mac using the "Connect to Server" dialog (even moved like 5GB of files over the connection). On my Ubuntu desktop cadaver will connect as well and allow me to browse. However when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, it gives me a 403 Forbidden error. My server does digest authentication if that makes any difference. Here's part of my apache2.conf file <VirtualHost *:80> DocumentRoot "/path" <Directory "/path"> Dav on AuthType Digest AuthName iTools AuthDigestDomain "/" AuthUserFile /path/to/WebDavUsers Options None AllowOverride None <LimitExcept GET HEAD OPTIONS> require valid-user </LimitExcept> Order allow,deny Allow from All </Directory> <Directory "/path/*/Public"> Options +Indexes </Directory> <Directory "/path/user"> <LimitExcept GET HEAD OPTIONS> require user user </LimitExcept> </Directory> </VirtualHost>
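     Reproducing the failing methods outside of GNOME/Xmarks usually shows where the 403 comes from; the gvfs client starts with OPTIONS and PROPFIND, which in this config must authenticate because of the LimitExcept block. A sketch with curl (substitute a real user and URL; the log path depends on the distribution):
        curl -v --digest -u user -X OPTIONS http://server/path/
        curl -v --digest -u user -X PROPFIND -H "Depth: 1" http://server/path/
        # watch the error log at the same time to see which rule produces the 403
        tail -f /var/log/apache2/error.log    # or /var/log/httpd/error_log on Red Hat-style systems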

    Read the article

  • CentOS OpenVZ fails to boot after kernel update

    - by SkechBoy
    After upgrading to latest OpenVZ kernel CentOS server won't boot. When i try go boot the latest kernel server is stuck at this point: (note that images are taken from virtual kvm) http://i.stack.imgur.com/4lusz.jpg Then i try to start the server on some old kernels and than i get this error message: kernel panic - not syncing - attempted to kill init better shown on this image: http://i.stack.imgur.com/2SReF.jpg Here is some useful information fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 2995.7 GB, 2995739688960 bytes 255 heads, 63 sectors/track, 364211 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0004c4e4 Device Boot Start End Blocks Id System /dev/sda1 1 523 4199044+ 82 Linux swap / Solaris /dev/sda2 524 785 2104515 83 Linux /dev/sda3 786 261869 2097157230 83 Linux /dev/sda4 261870 364211 822062115 83 Linux /etc/fstab proc /proc proc defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 /dev/sda1 none swap sw 0 0 /dev/sda2 /boot ext3 defaults 0 0 /dev/sda3 / ext3 defaults 0 0 /dev/sda4 /home ext3 defaults 0 0 and grub config file: title OpenVZ (2.6.18-274.18.1.el5.028stab098.1) root (hd0,1) kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0 initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img title OpenVZ (2.6.18-274.7.1.el5.028stab095.1) root (hd0,1) kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0 initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img title OpenVZ (2.6.18-194.8.1.el5.028stab070.4) root (hd0,1) kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317 initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img Any help is greatly appreciated Thanks.

    Read the article

  • How to create btrfs RAID-1 filesystem (assertion error in mkfs.btrfs)?

    - by amcnabb
    I tried to make a btrfs RAID-1 filesystem in "degraded mode" by following the btrfs UseCases instructions but hit a fatal assertion error. Why is this failing, and is there any workaround? The instructions I followed are at: https://btrfs.wiki.kernel.org/articles/u/s/e/UseCases_8bd8.html The output of the mkfs.btrfs and btrfs filesystem show commands is: # mkfs.btrfs -m raid1 -d raid1 /dev/sdd1 /dev/loop1 WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL WARNING! - see http://btrfs.wiki.kernel.org before using failed to read /dev/sr0 adding device /dev/loop1 id 2 mkfs.btrfs: volumes.c:802: btrfs_alloc_chunk: Assertion `!(ret)' failed. zsh: abort (core dumped) mkfs.btrfs -m raid1 -d raid1 /dev/sdd1 /dev/loop1 # btrfs filesystem show failed to read /dev/sr0 Label: none uuid: 773908b8-acca-4c30-85c5-6642b06de22b Total devices 1 FS bytes used 28.00KB devid 1 size 223.13GB used 2.04GB path /dev/sda5 Label: none uuid: 0f06f1a8-5f5f-4b92-a55c-b827bcbcc840 Total devices 2 FS bytes used 24.00KB devid 2 size 2.00GB used 0.00 path /dev/loop1 devid 1 size 1.36TB used 20.00MB path /dev/sdd1 Btrfs Btrfs v0.19 # EDIT: It turns out that the filesystem isn't mountable: # mount /dev/sdd1 /mnt/big2 mount: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so # So, why did the mkfs fail, and is there any workaround?
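     The assertion tends to appear when one of the devices is too small for the requested RAID-1 chunk layout (the 2 GB loop file is a likely culprit) or with the old v0.19 userspace. With a newer kernel and btrfs-progs, the same end state is usually reached the other way round, as a rough sketch (/dev/sde1 stands in for whatever the real second disk is):
        mkfs.btrfs /dev/sdd1                       # single-device filesystem first
        mount /dev/sdd1 /mnt/big2
        btrfs device add /dev/sde1 /mnt/big2       # add the real second disk, not a small loop file
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/big2   # convert data and metadata to RAID-1
        btrfs filesystem show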

    Read the article

  • bond0 and xen = crash

    - by Rajat
    Bonding with xen 1 - Stop all guests. Reboot dom0 after running "chkconfig xend off" and "chkconfig xendomains off". 2 - Configure bond0 by enslaving eth0 and eth1 to it. I added the below two entries to /etc/modprobe.conf. alias bond0 bonding options bond0 mode=6,miimon=100 Content of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 USERCTL=no ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none Content of /etc/sysconfig/network-scripts/ifcfg-eth1 DEVICE=eth1 USERCTL=no ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none Content of /etc/sysconfig/network-scripts/ifcfg-bond0 DEVICE=bond0 IPADDR= NETMASK= ONBOOT=yes BOOTPROTO=static USERCTL=no Did "modprobe bond0" and "service network restart" after that. 3 - Edit /etc/xen/xend-config.sxp Change (network-script network-bridge) To (network-script 'network-bridge netdev=bond0') 4 - Start xend. "service xend start". 5 - chkconfig xend on. 6 - modprode bond0 7 - more /proc/net/bonding/bond0 8 - Create guest images as usual and bridge it to xenbr0. about config i did for my xen kernel rhel 5.3 after i reboot the host server i get in place bond0 get pbond0 and its get disconnect from network only i ping to my vm's on the host server any one have any idea why xen bond0 is acting like that or what is solutions to come out of pbond0 to bond0.
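     A common way around the flaky network-bridge renaming (bond0 turning into pbond0) is to take the bridge out of xend's hands: build xenbr0 with the normal RHEL ifcfg files, enslave bond0 to it, and make xend's network-script a no-op. A sketch, with mode=1 (active-backup) because the ARP-balancing mode 6 generally misbehaves under a bridge:
        # /etc/sysconfig/network-scripts/ifcfg-xenbr0 :  DEVICE=xenbr0  TYPE=Bridge  BOOTPROTO=static  IPADDR=...  NETMASK=...  ONBOOT=yes
        # /etc/sysconfig/network-scripts/ifcfg-bond0  :  DEVICE=bond0   BRIDGE=xenbr0  ONBOOT=yes  BOOTPROTO=none
        # /etc/modprobe.conf                          :  options bond0 mode=1 miimon=100
        # /etc/xen/xend-config.sxp                    :  (network-script /bin/true)
        service network restart && service xend restart
        brctl show    # bond0 should now be a port of xenbr0, with no pbond0 renaming involved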

    Read the article

  • Running WordPress and Ghost on Apache with mod_proxy

    - by Jack Perry
    I currently have three WordPress sites hosted on Apache with virtual host files to direct the right domain to the right DocumentRoot. Ghost (node.js) just came out and I've wanted to tinker with it and just play around on one of my spare domains. I'm not really interested in moving over to nginx so I'm trying to get Ghost working on Apache via mod_proxy. I've managed to get Ghost working on my spare domain, but I think there's a problem with my virtual host files, as all of my other domains start pointing to Ghost as well. Here are two virtual host files, one for my main WordPress site that works fine, and the second for Ghost. Domains removed and replaced with DOMAIN and DOMAIN2. DOMAIN <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName DOMAIN.com ServerAlias www.DOMAIN.com DocumentRoot /var/www/DOMAIN.com/public_html <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/DOMAIN.com/public_html> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> DOMAIN2 <VirtualHost IP:80> ServerAdmin EMAIL ServerName DOMAIN2.com ServerAlias www.DOMAIN2.com ProxyPreserveHost on ProxyPass / http://IP:2368/ </VirtualHost> I get the feeling I'm not working with virtual hosts or mod_proxy right, and Google-fu has let me down after many suggested attempts. Any ideas? Thanks!
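     Mixing an IP-based <VirtualHost IP:80> with the name-based <VirtualHost *:80> set is the usual cause of this: once a connection matches the IP-specific vhost, the *:80 name-based hosts are never consulted, so every domain lands on the Ghost proxy. A sketch of the Ghost vhost rewritten to join the same name-based set (127.0.0.1:2368 assumes Ghost runs on the same host on its default port):
        sudo a2enmod proxy proxy_http
        # sites-available/ghost (sketch):
        #   <VirtualHost *:80>
        #       ServerName  DOMAIN2.com
        #       ServerAlias www.DOMAIN2.com
        #       ProxyPreserveHost On
        #       ProxyPass        / http://127.0.0.1:2368/
        #       ProxyPassReverse / http://127.0.0.1:2368/
        #   </VirtualHost>
        sudo a2ensite ghost && sudo apache2ctl -S && sudo service apache2 reload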

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login using ssh_config. I know OpenSSH can see the ssh_config file since I'm specifying the identify file in it. The section specific to the host in ssh_config is: Host hostname HostName hostname IdentityFile ~/.ssh/id_dsa User username Compression yes If I do ssh username@hostname it works. Trying using ssh_config only gives: F:\>ssh -v hostname OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010 debug1: Connecting to hostname [XX.XX.XX.XX] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debia n-3ubuntu5 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
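     It helps to separate "is the config file being read" from "is the User option being applied"; -F forces a specific config file, and the server side records which username actually arrived. A couple of checks (the config path is the one this OpenSSH build treats as $HOME/.ssh/config):
        ssh -v -F /cygdrive/f/progs/OpenSSH/home/.ssh/config hostname   # force the config file you think is in use
        ssh -o User=username hostname                                   # apply the user explicitly, bypassing the Host block
        # on the Mint box, /var/log/auth.log records which username the client sent,
        # which settles whether the Host block's User line is being honoured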

    Read the article

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NtBackup to back up the C: Drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file: Backup Status Operation: Backup Active backup destination: 4mm DDS Media name: "Media created 04/02/2011 at 21:56" Error: The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures. Error: C: is not a valid drive, or you do not have access. The operation did not successfully complete. I'm using a brand new SATA Quantum Dat-72 drive with a brand new tape (tried a couple of tapes). I carry out the following: Open NTBackup Select Backup Tab Tick the box next to C: Ensure Destination is 4mm DDS Media is set to New Press Start Backup Choose Replace the data on the media and press Start Backup NTBackup tries to mount the media Error Message shows: The device reported an error on a request to read data from media. Error reported: INvalid command. There may be a hardware or media problem. Please check the system event log for relevant failures. On checking the log I find the following: Event Type: Information Event Source: NTBackup Event Category: None Event ID: 8018 Date: 04/02/2011 Time: 22:02:02 User: N/A Computer: SERVER Description: Begin Operation For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. and then; Event Type: Information Event Source: NTBackup Event Category: None Event ID: 8019 Date: 04/02/2011 Time: 22:02:59 User: N/A Computer: SERVER Description: End Operation: The operation was successfully completed. Consult the backup report for more details. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article
