Search Results

Search found 11568 results on 463 pages for 'config spec'.


  • Fedora 14 update problem

    - by Marko
    How is everybody doing? :) I've been having this problem with the Fedora 14 update for the last couple of weeks. When I run yum update I get the following result:

        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        Please report this error in http://yum.baseurl.org/report
        ** Found 9 pre-existing rpmdb problem(s), 'yum check' output follows:
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of libpython2.6.so.1.0
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of python(abi) = ('0', '2.6', None)
        1:kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-195.36.15-1.fc12.1.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.10', '90.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-195.36.31-1.fc12.2.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.16', '150.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-195.36.31-1.fc12.5.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.21', '168.fc12.i686.PAE')
        mysql-workbench-gpl-5.2.28-1fc13.i386 has missing requires of libpython2.6.so.1.0
        pysvn-1.7.2-1.fc13.i686 has missing requires of python(abi) = ('0', '2.6', None)
        system-config-display-2.2-1.fc12.i686 has missing requires of libpython2.6.so.1.0
        system-config-display-2.2-1.fc12.i686 has missing requires of python(abi) = ('0', '2.6', None)

    Does anybody have a similar issue?
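
    Note: all the conflicts above come from leftover fc12 kernel-module packages that no longer match any installed kernel. One possible cleanup before retrying the update (a sketch, not verified on this system; the glob pattern is an assumption, and package-cleanup is part of yum-utils):

        # remove the orphaned nvidia kmods built against long-gone fc12 kernels
        sudo yum remove "kmod-nvidia-*fc12*"
        # list any other packages still built for older releases
        package-cleanup --orphans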

    Read the article

  • Mount EC2 instance via SSH on Mac OS X

    - by darkporter
    OK, I just can't figure this out. I have an EC2 instance, which I'm able to SSH into just fine with:

        ssh -i XXXX.pem [email protected]

    I can even make it slick from the command line by creating a ~/.ssh/config with this in it:

        Host XXXX
            HostName XXXX
            User ubuntu
            IdentityFile ~/.ec2/XXXX.pem

    which allows me to simply do ssh XXXX with no -i option. Now I want to mount this via SSH. I've tried MacFuse/SSHFS, MacFusion and ExpanDrive, but no luck. It's supposed to "just work", but the SSH-related command-line utilities and the Keychain Access program in OS X are confusing and opaque to me. From what I've read, these GUI programs don't care about .ssh/config; they care about the Keychain. Somehow I can associate the domain name I'm connecting to with a particular "identity" private key file (.pem file), but I have no idea how. I tried this:

        ssh-add -K XXXX.pem

    which does add the key to the Keychain, but it's not associated with a particular domain. These GUI mounting programs I mentioned all just spin and do nothing when I try to connect passwordless. No keychain prompt, no nothing. I've pretty much given up and I'm thinking about just setting up an SMB server, but I'd rather go over SSH since I believe it's possible.
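
    Since command-line sshfs shells out to ssh, it should honor the same ~/.ssh/config Host entry that already works; a minimal sketch (mount point assumed, defer_permissions is a MacFUSE-specific option worth verifying):

        mkdir -p ~/ec2
        sshfs XXXX: ~/ec2 -o reconnect,defer_permissions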

    Read the article

  • Apache Server Status page on port 8443

    - by batman
    I'm very new to Apache. I tried to enable the server-status page of Apache. I added status.conf and status.load to the mods-enabled directory, and changed apache2.conf to include the whole mods-enabled directory. This is the config of status.conf:

        <IfModule mod_status.c>
            # Allow server status reports generated by mod_status,
            # with the URL of http://servername/server-status
            # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1 ::1
                # Allow from 192.0.2.0/24
            </Location>
            # Keep track of extended status information for each request
            ExtendedStatus On
            # Determine if mod_status displays the first 63 characters of a request or
            # the last 63, assuming the request itself is greater than 63 chars.
            # Default: Off
            #SeeRequestTail On
            <IfModule mod_proxy.c>
                # Show Proxy LoadBalancer status in mod_status
                ProxyStatus On
            </IfModule>
        </IfModule>

    These are the default settings. I restarted my server. I'm redirecting all ports to 8443, which in turn turns my requests into localhost:8443/server-status, and that throws a 404 error. Is there any way to get around this? Thanks in advance.
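
    Since the Location block above only allows 127.0.0.1, one quick sanity check (assuming curl is available on the server itself) is to request the handler locally on each port Apache listens on:

        curl -I http://127.0.0.1/server-status
        curl -I http://127.0.0.1:8443/server-status

    If the second request 404s while the first works, the 8443 virtual host likely doesn't inherit the handler and needs its own <Location /server-status> block.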

    Read the article

  • Nginx order of servers

    - by scrat
    I have 3 sites on my server. All are running on gunicorn and use unix sockets to communicate with nginx, which routes requests. I have three records in nginx.conf like:

        server {
            listen 80;
            server_name site1.com;
            location / {
                proxy_pass http://unix:/tmp/site1.sock;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    for site1, site2 and site3. If they are ordered so that the config for site1 goes first, followed by site2 and site3, everything works. But when I change the order, for example to site2, site1, site3, then site1 becomes routed to site2. What am I doing wrong? Full server nginx.conf before the server configs:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
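
    Worth knowing: when the Host header doesn't match any server_name, nginx falls back to the first server block defined for that port, which would explain the order sensitivity. A common guard (a sketch; returning 444 is one convention for dropping unmatched requests) is an explicit catch-all:

        server {
            listen 80 default_server;
            server_name _;
            return 444;
        }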

    Read the article

  • Use an SSH tunnel with phpMyAdmin

    - by JohnMerlino
    I've been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often, and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server through the SSH tunnel in phpMyAdmin, although I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the tunnel port open alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly specify 3307 as the port, so traffic forwards to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port option, so I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would then work with port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error:

        #1045 Cannot log in to the MySQL server
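
    For what it's worth, config-db.php holds the dbconfig-common settings for phpMyAdmin's own control connection, not the servers offered on the login page; those normally live in config.inc.php. A hedged sketch of adding the tunnel as a login-page server entry (file path per Debian/Ubuntu packaging):

        // in /etc/phpmyadmin/config.inc.php
        $i++;
        $cfg['Servers'][$i]['host'] = '127.0.0.1';
        $cfg['Servers'][$i]['port'] = '3307';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';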

    Read the article

  • Apache LDAP with local groups

    - by Greg Ogle
    I have a server that currently uses htpasswd to authenticate users. I'm migrating to LDAP, but my LDAP server is only for user authentication and doesn't allow me to add groups. I still need to use groups, as they are used for access control via the Apache Directory tags in my configuration. The alternative is to revisit the access control altogether, using PHP or something of the sort to limit access.

    This works for "basic" authentication:

        <Directory /misc/www/html/site>
            #LDAP & other config stuff irrelevant to issue
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com
        </Directory>

    What I attempted:

        <Directory /misc/www/html/site>
            #LDAP & other config stuff irrelevant to issue
            #groups file from previous configuration using htpasswd
            #tried to tweak to match new user format, but I don't think it looks up in here
            AuthGroupFile /misc/www/htpasswd/groups
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com
            #added the group, which is how it works when using htpasswd
            Require group xyz
        </Directory>
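
    Assuming Apache 2.2 with mod_authnz_ldap plus mod_authz_groupfile, a hedged sketch of letting LDAP authenticate while a flat file supplies the groups; the key is telling the LDAP module it is not authoritative for authorization, and the usernames in the group file must match what users type at the LDAP prompt:

        <Directory /misc/www/html/site>
            AuthType Basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative Off
            AuthGroupFile /misc/www/htpasswd/groups
            Require group xyz
        </Directory>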

    Read the article

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

        server {
            listen 443 default ssl;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;
            ssl_certificate /home/app/ssl/<redacted>.com.pem;
            ssl_certificate_key /home/app/ssl/<redacted>.key;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_redirect off;
            proxy_max_temp_file_size 0;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Here's the non-SSL config:

        server {
            listen 80;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Let me know if there's any additional info I can give to help diagnose the issue.
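
    One detail worth noting: Rails decides whether a request is secure from the X-Forwarded-Proto header, and the underscored X_FORWARDED_PROTO above arrives as a different header name. Also, proxy_set_header only affects proxy_pass, not Passenger; Passenger's own directive for injecting a CGI variable is a hedged alternative to try in the 443 block (verify against your Passenger 3 docs):

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;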

    Read the article

  • Linux boot: stop the kernel from switching to a new framebuffer mode and clearing output

    - by Avio
    I'm working on an embedded system (based on Ubuntu 12.04 LTS) and I'm customizing its kernel. I'm having some problems with upstart, mountall and plymouth. Nothing unsolvable, I suppose, but the real problem is that I can't properly diagnose what's going on, because the kernel (or maybe plymouth) changes the video mode in the middle of the boot process. This completely wipes entire lines of log output and prevents any debugging of kernel misconfigurations.

    My Grub2 config seems to be OK, with:

        GRUB_CMDLINE_LINUX=""
        GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth"
        GRUB_GFXMODE=1024x768x32
        GRUB_GFXPAYLOAD_LINUX=keep

    Here is some relevant output of lspci:

        00:00.0 Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)

    And here is the relevant portion of my kernel configuration:

        CONFIG_AGP=y
        CONFIG_AGP_INTEL=y
        CONFIG_VGA_ARB=y
        CONFIG_VGA_ARB_MAX_GPUS=16
        CONFIG_DRM=y
        CONFIG_DRM_KMS_HELPER=y
        CONFIG_DRM_I915=y
        CONFIG_DRM_I915_KMS=y
        CONFIG_VIDEO_OUTPUT_CONTROL=y
        CONFIG_FB=y
        CONFIG_FB_BOOT_VESA_SUPPORT=y
        CONFIG_FB_CFB_FILLRECT=y
        CONFIG_FB_CFB_COPYAREA=y
        CONFIG_FB_CFB_IMAGEBLIT=y
        CONFIG_FB_MODE_HELPERS=y
        CONFIG_FB_VESA=y
        CONFIG_BACKLIGHT_LCD_SUPPORT=y
        CONFIG_BACKLIGHT_CLASS_DEVICE=y
        CONFIG_VGA_CONSOLE=y
        CONFIG_VGACON_SOFT_SCROLLBACK=y
        CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=640
        CONFIG_DUMMY_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
        CONFIG_FONT_8x8=y
        CONFIG_FONT_8x16=y
        CONFIG_LOGO=y
        CONFIG_LOGO_LINUX_MONO=y
        CONFIG_LOGO_LINUX_VGA16=y
        CONFIG_LOGO_LINUX_CLUT224=y

    Every other custom/stock kernel boots fine with that Grub2 config. What I would like to have is a single flow of messages on a single console (retaining one screen resolution) from the boot logo till the login prompt. Does anybody know what I have to tweak to achieve this?
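
    With CONFIG_DRM_I915_KMS=y built in, the mid-boot mode switch is most likely i915 kernel mode setting taking over from the VESA console. A hedged experiment (the parameters are standard, though whether the GRUB_GFXPAYLOAD resolution survives depends on the driver) is to disable KMS for one boot by appending to the kernel command line:

        nomodeset i915.modeset=0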

    Read the article

  • Share one SSL certificate between multiple vhosts

    - by Cesar
    I have a setup like this:

        <VirtualHost 192.168.1.104:80>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            ServerName domain2
            DocumentRoot /home/domain2/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            DocumentRoot /home/domain3/public_html
            ServerName domain3
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:443>
            ServerName domain3
            SSLCertificateFile /usr/share/ssl/certs/certificate.crt
            SSLCertificateKeyFile /usr/share/ssl/private/private.key
            SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
            ...
        </VirtualHost>

    I want to use domain3's certificate for the other domains, preferably without having to repeat all the <VirtualHost 192.168.1.104:443> config. In other words, I want something like this: if a vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt).

    Notes:
    1. I will for sure be setting up more vhosts in the future.
    2. I know (and don't care) about the SSL warnings the browser will show (hostname mismatch).

    Is this possible? How?
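
    One property worth knowing: SSL* directives placed at the global (server config) level act as defaults that virtual hosts inherit unless they override them, so a hedged sketch is to hoist the certificate out of the domain3 vhost and keep each 443 vhost minimal:

        # outside any <VirtualHost>, as the shared default
        SSLCertificateFile /usr/share/ssl/certs/certificate.crt
        SSLCertificateKeyFile /usr/share/ssl/private/private.key
        SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle

        NameVirtualHost 192.168.1.104:443
        <VirtualHost 192.168.1.104:443>
            ServerName domain1
            SSLEngine on
            DocumentRoot /home/domain/public_html
        </VirtualHost>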

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the Pagespeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster.

    I'm trying to speed things up a little more, and I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) onto the root folder of the WordPress blog, and after that installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line

        define('WP_CACHE', true);

    in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should show the cached page, but it doesn't. It's easy to check by reloading (Cmd+R in Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this.

    On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing what I'm already doing. Any other component there? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm talking from the server perspective.)
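
    Worth noting: WordPress only loads the object-cache.php drop-in from wp-content, not from the site root, so the placement described above would leave Batcache with no cache backend. A quick check that anything is being cached at all (standard memcached stats command, default port assumed):

        # object-cache.php and advanced-cache.php both belong in wp-content/
        echo stats | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses'

    If the counters never move between reloads, WordPress isn't reaching memcached and Batcache has nowhere to store pages.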

    Read the article

  • OpenVPN as a way to connect to a LAN from another client, different from the server

    - by Einar
    Setup: one LAN handled by a router without a publicly available IP address but without any outbound connection restrictions ("target LAN"); a separate server publicly reachable from the Internet ("gateway"). I am trying to set up OpenVPN so that a third client can connect to the "gateway" and access the "target LAN". As the router of the "target LAN" is not reachable from the Internet directly, it connects to the gateway itself via OpenVPN as well.

    The problem is how to handle routing. The LAN router has two network interfaces (for the outside network and the LAN itself). In OpenVPN (the server on the gateway) I set:

        client-to-client
        push "route 192.168.10.0 255.255.255.0"

    but I assume this would be horribly wrong (it actually messed up the routing on the LAN router until I killed OpenVPN). OpenVPN is not using bridging; it is configured via tun. Other config details from the server:

        server 10.8.0.0 255.255.255.0
        client-config-dir ccd
        route 192.168.10.0 255.255.255.0

    And the client file in ccd is:

        iroute 192.168.10.0 255.255.255.0

    What can be adjusted to ensure that a third client can connect through OpenVPN and access the LAN mentioned earlier?
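
    The push as written sends the LAN route to every client, including the LAN router itself, which would explain the broken routing there. A hedged refinement (ccd file name assumed to match the router's certificate CN; push-reset suppresses all pushed options for that one client, so re-push anything it still needs) is to keep the route push for ordinary clients but exclude the router, and make sure the router actually forwards:

        # ccd/lan-router  (the client that IS the LAN)
        iroute 192.168.10.0 255.255.255.0
        push-reset

        # on the LAN router itself
        sysctl -w net.ipv4.ip_forward=1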

    Read the article

  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.NET 3.5 website running on Windows Server 2008 with IIS7. The session timeout period for this site is configured to be 20 minutes; however, it is currently lasting for between 40 and 50 seconds. After researching the problem, we investigated several configuration values which could be involved in the timeout period, but none of them are set to less than 20 minutes. The areas we looked at are as follows:

    - web.config system.web/sessionState element (20 minutes).
    - web.config system.web/authentication/forms element (not present, defaults to 30 minutes).
    - Sites/{website}/ASP/Session Properties/Time-out (20 minutes).
    - Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes).

    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available), so there is no performance-based reason for IIS to be recycling the application pool as far as we can tell. Are there any settings we've overlooked which could cause the sessions to expire so quickly?

    EDIT: A couple of additional points: this is not occurring in development, only on the server; and the session is not sliding (i.e. if we refresh the page a few times, it still times out approximately 40-50 seconds after the session was created).
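
    With InProc session state, every app-pool recycle discards all sessions, so one hedged check is to make IIS log exactly why the worker process recycles (property path per IIS7's appcmd; the pool name is a placeholder) and then watch the System event log, source WAS, while reproducing the timeout:

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.logEventOnRecycle:"Time,Memory,ConfigChange,PrivateMemory"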

    Read the article

  • Problems forwarding port 3306 on iptables with CentOS

    - by BoDiE2003
    I'm trying to add a rule for the MySQL server at 200.58.126.52 to allow access from 200.58.125.39, and I'm using the following rules (the whole iptables config of the VPS at my hosting provider). I can connect locally on the server that holds the MySQL service, as localhost, but not from outside. Can someone check whether the following rules are fine? Thank you.

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s 200.58.125.39 --dport 3306 -j ACCEPT
        -A INPUT -p tcp -s 200.58.125.39 --sport 1024:65535 -d localhost --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A OUTPUT -p tcp -s localhost --sport 3306 -d 200.58.125.39 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        COMMIT

    And this is the output of the connection attempt:

        [root@qwhosti /home/qwhosti/public_html/admin/config]
        # mysql -u user_db -p -h 200.58.126.52
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '200.58.126.52' (113)
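
    Note that in the dump above, the ACCEPT for port 3306 is appended after the chain's catch-all REJECT, so it can never match; error 113 ("No route to host") is consistent with the icmp-host-prohibited reject. A hedged fix is to insert the rule before the REJECT instead of appending it (position 10 is the REJECT's place as counted in the listing above; verify with iptables -L --line-numbers):

        iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -m tcp -p tcp -s 200.58.125.39 --dport 3306 -j ACCEPT
        service iptables save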

    Read the article

  • How do I start the Workstation Service so I can use `net use`?

    - by nitefrog
    I have a Windows 7 machine that logs into a domain. The machine can net view and see the different shares, but when I try

        net use * \\name\share

    I get an error stating that the service is not started. Yet when I issue a net start, it states the service is already started. My other Win7 machines work fine; they can see and use any of the shares. Is there a security setting that needs to be disabled or enabled? I really need to get this working, and I have no other ideas, as the other machines have no problem accessing the shares on different systems. The error I am getting is "The Workstation service has not been started", but as I said, other machines can connect fine, and when I issue net start workstation, it states the service is already started. In addition, the error number I am receiving is 2138.

    UPDATE: On the machine that is having issues, if I issue net view \\name I can see all the shares on the machine I want to connect to. When I try net use * \\name\sharename I get the error "The Workstation service has not started". I have set both

        sc config lanmanworkstation start = auto
        sc config lanmanserver start = auto

    on the Windows 7 computer that is having issues. I have rebooted the computer and still no dice. I can net view any computer on the network and see all shares, but I cannot access any of the shares that I can see. In the registry under HKLM\System\CurrentControlSet\Services, both LanmanServer and LanmanWorkstation have Start set to 2.

    Screen capture of net use and net view: (screenshot omitted)

    The Services panel: (screenshot omitted)

    This is really weird. What am I missing? It has to be a security setting...
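
    Error 2138 maps to NERR_WkstaNotStarted. A hedged diagnostic pass is to check the service's real state and its driver dependencies, since the Workstation service rides on the SMB redirector drivers and a blocked driver can leave the service nominally "running":

        sc query lanmanworkstation
        sc qc lanmanworkstation
        sc query mrxsmb20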

    Read the article

  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions, I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool, and I can't su into it to check which permissions it has.

    I'm trying to point lighttpd at /var/www/lighttpd, which has some example pages in it. The permissions for the files inside are set to -rw-r--r--, and the permissions for the folder containing them are drwxr-xr-x. Doesn't that mean that any user can view these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated.

    Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains:

        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = (
            ".html" => "text/html",
            ".txt" => "text/plain",
            ".jpg" => "image/jpeg",
            ".png" => "image/png"
        )

    and I was just trying to get the basic example page working.
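
    On Fedora, SELinux is another common cause of unexplained 404s/403s even when classic permissions look right, and system accounts like lighttpd have nologin shells, which is why su fails. A hedged check with standard tooling:

        ls -Z /var/www/lighttpd/
        sudo restorecon -Rv /var/www/lighttpd/
        # test file access as the service account despite its nologin shell
        sudo -u lighttpd cat /var/www/lighttpd/index.html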

    Read the article

  • Use preforker (Ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com (http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor), but superuser.com might be of more help to me. Can anyone answer this?

    I want to run a server program using the preforker Ruby gem under supervisor, but an error occurs. I wrote the following test program using preforker:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'preforker'

        Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master|
          while master.wants_me_alive? do
            puts "hello"
            sleep 10
          end
        end.run

    And the following supervisor config:

        [program:test-preforker]
        command=/home/tkono/tmp/test-preforker.rb
        stdout_logfile_maxbytes=1MB
        stderr_logfile_maxbytes=1MB
        stdout_logfile=/var/log/%(program_name)s.log
        stderr_logfile=/var/log/%(program_name)s.log
        autorestart=true

    Then, reload supervisor:

        # supervisorctl reload
        Restarted supervisord

    Here is the log file of supervisor:

        2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file)
        2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing
        2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized
        2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-12-13 17:50:47,215 INFO supervisord started with pid 12437
        2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440
        2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441
        2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442
        2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443
        2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly

    Please tell me what is wrong. Can a program using preforker not run with supervisor?

    preforker: https://github.com/dcadenas/preforker
    supervisor: http://supervisord.org/index.html
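
    The immediate exit-status-1 on every spawn suggests the process dies (or detaches) within milliseconds; supervisor requires programs to stay in the foreground, so if the gem daemonizes its master (an assumption worth checking in preforker's docs), supervisor will see the parent exit. A hedged first step is to reproduce outside supervisor and confirm the script is directly executable:

        chmod +x /home/tkono/tmp/test-preforker.rb
        /home/tkono/tmp/test-preforker.rb; echo "exit status: $?"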

    Read the article

  • Remove an apache alias subdirectory

    - by Hippyjim
    I'm using Apache 2 on Ubuntu 12.04. I added an alias for a subdirectory, pointing to gitweb. I realised I should probably make it accessible only over https, so I removed the alias and restarted Apache. But I can still navigate to http://xyz/gitweb, even with no alias in any of my config files. How do I remove it?

    EDIT: The config file looked like this before:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            Alias /gitweb/ /usr/share/gitweb/
            <Directory /usr/share/gitweb/>
                Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
                AllowOverride All
                order allow,deny
                Allow from all
                AddHandler cgi-script .cgi
                DirectoryIndex gitweb.cgi
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    And this after:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
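
    One thing to check: on Debian/Ubuntu the gitweb package ships its own Apache snippet (typically under /etc/apache2/conf.d/, an assumption worth verifying for 12.04), so the alias may be coming from outside this vhost file entirely. A hedged way to hunt it down:

        grep -ri gitweb /etc/apache2/
        apache2ctl -S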

    Read the article

  • How do I access the webserver on my stationary PC from my laptop?

    - by Steven
    I'm running Apache on my stationary PC and I would like to access a website on it from my laptop. This is some of the Apache config:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>

        ServerName localhost:80
        <Directory />
            Options FollowSymLinks
            AllowOverride all
            Order deny,allow
            Deny from all
        </Directory>

    On my laptop I've added the following to the HOSTS file:

        10.0.0.3 mysite.com

    But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update: I'm running WampServer 2.1 (Apache 2.2.17); Apache is up and running; I can ping 10.0.0.3 from the laptop; I'm not able to reach http://mysite.com from the laptop; IE gives me "403 Forbidden - The website declined to show this webpage". The only log that gets entries when I try to access the website from my laptop is access.log:

        10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202

    apache_error.log:

        [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    UPDATE 2: My Apache config has the following entry:

        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1

    Could it be that this Allow from is stopping other computers accessing the page?
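
    The 403 in access.log means the request arrives but is denied, which is consistent with that Allow from 127.0.0.1, and the vhost is also bound to the loopback address only. A hedged sketch for WAMP's Apache 2.2 (binding to all interfaces and admitting the LAN, with the subnet assumed from the 10.0.0.x addresses above):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot "I:/wamp/www/mysite/"
            <Directory "I:/wamp/www/mysite/">
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1 10.0.0.0/24
            </Directory>
        </VirtualHost>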

    Read the article

  • Server-side SSH jump hosts

    - by Dan Sosedoff
    Trying to figure out server-side SSH jump-host logic. Current network schema:

        [Client] <--> [Server A: hostname: a.com] <--> [Server B]
        [Client] <--> [Server A: hostname: b.com] <--> [Server C]

    Server A responds to both DNS records. Possible flow: the client opens an SSH connection with ssh [email protected]; Server A accepts it and should automatically jump the user onto Server B with ssh user2@server_b.com. The client opens an SSH connection with ssh [email protected]; Server A accepts it and should automatically jump the user onto Server C with ssh user2@server_c.com.

    In other words, the client should be able to connect to the target without performing any local configuration, assuming we have a stock ssh config. The problem with SSH jumps is that the user has to define hosts in the local ~/.ssh/config file, which is not acceptable in my case; it needs to be default sshd behavior. I'm aware that you can define a custom command in ~/.ssh/authorized_keys on the server, but I don't think there is a way to properly detect the source hostname the user tried to connect to. Is it possible at all?
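
    One constraint worth stating: unlike TLS, the SSH protocol carries no SNI-style "which hostname did you dial" field, so sshd can only distinguish connections by local address or port. A hedged sketch, assuming a.com and b.com resolve to two different IPs on Server A, that Server A holds keys for user2 on the targets, and a reasonably recent OpenSSH with Match LocalAddress support:

        # /etc/ssh/sshd_config on Server A (addresses are placeholders)
        Match LocalAddress 192.0.2.10
            ForceCommand ssh -q user2@server_b.com
        Match LocalAddress 192.0.2.11
            ForceCommand ssh -q user2@server_c.com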

    Read the article

  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up so that other websites on it are accessible internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public-facing, nor is it the master server on the internet.
    4) The DNS internally resolves to the right IP.
    5) Firefox reports "Server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths, of course). In turn, I have reloaded and restarted apache2 repeatedly.
    7) I have an entry to forward the .org, .info and .net alternative TLDs to .com in the vhost config for this domain, and my browser goes from .org to .com despite note 5.
    8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help?

    To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined it: YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up in the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible. This domain I am working with is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".
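
    "Server not found" is Firefox's DNS-failure message, which contradicts point 4, so a hedged way to split DNS from HTTP is to test each half separately from an internal Linux client (hostnames and IPs below are placeholders):

        dig newsite.example.com @10.0.0.2         # ask the internal bind9 directly
        getent hosts newsite.example.com          # what the OS resolver actually returns
        curl -v -H "Host: newsite.example.com" http://10.0.0.5/

    If curl with an explicit Host header returns the site, the vhost is fine and the problem is purely client-side name resolution.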

    Read the article

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtualhost:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:8080;
                proxy_cache one;
                proxy_cache_use_stale error timeout invalid_header updating;
                proxy_cache_key $scheme$host$request_uri;
                proxy_cache_valid 200 301 302 20m;
                proxy_cache_valid 404 1m;
                proxy_cache_valid any 15m;
            }

            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is that Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint of why nginx won't work.

    EDIT: The problem was that this isolated config wasn't being included by nginx.
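
    For a 502 like this, two standard checks are config validity (which would also have caught the missing include mentioned in the edit) and whether the backend answers locally:

        sudo nginx -t
        curl -I -H "Host: site.com" http://127.0.0.1:8080/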

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access in order to enable an external user to upload the content of his web page to his folder on my server, which Apache serves to the web. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded via WebDAV any more if the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to the problem. The idea was to specify an alias to the web page folder with WebDAV enabled there, and then set AllowOverride to None so that the .htaccess would have no effect. Of course, I then found out that the AllowOverride directive is not valid in <Location />.

    The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? Also, if possible, I would like a solution that permits the existence of the .htaccess, so that the user still has the power to set up access rules for his web page.
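
    One approach that keeps the .htaccess working for ordinary web visitors is to serve WebDAV from a separate virtual host that simply ignores per-directory files; AccessFileName is valid at server/vhost scope, so a hedged sketch (port and paths assumed):

        Listen 8081
        <VirtualHost *:8081>
            # look for a name that never exists, effectively disabling .htaccess in this vhost
            AccessFileName .htaccess-disabled
            Alias /folderDAV "d:/wamp/www/somewebsite/"
            <Location /folderDAV>
                Dav On
                AuthType Digest
                AuthName DAV-upload
                AuthUserFile "D:/wamp/passtore/user.passwd"
                AuthDigestProvider file
                Require valid-user
            </Location>
        </VirtualHost>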

    Read the article

  • Apache2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running with no problem, but all of a sudden requests have just stopped responding. They don't time out as such; the browser just keeps on waiting forever. Nothing is recorded in either the error log (set to debug level), the access log, or Windows' Event Log.

    The problem showed up when I added a new VHost and restarted. However, a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error-free. I've also disabled VHosts and tried with just localhost. I've tried to telnet to the web server, and it connects, but nothing happens: the prompt just goes blank, I can't type anything, and I effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the firewall entirely just to check it wasn't the cause. Still the same. If I stop Apache, however, the request fails immediately. I've uninstalled and reinstalled Apache in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port, but nothing changed.

    Does anybody have any suggestions to fix this? Or to perhaps figure out whether it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this, and I've been searching for hours without finding anything of use.

    Cheers, Adam
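
    The telnet symptom (connects, then dead air) fits a known class of silent hangs in Apache 2.2's AcceptEx()-based networking on Windows; a hedged experiment is to fall back to plain accept() in httpd.conf and restart (the AcceptFilter form applies to newer 2.2 builds; verify which your version supports):

        # Apache 2.2.x on Windows
        Win32DisableAcceptEx
        # or, on later 2.2 releases:
        AcceptFilter http none
        AcceptFilter https none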

    Read the article

  • Windows: force a user to use a specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution would be for the development team to update the code in the application, but in this case that's not an option. I was thinking I might be able to install VMware NICs for each user on the terminal server and do some type of scripting to force each user account to use a specific NIC. Anybody have any ideas on this?

    EDIT 1: I think I have a hack to work around my specific problem; however, I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to make a config file and a custom program folder for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, but it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.
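
    If the goal is just to pin one legacy app to one source IP without per-user virtual NICs, a third-party shim such as ForceBindIP is a commonly cited workaround; a hedged usage sketch (the IP and path are placeholders, and the exact invocation should be verified against the tool's own documentation):

        ForceBindIP.exe 192.168.10.50 "C:\LegacyApp\client.exe"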

    Read the article

  • Nginx server 301 Moved Permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81, I received this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    It seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
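
    The 301 carrying an X-Pingback header is WordPress itself issuing its canonical redirect, which it does when the requested host/port doesn't match the site URL stored in the database. A hedged sketch of pinning the URLs (these wp-config.php constants are standard WordPress; the domain is taken from the question):

        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');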

    Read the article
