Search Results



  • Configure nginx for multiple node.js apps with their own domains

    - by udo
    I have a node webapp up and running with my nginx on Debian Squeeze. Now I want to add another one with its own domain, but when I do so, only the first app is served, and even if I go to the second domain I simply get redirected to the first webapp. I hope you can see what I did wrong here.

    example1.conf:

        upstream example1.com {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name www.example1.com;
            rewrite ^/(.*) http://example1.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example1.com;
            access_log /var/log/nginx/example1.com/access.log;
            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example1.com;
                proxy_redirect off;
            }
        }

    example2.conf:

        upstream example2.com {
            server 127.0.0.1:1111;
        }
        server {
            listen 80;
            server_name www.example2.com;
            rewrite ^/(.*) http://example2.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example2.com;
            access_log /var/log/nginx/example2.com/access.log;
            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example2.com;
                proxy_redirect off;
            }
        }

    curl simply does this:

        zazzl:Desktop udo$ curl -I http://example2.com/
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.2
        Date: Sat, 04 Aug 2012 13:46:30 GMT
        Content-Type: text/html
        Content-Length: 184
        Connection: keep-alive
        Location: http://example1.com/

    Thanks :)
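
    The 301 to example1.com suggests requests for example2.com are being answered by example1's www-redirect block, which happens when nginx has no server block matching example2.com and falls back to the first one as the default. A minimal sketch of the usual checks, assuming a Debian-style sites-enabled layout (paths are illustrative):

        # make sure the second vhost is actually loaded, then verify and reload
        ln -s /etc/nginx/sites-available/example2.conf /etc/nginx/sites-enabled/example2.conf
        nginx -t
        /etc/init.d/nginx reload

    Naming the upstreams after real domains also invites confusion, since proxy_pass http://example1.com could resolve against DNS if the upstream block were ever missing; a distinct name such as node_app1 avoids the ambiguity.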


  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking, so I had to do some research. Here is what I have configured:

    username.conf:

        <Directory "/Users/me/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        DocumentRoot "/Users/me/Sites"
        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
        <Directory "Users/me/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    .htaccess:

        <IfModule mod_rewrite.so>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
            RewriteLog "/var/log/apache/rewrite.log"
            RewriteLogLevel 2
        </IfModule>

    Nothing has worked and it won't even log to the rewrite.log file. What have I done wrong? FYI, even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified the mod_rewrite module is running. I am really angry.
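
    Two likely culprits in the .htaccess above, offered as a sketch rather than a confirmed fix: <IfModule> tests a module identifier, which for mod_rewrite is mod_rewrite.c (not the .so filename), so the whole block is silently skipped; and RewriteLog/RewriteLogLevel are only valid at server or virtual-host level, not in .htaccess:

        # .htaccess — corrected guard; logging directives moved out
        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
        </IfModule>

        # httpd.conf (server level) — RewriteLog is not allowed in .htaccess
        RewriteLog "/var/log/apache2/rewrite.log"
        RewriteLogLevel 2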


  • Configure a Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure squid with http_access allow all, everything works fine. My current configuration (mostly the default) is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80          # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21          # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443         # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70          # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210         # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535  # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280         # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488         # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591         # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777         # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I start to get a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    The squid server fails to start. Any suggestions on what I am missing in the config? Please help!
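
    Both symptoms are consistent with squid (which runs as the proxy user on Ubuntu) lacking write access to the custom log and cache paths, and with the new cache_dir never having been initialized. A sketch of the usual remedy, assuming the default Ubuntu service name and user:

        sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
        sudo chown -R proxy:proxy /home/panshul/squidCache
        sudo squid3 -z                  # build the cache directory structure
        sudo service squid3 restart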


  • Problems with Vista loading a temporary user profile.

    - by Joe
    I'm having a problem in Vista. My machine has four users, one for each of us in the house. Whenever a user logs in before me, then logs out, and then I log in, Vista loads a temporary profile for me. However, if I restart and log in, I get into my profile with no problem. Two errors are written to the event log (see below), and I've searched everywhere for solutions.

    1: Windows was unable to load the registry. The problem is often caused by insufficient memory or insufficient security rights.
       DETAIL - The process cannot access the file because it is being used by another process. for C:\users\joe\ntuser.dat

    I've got plenty of disk space and memory.

    2: Windows cannot load the locally stored profile. Possible causes of this error include insufficient security rights or a corrupt local profile.
       DETAIL - The process cannot access the file because it is being used by another process.

    Thanks!


  • Nginx: proxy one domain to another domain with no URL change

    - by Evgeniy
    My question is in the subject. I have one domain; this is nginx's config for it:

        server {
            listen 80;
            server_name connect3.domain.ru www.connect3.domain.ru;
            access_log /var/log/nginx/connect3.domain.ru.access.log;
            error_log /var/log/nginx/connect3.domain.ru.error.log;
            root /home/httpd/vhosts/html;
            index index.html index.htm index.php;
            location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
                root /home/httpd/vhosts/html;
                access_log off;
                expires 1d;
            }
            location ~ /\.(git|ht|svn) {
                deny all;
            }
            location / {
                #rewrite ^ http://connect2.domain.ru/;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_hide_header "Cache-Control";
                add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
                proxy_hide_header "Pragma";
                add_header Pragma "no-cache";
                expires -1;
                add_header Last-Modified $sent_http_Expires;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    I need to proxy the connect3.domain.ru host to connect2.domain.ru, but with no URL change in the browser's address bar. My commented-out rewrite line could reach the other host, but it is just a redirect, so I cannot stay on the same URL. I know this question is easy, but please help. Thank you.
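
    To keep the browser URL unchanged, the request has to be proxied rather than redirected: replace the commented rewrite with a proxy_pass to the other host. A minimal sketch, assuming connect2.domain.ru is resolvable from this server and its vhost selects on the Host header:

        location / {
            proxy_pass http://connect2.domain.ru;        # proxy, don't redirect
            proxy_set_header Host connect2.domain.ru;    # what the upstream vhost expects
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }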


  • OpenVPN connected but no internet access on the client

    - by Stefan
    I've set up OpenVPN following this tutorial, and everything works fine except that I don't have an internet connection on the client while connected to the VPN: http://www.howtoforge.com/internet-and-lan-over-vpn-using-openvpn-linux-server-windows-linux-clients-works-for-gaming-and-through-firewalls

    My VPS server config is as follows (Ubuntu):

        dev tun
        proto udp
        port 1194
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        status /var/log/openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway local def1"
        # set the dns servers
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        log-append /var/log/openvpn
        comp-lzo
        plugin /usr/lib/openvpn/openvpn-auth-pam.so common-auth

    My client config is as follows (Windows 7):

        dev tun
        client
        proto udp
        remote XXX.XXX.XXX.XXX 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert stefan.crt
        key stefan.key
        comp-lzo
        verb 3
        auth-user-pass
        redirect-gateway local def1

    I've turned off the firewall on the server for testing purposes (it doesn't help), and tried both wired and wireless connections on the client. I've tried many Google results... but nothing seems to help. Can you help me? Thanks so far...
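
    Two things commonly cause exactly this symptom. First, the local flag in push "redirect-gateway local def1" is meant for clients on the same LAN as the server; with a remote VPS it installs a broken default route, so dropping local (push "redirect-gateway def1") is usually the first step. Second, the server must forward and NAT the tunnel subnet. A sketch, assuming the VPS's public interface is eth0:

        # on the server: enable forwarding and NAT for the tun subnet
        echo 1 > /proc/sys/net/ipv4/ip_forward          # persist via /etc/sysctl.conf
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE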


  • Running two different website domains on one IP address

    - by Akshar Prabhu Desai
    Here is my apache configuration file. I have two domain names running on the same IP, but I want them to point to different webapps. In this case both point to the one intended for e-yantra.org. If I copy-paste the akshar.co.in part before the e-yantra.org part, both start pointing to akshar.co.in. I have two A DNS entries (one per domain name) pointing to the same IP.

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.e-yantra.org
            ServerAdmin [email protected]
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/ci/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/db2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.akshar.co.in
            ServerAdmin [email protected]
            DocumentRoot /var/akshar.co.in
            <Directory /var/akshar.co.in/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
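
    Since each vhost's ServerName only covers the www form, a request for a bare domain matches neither vhost and Apache falls back to the first one defined — which fits both the "everything goes to e-yantra.org" behaviour and its reversal when the blocks are swapped. A sketch of the usual fix:

        <VirtualHost *:80>
            ServerName www.e-yantra.org
            ServerAlias e-yantra.org
            ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.akshar.co.in
            ServerAlias akshar.co.in
            ...
        </VirtualHost>

    Running apache2ctl -S shows which vhost Apache considers the default and how the names map.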


  • Virtualhost setup for Ruby on Rails application (mod passenger)

    - by Ingo86
    Hi all, I'm trying to install Redmine under apache. The apache server works on a local network. My apache setup consists of a single virtual host; I can get into different directories simply by using the corresponding path: http://ip_address/folder_of_the_project_1. How can I set up the virtualhost to make Redmine work in this situation? Here is my current virtualhost setup:

        NameVirtualHost *
        <VirtualHost *>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            RailsBaseURI /redmine
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Thank you, Ingo86
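
    Passenger's sub-URI deployment expects the application's public directory to be reachable inside the DocumentRoot via a symlink whose name matches RailsBaseURI. A sketch, assuming Redmine is installed outside the docroot (the source path is illustrative):

        # /var/www/redmine must point at Redmine's public/ directory
        sudo ln -s /opt/redmine/public /var/www/redmine
        sudo /etc/init.d/apache2 restart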


  • Nginx + PHP5-FPM repeated cut-outs (502)

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am doing this across three 1GB machines, each with the same configuration and the same problems.

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:

        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf

        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes

        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
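
    With nothing obviously wrong in the pool, the usual next step is to make the failures visible: surface worker crashes (APC segfaults were a common culprit on packages of that era) and slow scripts, and take the 30-second request kill out of the equation. A diagnostic sketch for www.conf, not a definitive fix:

        ; diagnostics — revert once the culprit is found
        pm.start_servers = 5                 ; explicit value between the spare limits
        request_terminate_timeout = 300      ; rule out the 30s kill as the 502 source
        request_slowlog_timeout = 10
        slowlog = /var/log/php-fpm.log.slow  ; shows which scripts stall
        catch_workers_output = yes           ; surfaces worker crashes in the FPM log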


  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com; however, when I type foobar.com I get the error 'Server not found' - why is this? My second question concerns the security implications of this directive in the configuration above:

        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? - and if yes, how is this not a security risk?
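
    'Server not found' is a browser DNS error: the request never reaches Apache, so the likely cause is a missing A record for the bare domain rather than anything in this config. As for the <Directory> block, Allow from all only lets Apache serve files under that path to HTTP clients; it grants no write access, since Directory sections control what the web server will hand out, not filesystem permissions. A quick check:

        # both names should return the server's address; if the first is empty, it's DNS
        dig +short foobar.com A
        dig +short www.foobar.com A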


  • Powershell Copy-Item fails silently

    - by R W
    I have a powershell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhd's to copy, then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression. One of the files isn't copied. It's relatively big, ~68GB; the others are 20GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file was copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | foreach-object {
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach ($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
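
    Given that the one failing file is far bigger than the rest and the destination is NTFS-compressed on an older OS, one hedged avenue is to take Copy-Item out of the picture and use robocopy, which retries and reports failures explicitly (paths and flags below are a sketch):

        # restartable copy with retries and an explicit log of any failure
        robocopy $VMSource $VMDestination *.vhd /Z /R:2 /W:10 /NP /LOG+:C:\logs\vhdcopy.log
        if ($LASTEXITCODE -ge 8) {   # robocopy exit codes >= 8 indicate copy failures
            Add-Content $logFileName "ERROR: robocopy exit code $LASTEXITCODE"
        }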


  • Why is apache serving the default?

    - by Matt
    I keep adding more vhosts and enabling them, but all the sites always go to the default vhost in sites-available. Here is what the default looks like, with me only changing the IP for security reasons:

        <VirtualHost 167.889.88.88:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    And here is my other one, which I named some-site.net:

        <VirtualHost *:80>
            ServerName some-site.net
            DocumentRoot "/var/www/vhosts/somesite.com/http/"
            <Directory "/var/www/vhosts/somesite.com/http/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    And I turned it on with this command:

        sudo a2ensite some-site.net
        Enabling site some-site.net.
        Run '/etc/init.d/apache2 reload' to activate new configuration!

    Then I reloaded:

        /etc/init.d/apache2 reload
         * Reloading web server config apache2 ...done.

    But when I visit the url some-site.net, I get the index page that is for the default vhost... what am I doing wrong?
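
    The default vhost binds to a specific IP:port while the new site uses *:80; when the address forms differ and no matching NameVirtualHost is in effect, Apache treats the IP-based vhost as an exact match and never consults the name-based ones. A sketch of the usual shape (addresses illustrative):

        # ports.conf / default site — make every vhost use the same address form
        NameVirtualHost *:80

        <VirtualHost *:80>        # the default site, now name-based like the rest
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ...
        </VirtualHost>

    Running apache2ctl -S prints the resulting vhost map and flags mismatches.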


  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine and PHP runs fine. The second problem is that I cannot password protect it. I first tried htpasswd; it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf:

        NameVirtualHost 12.12.12.12.
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
            <Directory "/var/www2/cgi-bin/">
                AllowOverride Options
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                AddHandler cgi-script cgi pl
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
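
    Two hedged observations. AddHandler expects dotted extensions, so cgi pl registers nothing — .cgi .pl is the usual form. And with AllowOverride None on /var/www2/, any .htaccess file there is ignored entirely; putting the auth block straight into the vhost sidesteps .htaccess issues, while endless re-prompting usually points at a wrong or unreadable AuthUserFile path. A sketch (the htpasswd path is illustrative):

        <Directory "/var/www2/cgi-bin/">
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AddHandler cgi-script .cgi .pl
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /var/www2/>
            AuthType Basic
            AuthName "Restricted"
            AuthUserFile /etc/apache2/.htpasswd   # created with: htpasswd -c /etc/apache2/.htpasswd username
            Require valid-user
        </Directory>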


  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get notified about new outbound connection attempts (Windows Firewall Notifier or Windows Firewall Control). The way these tools do it is by setting the firewall to deny by default and adding an auditing policy to log blocked connections to the Security event log. Then they watch the log and display a notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow rules for applications and system services. Here is the rule for CryptSvc, the service frequently used for certificate validation and revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to "all programs and services" (which is really unhealthy), connection denied events like the following will keep appearing in the security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID: 1476 (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction: Outbound
            Source Address: 192.168.0.1
            Source Port: 49616
            Destination Address: 2.16.52.16
            Destination Port: 80
            Protocol: 6 (<- TCP)

    To make sure it's CryptSvc, I have let the connection through and reviewed its traffic; I also configured CryptSvc to run in its own svchost instance to make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how do I fix that?
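
    One hedged avenue: service-scoped firewall rules are matched by the service's SID, and a service whose SID type is "none" presents no SID for the Windows Filtering Platform to match, so the rule never applies. Giving CryptSvc a service SID is worth testing (restart the service afterwards):

        :: give the service a SID so service-restricted WFP rules can match it
        sc sidtype CryptSvc unrestricted
        sc qsidtype CryptSvc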


  • Win2k8R2 / IIS 7.5 - users getting 503 response, no 503 error reported in logs

    - by merk
    I've got 2 web servers with mirrored content and a load balancer sitting in front of them. Starting yesterday we've been getting people complaining about 503 errors. I can't find any 503 errors in the IIS log file. However, the server host is saying these errors are due to .NET errors in our website which are causing the app pool to recycle. They pointed out several errors in the Windows application event log which look like this:

        Log Name: Application
        Source: ASP.NET 4.0.30319.0
        Date: 3/31/2012 8:35:37 PM
        Event ID: 1309
        Task Category: Web Event
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: 6251.local
        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 3/31/2012 8:35:37 PM
        Event time (UTC): 4/1/2012 1:35:37 AM
        Event ID: e7a580c7b38545cca3416a8595408f24
        Event sequence: 97
        Event occurrence: 1
        Event detail code: 0
        Application information:
            Application domain: /LM/W3SVC/2/ROOT-1-129777167518960645
            Trust level: Full
            Application Virtual Path: /
            Application Path: C:\inetpub\wwwroot\mywebsite\
            Machine name: 6252
        Process information:
            Process ID: 20000
            Process name: w3wp.exe
            Account name: IIS APPPOOL\MyAppPool

    In particular, they are saying that the account name under Process Information indicates that the app pool is recycling. They said that if the app pool were not recycling, the account name would instead be the folder where the website files are located. I checked the app pool settings - it's set to recycle every 29 hours, and rapid fail protection is set to the default of 5 failures in 5 minutes. But I have not seen 5 failures in the event log in that short of a time span. Can anyone help me confirm whether the 503 responses are indeed being generated by the app pools recycling? Or are these errors coming from somewhere else? My guess at the time was that their load balancer was the one actually returning the 503 error. But that was just a guess.
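
    Worth knowing here: 503s generated by HTTP.sys — which is what answers when an app pool is stopped or rapid-fail protection kicks in — never appear in the per-site IIS logs; they are recorded in the HTTPERR log instead. If that log is clean on both servers while users still see 503s, the load balancer becomes the prime suspect. A quick check (sketch):

        # any 503s issued by HTTP.sys itself will show up here
        Get-ChildItem 'C:\Windows\System32\LogFiles\HTTPERR' -Filter 'httperr*.log' |
            Select-String ' 503 ' |
            Select-Object -Last 20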


  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

        DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy_only job. But I think this means that I can't use that backup with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup. Many thanks in advance.
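
    The usual reading of this error: two independent consumers of the same log chain (the SQL Agent jobs and DPM) invalidate each other's backups. Marking the local full backup COPY_ONLY keeps it from disturbing the chain DPM relies on, and a copy-only full is still restorable together with subsequent log backups. A sketch (database and path illustrative):

        -- nightly full that leaves the log chain and differential base untouched
        BACKUP DATABASE [MyDatabase]
        TO DISK = N'D:\Backups\MyDatabase_full_copyonly.bak'
        WITH COPY_ONLY, INIT;

    The hourly log backups are the sharper conflict, since every log backup splits the chain; either hand log backups to DPM entirely, or make the local ones copy-only as well (BACKUP LOG ... WITH COPY_ONLY).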


  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple days later I noticed that the OSD deployments were giving errors, and after a couple hours of trying suggestions from Google, I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple days I have noticed that any new machine deployed or installing the client will show in the SCCM console with Client = "No". The client machines will show connected to a site, but the Software Center shows "IT Organization" instead of our site like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, uninstalling and reinstalling, and repairing does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs except for ClientMessaging.log, which repeats this line continuously:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks


  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors. Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over https, I get the message:

        The plain HTTP request was sent to HTTPS port.

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, are there any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
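
    A plausible cause, hedged: when a server line carries no port, HAProxy forwards to the same port the client connected on, so decrypted HTTPS connections arrive in plaintext on nginx's port 443 — producing precisely "The plain HTTP request was sent to HTTPS port". Pinning the backend port should remove the symptom:

        server web01 192.168.25.34:80 check inter 1s
        server web02 192.168.25.32:80 check inter 1s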


  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"
        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem.
        Please check the system event log for relevant failures.
        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I have tried a couple of tapes). I carry out the following:

        1. Open NTBackup
        2. Select the Backup tab
        3. Tick the box next to C:
        4. Ensure Destination is 4mm DDS
        5. Media is set to New
        6. Press Start Backup
        7. Choose Replace the data on the media and press Start Backup
        8. NTBackup tries to mount the media
        9. Error message shows: The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures.

    On checking the log I find the following:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8018
        Date: 04/02/2011
        Time: 22:02:02
        User: N/A
        Computer: SERVER
        Description: Begin Operation
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8019
        Date: 04/02/2011
        Time: 22:02:59
        User: N/A
        Computer: SERVER
        Description: End Operation: The operation was successfully completed. Consult the backup report for more details.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init, and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it?

    Debug info:

        Ubuntu 12.04.3 LTS
        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit

    Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"

        stop on shutdown

        respawn
        respawn 20 5

        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
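
    init-checkconf is meant to be run as an ordinary user (it spins up a session-level Upstart to test the job), so "ERROR: cannot run as root" is about how the tool was invoked, not about the job file itself. Separately, Upstart 1.4+ (as shipped in 12.04) can drop privileges itself, which avoids wrapping node in sudo — and the bare respawn 20 5 line was presumably meant to be respawn limit 20 5. A sketch:

        # /etc/init/queue.conf — Upstart drops privileges itself
        description "Echo.it command queue"

        stop on shutdown
        respawn
        respawn limit 20 5

        setuid beanstalk
        exec /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    Then check the file as a regular user, e.g. su someuser -c 'init-checkconf /etc/init/queue.conf'.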


  • "Password Server: Stopped" on Mac OS Lion Server. Stops with error -1 during startup

    - by V1ru8
    Since I restored the Open Directory from an archive (my server crashed and the DB was corrupt), the password server does not start anymore. The log looks like this:

        Feb 14 2012 21:41:20 156746us Mac OS X Password Service version 376.1 (pid = 2438) was started at: Tue Feb 14 21:41:20 2012.
        Feb 14 2012 21:41:20 156801us RunAppThread Created
        Feb 14 2012 21:41:20 156852us RunAppThread Started
        Feb 14 2012 21:41:20 156879us Initializing Server Globals ...
        Feb 14 2012 21:41:20 163094us Initializing Networking ...
        Feb 14 2012 21:41:20 163196us Initializing TCP ...
        Feb 14 2012 21:41:20 191790us SASL is using realm "SERVER.HOME.POST-NET.CH"
        Feb 14 2012 21:41:20 191847us Starting Central Thread ...
        Feb 14 2012 21:41:20 191860us Starting other server processes ...
        Feb 14 2012 21:41:20 191873us StartCentralThreads: 1 threads to stop
        Feb 14 2012 21:41:20 191905us Initializing TCP ...
        Feb 14 2012 21:41:20 191954us Starting TCP/IP Listener on ethernet interface, port 106
        Feb 14 2012 21:41:20 192012us Starting TCP/IP Listener on ethernet interface, port 3659
        Feb 14 2012 21:41:20 192048us Starting TCP/IP Listener on interface lo0, port 106
        Feb 14 2012 21:41:20 192082us Starting TCP/IP Listener on interface lo0, port 3659
        Feb 14 2012 21:41:20 192117us StartCentralThreads: Created 4 TCP/IP Connection Listeners
        Feb 14 2012 21:41:20 192132us Starting UNIX domain socket listener /var/run/passwordserver
        Feb 14 2012 21:41:20 193034us CRunAppThread::StartUp: caught error -1.
        Feb 14 2012 21:41:20 193056us ** ERROR: The Server received an error during startup. See error log for details.
        Feb 14 2012 21:41:20 193075us RunAppThread::StartUp() returned: 4294967295
        Feb 14 2012 21:41:20 193107us Stopping server processes ...
        Feb 14 2012 21:41:20 193119us Stopping Network Processes ...
        Feb 14 2012 21:41:20 193131us Deinitializing networking ...
        Feb 14 2012 21:41:20 193149us Server Processes Stopped ...
        Feb 14 2012 21:41:20 193165us RunAppThread Stopped
        Feb 14 2012 21:41:20 193202us Aborting Password Service. See error log.

    The error log repeats the following:

        Feb 14 2012 21:41:50 409022us Server received error -1 during startup.
        Feb 14 2012 21:41:50 409141us Aborting Password Service.

    Does anyone have an idea what's wrong here and how I can fix it?


  • Added IP-based virtual host to sites-available and created symlink to sites-enabled... but new domain times out

    - by lililili
    I added an IP-based virtual host to sites-available and created a symlink to sites-enabled, but the new domain times out. When I navigate to mynewdomain.com it says the connection timed out.

        NameVirtualHost 12.12.12.12
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            ServerName newdomain.com
            DocumentRoot /var/www/newdomain.com
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
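
    A timeout (rather than a 404 or the wrong site) usually means the request never reaches Apache at all, which points at DNS or the network path rather than the vhost file. A sketch of the usual checks (addresses illustrative):

        dig +short mynewdomain.com      # does the name resolve to 12.12.12.12?
        sudo netstat -tlnp | grep :80   # is apache listening on that address?
        apache2ctl -S                   # how apache mapped the vhosts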


  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point-In-Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax |grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"

        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
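
    This is very likely normal rsync behaviour rather than the script's fault: even for a local copy, rsync runs a sender and a receiver, and the receiver forks a generator — three processes from a single invocation. A parent/child view makes it easy to confirm they belong to one job:

        # one invocation should show one parent with two children
        ps -o pid,ppid,stat,cmd -C rsync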


  • Nginx: two Symfony2 web applications, one IP, no domain

    - by Krzysztof Koch
    I am having an irritating problem with nginx. I set up one application in /usr/share/nginx/www/firstapp and a second one in /usr/share/nginx/www/seccondapp. In my default conf I set up the root location / to serve the first app: when I enter 9.9.9.9 in the browser it shows me the first app, but when I go to 9.9.9.9/makeup it does not show me the second app. Why does the first app display fine while the second one does not? Please help me. Here is the config:

        server {
            listen 80;
            server_name localhost;
            root /usr/share/nginx/www/firstapp/web;
            access_log /var/log/nginx/$host.access.log;
            error_log /var/log/nginx/error.log error;

            # strip app.php/ prefix if it is present
            rewrite ^/app\.php/?(.*)$ /$1 permanent;

            location / {
                root /usr/share/nginx/www/firstapp/web/;
                index app.php;
                try_files $uri @rewriteapp;
            }

            location /makeup/ {
                alias /usr/share/nginx/www/seccondapp/web/;
                index app.php;
                try_files $uri @rewriteapp;
            }

            location @rewriteapp {
                rewrite ^(.*)$ /app.php/$1 last;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ ^/(app|app_dev)\.php(/|$) {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/lib/php5-fpm/www.sock;
                fastcgi_split_path_info ^(.+\.php)(/.*)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
                #fastcgi_param SERVER_PORT 80;
            }
        }
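
    The likely failure path, offered as an untested sketch: a /makeup request that misses a real file falls into @rewriteapp, becomes /app.php/..., and is then executed by the PHP location against $document_root — the first app — so the second front controller never runs. Giving /makeup its own rewrite target and an explicit SCRIPT_FILENAME keeps the two apps apart:

        location /makeup/ {
            alias /usr/share/nginx/www/seccondapp/web/;
            index app.php;
            try_files $uri @rewritemakeup;
        }

        location @rewritemakeup {
            rewrite ^/makeup/(.*)$ /makeup/app.php/$1 last;
        }

        location ~ ^/makeup/app\.php(/|$) {
            fastcgi_pass unix:/var/lib/php5-fpm/www.sock;
            include fastcgi_params;
            fastcgi_split_path_info ^(/makeup/app\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME /usr/share/nginx/www/seccondapp/web/app.php;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }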


  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly.

        8pm Mon-Fri - Full backup, followed by log backup
        7am-7pm Mon-Fri, at half-hour intervals - Log backup

    Our backups have been working in this manner since we migrated from SQL Server 2000 Standard to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for...

    All this week, every full backup on our two biggest databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete them. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be...

        Code: 0xC002F210 (Always this code, but a mix of the following descriptions...)

        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."

        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."

    Please help save my hair and sanity!!
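
    Operating system error 64 is ERROR_NETNAME_DELETED — "The specified network name is no longer available" — i.e. the SMB session to \\drserver dropped mid-write, which fits large or slow backups failing while local ones succeed. Pending a network-level fix, a common workaround is to back up locally and push the file with a restartable copy; a sketch (names and paths illustrative):

        -- fast, reliable local backup first
        BACKUP DATABASE [BigDb]
        TO DISK = N'D:\Backups\BigDb.bak'
        WITH COMPRESSION, INIT, STATS = 10;

    Then copy to the share in restartable mode with retries:

        robocopy D:\Backups \\drserver\SQLBackups BigDb.bak /Z /R:3 /W:30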

