Search Results

Search found 390 results on 16 pages for 'upstream'.


  • How to disable SSLCompression on Apache httpd 2.2.15?

    - by Stefan Lasiewski
    I read about the CRIME attack against TLS Compression (CRIME is a successor to the BEAST attack against ssl & tls), and I want to protect my webservers against this attack by disabling SSL Compression, which was added to Apache 2.2.22 (See Bug 53219). I am running Scientific Linux 6.1, which ships with httpd-2.2.15. Security fixes for upstream versions of httpd 2.2 should be backported to this version. # rpm -q httpd httpd-2.2.15-15.sl6.1.x86_64 # httpd -V Server version: Apache/2.2.15 (Unix) Server built: Feb 14 2012 09:47:14 Server's Module Magic Number: 20051115:24 Server loaded: APR 1.3.9, APR-Util 1.3.9 Compiled using: APR 1.3.9, APR-Util 1.3.9 I tried SSLCompression off in my configuration, but that results in the following error message: # /etc/init.d/httpd restart Stopping httpd: [ OK ] Starting httpd: Syntax error on line 147 of /etc/httpd/httpd.conf: Invalid command 'SSLCompression', perhaps misspelled or defined by a module not included in the server configuration [FAILED] Is it possible to disable SSLCompression with this version of Apache Webserver?

    Read the article

  • WSUS data store error

    - by Kalenus
    I have one upstream server and 17 downstream (configured as replicas). I'm getting this error in one of them and I'm stuck: "An error occurred with the server's data store". Details: SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. at Microsoft.UpdateServices.DatabaseAccess.DBConnection.DrainObsoleteConnections(SqlException e) at Microsoft.UpdateServices.DatabaseAccess.DBConnection.ExecuteReader() at Microsoft.UpdateServices.Internal.DataAccess.HideUpdatesForReplicaSync(String xmlUpdateIds) at Microsoft.UpdateServices.ServerSync.CatalogSyncAgentCore.ProcessHiddenUpdates(Guid[] hiddenUpdates) at Microsoft.UpdateServices.ServerSync.CatalogSyncAgentCore.ReplicaSync() at Microsoft.UpdateServices.ServerSync.CatalogSyncAgentCore.ExecuteSyncProtocol(Boolean allowRedirect) I already tried using the cleanup wizard to no avail.

    Read the article

  • Nginx Fastcgi performance issues

    - by Barry
    I am running several FastCGI servers behind Nginx: 3 Nginx workers and 6 FastCGI servers as upstream backends. When I run load tests at 1 req per sec, I can clearly see that the average reply time is 0.1 sec, but from time to time there are 3.1 sec responses. It is a suspiciously deterministic number, and it happens from time to time even at very small loads. Neither CPU nor memory is an issue at all. Any idea where this delay may be coming from? Any suggestions on how to debug this? Many thanks, Barry.
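    A possible direction to investigate (my assumption, not from the original post): a near-constant ~3 second outlier at low load is the classic signature of a dropped and retransmitted TCP SYN (for example when a listen queue briefly overflows), rather than slow application code. A minimal sketch of the knobs involved, with hypothetical backend ports:

        upstream fastcgi_backends {
            server 127.0.0.1:9000;    # assumed ports for the six FastCGI servers
            server 127.0.0.1:9001;
        }
        server {
            listen 80 backlog=1024;   # larger accept queue; check net.core.somaxconn as well
            location / {
                include fastcgi_params;
                fastcgi_connect_timeout 2s;    # surface connect stalls instead of hiding them
                fastcgi_pass fastcgi_backends;
            }
        }

    Capturing one of the slow requests with tcpdump would confirm or rule out the retransmission theory.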

    Read the article

  • Enable basic auth sitewide and disable it for subpages?

    - by piquadrat
    I have a relatively straightforward config: upstream appserver-1 { server unix:/var/www/example.com/app/tmp/gunicorn.sock fail_timeout=0; } server { listen 80; server_name example.com; location / { proxy_pass http://appserver-1; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; auth_basic "Restricted"; auth_basic_user_file /path/to/htpasswd; } location /api/ { auth_basic off; } } The goal is to use basic auth on the whole website, except on the /api/ subtree. While it does work with respect to basic auth, other directives like proxy_pass are also no longer in effect for /api/. Is it possible to just disable basic auth while retaining the other directives, without copy-pasting everything?
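    A hedged sketch of one way around this, using only directives from the config above: auth_basic off only switches off authentication, it does not pull in the proxy settings from location /, so /api/ needs its own proxy_pass; moving the shared proxy_set_header lines up to server level avoids duplicating them (they are inherited as long as a location sets none of its own):

        server {
            listen 80;
            server_name example.com;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location / {
                proxy_pass http://appserver-1;
                proxy_redirect off;
                auth_basic "Restricted";
                auth_basic_user_file /path/to/htpasswd;
            }

            location /api/ {
                auth_basic off;                  # no authentication for the API subtree
                proxy_pass http://appserver-1;   # proxy_pass is never inherited, so repeat it
                proxy_redirect off;
            }
        }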

    Read the article

  • Our wi-fi at work is ridiculously slow, will adding more range extenders improve it?

    - by john
    At work, we have two wireless networks (e.g., Work1 and Work2); Work2 is used downstairs and Work1 upstairs. However, both are notoriously slow. The connection is better when we are wired in, but unfortunately, because our building is very old and our company is growing very fast, most employees are not seated near the walls where the Ethernet cables are. I had Cox, our ISP, run a bandwidth utilization test and it doesn't seem like we are capping out on upstream/downstream, which leads me to believe that it's strictly an issue with the wireless networks (which were implemented before I got there). The wireless networks are both Apple AirPort Extremes. Is there anything I can do to improve the situation for everyone? Speeds are extremely slow, and the connection sometimes drops out.

    Read the article

  • Nginx virtual server only partially working with Java application

    - by MFB
    Final Cut Server is a Java application (made by Apple) which launches from a web page. I have Nginx in front of this web server (amongst others) and, whilst the web server can be browsed externally, the Java app fails to launch correctly and throws the errors below. Can anyone offer clues as to what additional config I many have to add to Nginx to get this working? My existing Nginx config: user xxxx; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/conf/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; server { server_name _; return 444; } upstream fcs-site { server 10.10.5.20:8080; } server { listen 80; server_name example.com 10.10.5.90; access_log /var/log/nginx/fcs_access.log; error_log /var/log/nginx/fcs_error.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://fcs-site; proxy_redirect off; } } upstream myapp-site { server 127.0.0.1:6543; } server { listen 80; server_name otherexample.com www.otherexample.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen 443; ssl on; ssl_certificate /etc/ssl/otherapp.crt; ssl_certificate_key /etc/ssl/otherapp.key; server_name otherexample.com www.otherexample.com; access_log /var/log/nginx/otherapp_access.log; error_log /var/log/nginx/other_error.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://myapp-site; proxy_redirect off; } location /static { root /www; expires 30d; add_header Cache-Control public; access_log off; } } Java errors: com.sun.deploy.net.FailedDownloadException: Unable to load resource: http://example.com:8080/FinalCutServer/FinalCutServer_mac.jnlp at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source) at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source) at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.launch(Unknown Source) at com.sun.javaws.Main.launchApp(Unknown Source) at com.sun.javaws.Main.continueInSecureThread(Unknown Source) at com.sun.javaws.Main.access$000(Unknown Source) at com.sun.javaws.Main$1.run(Unknown Source) at java.lang.Thread.run(Thread.java:722) java.net.ConnectException: Operation timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:378) at sun.net.www.http.HttpClient.openServer(HttpClient.java:473) at sun.net.www.http.HttpClient.(HttpClient.java:203) at sun.net.www.http.HttpClient.New(HttpClient.java:290) at sun.net.www.http.HttpClient.New(HttpClient.java:306) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931) at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849) at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source) at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source) at com.sun.deploy.net.BasicHttpRequest.doGetRequest(Unknown Source) at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source) at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source) at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source) at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source) at com.sun.javaws.Launcher.launch(Unknown Source) at com.sun.javaws.Main.launchApp(Unknown Source) at com.sun.javaws.Main.continueInSecureThread(Unknown Source) at com.sun.javaws.Main.access$000(Unknown Source) at com.sun.javaws.Main$1.run(Unknown Source) at java.lang.Thread.run(Thread.java:722)
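    One thing the stack trace suggests (an assumption on my part, not a confirmed fix): the .jnlp the client downloads points at http://example.com:8080/..., so Java Web Start bypasses the port-80 proxy and tries to reach the backend port directly, which then times out. A hedged sketch is to answer on 8080 at the proxy as well, reusing the existing upstream:

        server {
            listen 8080;                      # match the port baked into the .jnlp
            server_name example.com;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://fcs-site;   # same upstream as the existing port-80 server block
            }
        }

    The cleaner alternative would be getting Final Cut Server to write port 80 into the .jnlp, if it exposes such a setting.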

    Read the article

  • Nginx .zip files return 404

    - by Kenley Tomlin
    I have set up Nginx as a reverse proxy for Node and to serve my static files and user uploaded images. Everything is working beautifully except that I can't understand why Nginx can't find my .zip files. Here is my nginx.conf. user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; proxy_cache_path /var/www/web_cache levels=1:2 keys_zone=ooparoopaweb_cache:8m max_size=1000m inactive=600m; sendfile on; upstream *******_node { server 172.27.198.66:8888 max_fails=3 fail_timeout=20s; #fair weight_mode=idle no_rr } upstream ******_json_node { server 172.27.176.57:3300 max_fails=3 fail_timeout=20s; } server { #REDIRECT ALL HTTP REQUESTS FOR FRONT-END SITE TO HTTPS listen 80; server_name *******.com www.******.com; return 301 https://$host$request_uri; } server { #MOBILE APPLICATION PROXY TO NODE JSON listen 3300 ssl; ssl_certificate /*****/*******/json_ssl/server.crt; ssl_certificate_key /*****/******/json_ssl/server.key; server_name json.*******.com; location / { proxy_pass http://******_json_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #******.COM FRONT-END SITE PROXY TO NODE WEB SERVER listen 443 ssl; ssl_certificate /***/***/web_ssl/********.crt; ssl_certificate_key /****/*****/web_ssl/myserver.key; server_name mydomain.com www.mydomain.com; add_header Strict-Transport-Security max-age=500; location / { gzip on; gzip_types text/html text/css application/json application/x-javascript; proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #ADMIN SITE PROXY TO NODE BACK-END listen 80; server_name admin.mydomain.com; location / { proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { # SERVES STATIC FILES listen 80; listen 443 ssl; ssl_certificate /**/*****/server.crt; ssl_certificate_key /****/******/server.key; server_name static.domain.com; access_log static.domain.access.log; root /var/www/mystatic/; location ~*\.(jpeg|jpg|png|ico)$ { gzip on; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype image/png image/jpeg application/zip; expires 10d; add_header Cache-Control public; } location ~*\.zip { #internal; add_header Content-Type "application/zip"; add_header Content-Disposition "attachment; filename=gamezip.zip"; } } } include tcp.conf; Tcp.conf contains settings that allow Nginx to proxy websockets. I don't believe anything contained within it is relevant to this question. 
I also want to add that I want the zip files to be a forced download.
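    A debugging sketch, not a confirmed fix: making the zip location fully explicit helps separate a location-matching problem from a missing file, and it also covers the forced-download requirement without hard-coding a filename:

        location ~* \.zip$ {
            root /var/www/mystatic;                        # state the root explicitly while debugging
            default_type application/zip;
            add_header Content-Disposition "attachment";   # force a download
            # leaving `internal;` enabled would make every external request return 404
        }

    If the 404 persists, the nginx error log will show the exact path it tried to open, which usually points straight at the mismatch.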

    Read the article

  • OpenDNS servers initial response is very slow

    - by Ben Collins
    I've got a Time Warner cable ISP package (RoadRunner), and the modem they gave me doesn't allow me to specify which DNS servers to use; it always uses whatever the upstream DHCP server gives it. I prefer to use OpenDNS on my home network, so I've configured a couple of my PCs manually in the Windows adapter settings for IPv4 such that their IP addresses are obtained via DHCP, but the DNS server settings are fixed to the OpenDNS server IPs. Now, when I start up Windows on these PCs, it always takes 2-3 minutes to start receiving responses from the DNS servers; any request before that times out. While not debilitating, this is quite annoying. Any ideas why this might be happening?

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how troubleshoot this. I get "503 Service Unavailable" http error for all "nginx upstreams" proxy passing calls to haproxy fast_thin and slow_thin ( server 127.0.0.1:3100 and server 127.0.0.1:3200 ), which loadbalance on 6 Thin servers ( 127.0.0.1:3000 .. 3005 ). Static files like /blog are currently fine. The falldown is: nginx on port 80 - haproxy on 3100 and 3200 - thin on 3000 .. 3005 and then Rails. Here it is /etc/nginx/nginx.conf : user nginx; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; include /etc/nginx/conf.d/*.conf; } then /etc/nginx/conf.d/default.conf upstream fast_thin { server 127.0.0.1:3100; } upstream slow_thin { server 127.0.0.1:3200; } server { listen 80; server_name www.gitwatcher.com; rewrite ^/(.*) http://gitwatcher.com/$1 permanent; } server { listen 80; server_name gitwatcher.com; access_log /var/www/gitwatcher/log/access.log; error_log /var/www/gitwatcher/log/error.log; root /var/www/gitwatcher/public; # index index.html; location /about { proxy_pass http://fast_thin; break; } location /trends { proxy_pass http://slow_thin; break; } location /categories { proxy_pass http://slow_thin; break; } location /signout { proxy_pass http://slow_thin; break; } location /auth/github { proxy_pass http://slow_thin; break; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://slow_thin; break; } } } then haproxy config file /etc/haproxy/haproxy.cfg : global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon #debug #quiet nbproc 1 # number of processing cores defaults log global retries 3 maxconn 2000 contimeout 5000 mode http clitimeout 60000 # maximum inactivity time on the client side srvtimeout 30000 # maximum inactivity time on the server side timeout connect 4000 # maximum time to wait for a connection attempt to a server to succeed option httplog option dontlognull option redispatch option httpclose # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode) option abortonclose # enable early dropping of aborted requests from pending queue option httpchk # enable HTTP protocol to check on servers health option forwardfor # enable insert of X-Forwarded-For headers balance roundrobin # each server is used in turns, according to assigned weight stats enable # enable web-stats at /haproxy?stats stats auth haproxy:pr0xystats # force HTTP Auth to view stats stats refresh 5s # refresh rate of stats page listen rails_proxy 127.0.0.1:3100 # - equal weights on all servers # - maxconn will queue requests at HAProxy if limit is reached # - minconn dynamically scales the connection concurrency (bound my maxconn) depending on size of HAProxy queue # - check health every 20000 microseconds server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 
20000 server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000 listen slow_proxy 127.0.0.1:3200 # cluster for slow requests, lower the queues, check less frequently server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000 server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000 server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000 and the Thin config file /etc/thin/gitwatcher.yml : --- chdir: /var/www/gitwatcher environment: production address: 0.0.0.0 port: 3000 timeout: 30 log: log/thin.log pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 100 require: [] wait: 30 servers: 6 daemonize: true if I look into open listen ports, I got the following : root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin" nginx 834 root 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 835 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 837 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) haproxy 1908 haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN) haproxy 1908 haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN) root@fullness:/var/www/gitwatcher# iptables -L get me the following : Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:22222 ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT all -- anywhere anywhere DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Any help ?

    Read the article

  • How to disable proxy cache when query string is empty?

    - by chx
    With nginx I have server { listen 1.2.3.4:80 proxy_cache_valid 200 302 5m; location / { try_files $uri @upstream; root $root; } } When I go to http://example.com/foobar it generates a redirect to http://example.com/foobar?filter_distance=50&... which is visitor-dependent, so I would like not to cache this redirect. I need to bypass the cache when the query string is empty. I am a bit lost because location /foobar will match both.
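    A hedged sketch of one way to do this (the cache zone and upstream names are placeholders; the map block belongs in the http context): derive a flag from $args and feed it to both proxy_cache_bypass and proxy_no_cache, so empty-query-string requests are neither served from nor stored in the cache:

        map $args $skip_cache {
            ""      1;     # empty query string: bypass the cache
            default 0;
        }

        server {
            listen 1.2.3.4:80;
            proxy_cache_valid 200 302 5m;
            location / {
                proxy_cache        mycache;        # assumed zone defined via proxy_cache_path
                proxy_cache_bypass $skip_cache;    # don't answer these from cache
                proxy_no_cache     $skip_cache;    # ...and don't store them either
                proxy_pass         http://app_upstream;   # hypothetical backend
            }
        }

    This sketch proxies directly; with the try_files/@upstream layout from the question, the same two proxy_cache_* directives go inside the named location instead.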

    Read the article

  • nginx is not using gzip to talk to backend servers

    - by Michael Gorsuch
    Our web servers are running IIS 7 and are configured to compress dynamic and static content. When I hit these servers directly, gzip compression works. I recently placed nginx in front of them, and gzip compression has stopped. I was able to work around this by explicitly enabling gzip compression on nginx itself, but that seems a little inefficient considering I have half a dozen backends and only one active nginx box. It appears that nginx is stripping out the Accept-Encoding header. Does anyone have any advice for how to 'correct' this behavior? A sample configuration: upstream backend { server 127.0.0.1:8080; } server { listen 80; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; location / { proxy_pass http://backend; } }
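    A hedged sketch of what is often behind this: nginx does not normally remove Accept-Encoding, but it speaks HTTP/1.0 to upstreams by default, and IIS is commonly configured not to compress HTTP/1.0 responses. Switching the proxied connection to HTTP/1.1 (nginx 1.1.4+) and forwarding the client's Accept-Encoding explicitly is worth trying:

        upstream backend {
            server 127.0.0.1:8080;
        }
        server {
            listen 80;
            location / {
                proxy_http_version 1.1;                                    # IIS may refuse to gzip HTTP/1.0
                proxy_set_header   Connection "";                          # clean HTTP/1.1 upstream connections
                proxy_set_header   Accept-Encoding $http_accept_encoding;  # forward the client's header explicitly
                proxy_set_header   Host $host;
                proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass         http://backend;
            }
        }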

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config: upstream backend { server localhost:8080; ... more servers here } server { location /myloc { FORK-REQUEST http://my-other-url:3135/something proxy_pass http://backend; } } I would like nginx to send a copy of the request to the URL specified by FORK-REQUEST and after that load-balance it across the backend servers and return the response to the client. As I don't need the response from FORK-REQUEST, it would be best if this request were async so normal processing doesn't have to wait. Is a scenario like this possible?
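    On a current nginx this maps directly onto the mirror module (ngx_http_mirror_module, nginx 1.13.4+): the mirror subrequest is fired alongside the main request and its response is discarded, which is essentially the FORK-REQUEST semantics sketched above. A hedged example:

        upstream backend {
            server localhost:8080;
        }
        server {
            location /myloc {
                mirror /_fork;               # copy of each request; its response is ignored
                mirror_request_body on;
                proxy_pass http://backend;   # normal load-balanced handling for the client
            }
            location = /_fork {
                internal;
                proxy_pass http://my-other-url:3135/something;
            }
        }

    On older versions the usual workaround was the undocumented post_action directive, which only fires after the main response has been sent and is generally discouraged.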

    Read the article

  • What permissions / ownership to set on PHP Sessions Folder when running FastCGI / PHP-FPM (as user "nobody")?

    - by Professor Frink
    I'm having trouble getting a number of scripts running because PHP-FPM can't write to my session folder: "2009/10/01 23:54:07 [error] 17830#0: *24 FastCGI sent in stderr: "PHP Warning: Unknown: open(/var/lib/php/session/sess_cskfq4godj4ka2a637i5lq41o5, O_RDWR) failed: Permission denied (13) in Unknown on line 0 PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0" while reading upstream" Obviously this is a permission issue; my session folder's owner/group is the webserver's user, NGINX. PHP-FPM runs as nobody though, and hence adding it to the nginx group is not so trivial. A temporary solution is to set the permissions of /var/lib/php/session to 777 - I have a feeling that's not the "best practice" though. What is the best practice when you need to assign a daemon write access to a folder, but it is running as nobody ?

    Read the article

  • Packaging python software with custom dependencies

    - by viraptor
    Hi, I'm looking for a good way to package a Python application that is going to be deployed on a Debian server. The application itself depends on some modules which are not included in base Debian repository, although they might be in the future. This creates some problems... I depend on some patches to those modules. If the original module gets installed one day, the application will break. However if I install everything I need in a virtualenv just for that application, I lose the ability to upgrade Python itself (in case of security updates). The third option would be to rename my fork of the upstream module and just treat it as a completely separate one. But that would mean changing the code (not much work, but it wouldn't be that clean / universal anymore). Are there any other options that I missed? Are there any pros / cons I didn't see in the solutions above?

    Read the article

  • Nodejs for processing js and Nginx for handling everything else

    - by Kevin Parker
    I have Node.js running on port 8000 and nginx on port 80 on the same server. I want Nginx to handle all the requests (images, CSS, etc.) and forward js requests to the Node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it's forwarding every request to Node.js; I want nginx to process everything except js. nginx/sites-enabled/default: upstream nodejs { server localhost:8000; #nodejs } location / { proxy_pass http://192.168.2.21:8000; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; }
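    A hedged sketch of that split (the "only .js goes to Node" rule is taken from the question; the document root is an assumption): serve everything from disk by default and add a regex location that proxies only JavaScript requests to the upstream:

        upstream nodejs {
            server 127.0.0.1:8000;             # the Node.js server
        }
        server {
            listen 80;
            root /var/www/site;                # assumed static document root

            location / {
                try_files $uri $uri/ =404;     # images, CSS, etc. served directly by nginx
            }

            location ~* \.js$ {                # only .js requests reach Node
                proxy_pass http://nodejs;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }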

    Read the article

  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to setup a local YUM repo, but none of them explicitly stated an answer to my question; If I set up a local YUM repo, does that mean that any CentOS servers which pull from said repo will never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can be out of alignment. For example if I run YUM update on my dev server in the morning, then run YUM update on one of my production servers in the afternoon, the production server may have picked up a new version of a package that the dev server did not pick up, due to the time window between the update commands. Rather, I'd prefer that I run yum update from my dev server which has access to remote upstream yum repos, then let it sit for 2 weeks, after which I run yum update on my production servers against the local repo on my dev server.

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service, for which I receive the following error message: root@service1 ~ # -> /etc/init.d/puppetmaster start Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED] Apparently there was a known issue with this scenario as outlined in this bug report, however in the bug report it states the issue has been resolved in selinux-policy-3.9.16-29.fc15, but the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux... ----- Update ----- Here is the puppet.conf: [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. rundir = /var/run/puppet # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl [master] certname=puppetmaster.ownij.lan dns_alt_names=puppetmaster.ownij.lan [agent] # The file in which puppetd stores a list of the classes # associated with the retrieved configuratiion. Can be loaded in # the separate ``puppet`` executable using the ``--loadclasses`` # option. # The default value is '$confdir/classes.txt'. classfile = $vardir/classes.txt # Where puppetd caches the local configuration. An # extension indicating the cache format is added automatically. # The default value is '$confdir/localconfig'. 
localconfig = $vardir/localconfig server=puppetmaster.ownij.lan And here are the denials per the audit log: type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) And the audit log if I pass through audit2allow: root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd module puppetmasterd 1.0; require { type home_root_t; type puppetmaster_t; type puppet_etc_t; type puppet_var_run_t; type httpd_sys_content_t; class lnk_file { relabelfrom relabelto }; class file { relabelfrom read getattr open }; class dir { write read search getattr setattr }; } #============= puppetmaster_t ============== allow puppetmaster_t home_root_t:dir { search getattr }; allow puppetmaster_t httpd_sys_content_t:dir read; allow 
puppetmaster_t httpd_sys_content_t:file { read getattr open }; #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types: # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t allow puppetmaster_t puppet_etc_t:dir { write setattr }; allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto }; allow puppetmaster_t puppet_var_run_t:file relabelfrom;

    Read the article

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200gb of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data into the data centre without rendering the connection unusable during office hours, because there's far too much to send in over night over the two meg upstream connection we have here. I'm familiar with using tools like nice to stop a long running process degrading machine performance - is there a simple way to throttle a connection between office hours, so the long running transfer doesn't block the pipe, but then opens up outside of office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the data centre, but I'd like to avoid doing that if I could. We're using Centos Linux for our servers, in the office and the datacentre, so extra points for an open source linux answer.

    Read the article

  • Nginx fastcgi split path info with mailman

    - by eyadof
    I'm using Mailman with nginx to get its web interface. This is my nginx config: location /cgi-bin/mailman { root /usr/lib/; fastcgi_split_path_info (/cgi-bin/mailman[^/]*)/(.*)$; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $fastcgi_path_info; fastcgi_intercept_errors on; fastcgi_pass unix:/var/run/fcgiwrap.socket; } It seems to work fine when I call mydomain.com/cgi-bin/mailman/listinfo, but when I request something like mydomain.com/cgi-bin/mailman/listinfo/mylist I get a 403, and the nginx error log shows: FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/cgi-bin/mailman/listinfo)" while reading response header from upstream. I tried every regex available but it still gives a 403. Any help or any clue to get it working?
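    A hedged variant that often clears that particular fcgiwrap error: build SCRIPT_FILENAME from $document_root plus only the script portion of the URI, so /mylist ends up in PATH_INFO instead of being appended to the path fcgiwrap tries to chdir into:

        location /cgi-bin/mailman {
            root /usr/lib;
            fastcgi_split_path_info ^(/cgi-bin/mailman/[^/]+)(/.*)$;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME  $document_root$fastcgi_script_name;  # e.g. /usr/lib/cgi-bin/mailman/listinfo
            fastcgi_param PATH_INFO        $fastcgi_path_info;                  # e.g. /mylist
            fastcgi_param PATH_TRANSLATED  $document_root$fastcgi_path_info;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }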

    Read the article

  • BGP router recommendations for simple redundancy [closed]

    - by Jona
    We have two sites that each have an internet connection and a dedicated dark fibre between them. Each site has its own IP space and we have an AS number. We're looking to be resilient to failure of the internet connection at either site and so need to buy a pair of appropriate routers. Requirements are: able to run 2 BGP sessions (one with the ISP, one with the other site's router); the option to take a full table from the upstream ISPs would be nice; able to provide HA gateways on the LAN side (e.g. 192.168.0.254 will automatically migrate if its host router loses power); a dedicated device rather than a server running Linux / BSD; not crazy expensive. Any help / advice much appreciated.

    Read the article

  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of url-shortener service. What happens is that I have some backend app server that takes in a request, does some computation and returns a 301 redirect URL back to an nginx frontend: request ---> nginx ----> app_server What I want to be able to do is cache this returned 301 URL for the same request (a specific URL with a "short code"). Does nginx do this caching automatically? Or should I drop in something like Varnish between nginx and the app_server? I can easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.
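    For reference, nginx does not cache proxied responses at all until a proxy_cache zone is enabled; once it is, 301s can be cached like any other listed status. A hedged sketch (zone name, sizes and backend address are assumptions), which avoids a second hop to the app_server or memcache for repeated short codes:

        proxy_cache_path /var/cache/nginx/shorturls keys_zone=shorturls:10m max_size=256m;

        upstream app_server {
            server 127.0.0.1:8000;             # assumed backend address
        }

        server {
            listen 80;
            location / {
                proxy_cache       shorturls;
                proxy_cache_key   $scheme$host$request_uri;
                proxy_cache_valid 301 1h;      # keep the generated redirects for an hour
                proxy_pass        http://app_server;
            }
        }

    Varnish can do the same job, but it is not required just to cache the redirects.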

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple: upstream lb { server 127.0.0.1:8081; server 127.0.0.1:8082; } server { listen 89; server_name localhost; location / { proxy_pass http://lb; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, which results in a timeout half of the time :( Is there any solution to make nginx automatically route the request to another server when it detects a downed server? Thank you.
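    For what it's worth, the stock upstream module already does passive failover: after max_fails failed attempts within fail_timeout a peer is skipped, and proxy_next_upstream retries the other server for the request that hit the failure; a short connect timeout keeps that first failed attempt cheap. A hedged sketch on top of the config above:

        upstream lb {
            server 127.0.0.1:8081 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:8082 max_fails=2 fail_timeout=10s;
        }
        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                proxy_connect_timeout 2s;      # fail fast on a dead peer
                proxy_next_upstream error timeout http_502 http_503 http_504;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }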

    Read the article

  • Update the remote of a git branch after renaming it

    - by Dror
    Consider the following situation. A remote repository has two branches master and b1. In addition it has two clones repo1 and repo2 and both have b1 checked out. At some point, in repo1 the name of b1 was changed. As far as I can tell, the following is the right procedure to change the name of b1: $ git branch b1 b2 # changes the name of b1 to b2 $ git push remote :b1 # delete b1 remotely $ git push --set-upstream origin b2 # create b2 remotely and direct the local branch to track the remote 1 Now, afterwards, in repo2 I face a problem. git pull doesn't pull the changes from the branch (which is now called remotely b2). The error returned is: Your configuration specifies to merge with the ref 'b1' from the remote, but no such ref was fetched. What is the right way to do this? Both the renaming part and the updating in other clones?

    Read the article

  • How to configure ARR - Application Request Routing - to run both as a web server and as a gateway or proxy?

    - by Different111222
    I have this IIS7.5 with ARR installed and configured to reverse proxy to another server which is running IIS7. On that IIS7.5 I have applications and simple websites installed. Since configuring a farm, the local application doesn't run with this error message: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. Is it even possible to run both application and routing (reverse proxy) at the same time?

    Read the article

  • Nginx save file to local disk

    - by Dean Chen
    My case is: in our China office, we have to access a web server at our USA headquarters over the Internet. But the network is too slow, and we download many big image files, so all our developers have to wait. We want to set up an Nginx instance that acts as a reverse proxy, with its upstream being our USA web server. The question is: can we make Nginx save the image files from the USA web server to its local disk? In other words, let Nginx act as a cache server.
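    A hedged sketch of exactly that (hostname, zone name and sizes are placeholders): proxy_cache keeps a size- and age-bounded copy on local disk; proxy_store is the alternative if permanent mirrored copies are wanted instead.

        proxy_cache_path /var/cache/nginx/us levels=1:2 keys_zone=us_cache:50m
                         max_size=20g inactive=30d;

        upstream usa_web {
            server us.example.com:80;          # placeholder for the USA web server
        }

        server {
            listen 80;
            location / {
                proxy_cache       us_cache;
                proxy_cache_valid 200 30d;     # keep successful responses (the big images) for 30 days
                proxy_set_header  Host $host;  # adjust if the US vhost expects its own name
                proxy_pass        http://usa_web;
            }
        }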

    Read the article
