Search Results

Search found 2488 results on 100 pages for 'listen'.


  • Looking for software to read PDFs/web pages aloud on OS X

    - by Clinton Blackmore
    I am looking for software that will read PDFs and web pages aloud for me under OS X 10.5, preferably something that is free. I am aware that you can make your Mac read to you by pressing a key combination. It is pretty slick, but I really want something that:

    - will allow me to say, "Read this document" and let me skip paragraphs and pause (instead of simply stopping and then restarting from the beginning)
    - will allow me to skip things that aren't relevant, like page headers, footers, and sidebars
    - will allow me to rewind and listen to something again (either to think on it more deeply, or to understand what the text-to-speech engine was trying to say)
    - for a PDF with text in two columns, will let me read just one column at a time. (Right now if I make a selection, it gets both columns and reads from one and then from the other. If I could just select one column and read it, I'd be happier. [IIRC, Apple improved things in Snow Leopard so you can select one column in a PDF.])

    I don't really expect one program to do both PDFs and web pages, but it would be nice.


  • NGINX rewrite for vanity URLs when file doesn't exist (try_files and rewrite together)

    - by user1721724
    I'm trying to get vanity URLs on my server. If the file path from the URL doesn't exist, I want to rewrite the URL to profile.php, but if my users have periods in their usernames, their vanity URLs don't work. Here is my conf block:

        server {
            listen 80;
            server_name www.example.com;
            rewrite ^/([a-zA-Z0-9-_]+)$ /profile.php?url=$1 last;
            root /var/www/html/example.com;
            error_page 404 = /404.php;
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                log_not_found off;
            }
            location ~ \.php$ {
                fastcgi_pass example_fast_cgi;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/example.com$fastcgi_script_name;
                include fastcgi_params;
            }
            location / {
                index index.php index.html index.htm;
            }
            location ~ /\.ht {
                deny all;
            }
            location /404.php {
                internal;
                return 404;
            }
        }

    Any help would be appreciated. Thanks!
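
    One nginx-flavoured sketch of a fix (names kept from the question): the character class in the rewrite simply contains no ".", so usernames like "john.doe" never match. Rather than a bare rewrite, try_files with a named location also keeps real files from being swallowed:

        location / {
            index index.php index.html index.htm;
            # hand anything that is not an existing file or directory
            # to the profile fallback below
            try_files $uri $uri/ @profile;
        }

        location @profile {
            # "." added to the allowed characters so dotted usernames match
            rewrite ^/([a-zA-Z0-9._-]+)$ /profile.php?url=$1 last;
            return 404;
        }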


  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port, behind a subdomain that connects to the correct port when visited by the Minecraft client, or shows a web page when visited by a browser. That is:

        Minecraft:    minecraft.example.com:25565 -> example.com:25465
        Web browser:  minecraft.example.com:80    -> displays HTML page

    I am attempting to do this with the following VirtualHosts in Apache:

        Listen 25565
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    With this configuration, browsing to minecraft.example.com shows the files in the /var/www/example.com/minecraft/ folder, but connecting from Minecraft throws an exception, and browsing to minecraft.example.com:25565 gives a page with the following information:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
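
    A note on why this cannot work as written: mod_proxy's ProxyPass speaks HTTP, but the Minecraft client speaks its own binary protocol, so an HTTP proxy on port 25565 will always mangle the exchange (hence the "invalid response from an upstream server"). Assuming the goal is just to let players use the bare hostname, the usual approach is a DNS SRV record pointing at the real port, leaving Apache out of the game traffic entirely; a sketch:

        ; hypothetical zone-file entry: Minecraft clients (1.3+) look up
        ; _minecraft._tcp on the name they were given and connect to the
        ; port this record names
        _minecraft._tcp.minecraft.example.com. 86400 IN SRV 0 5 25465 example.com.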


  • How to point any *.mydomain variation to localhost (for development)?

    - by user41339
    I am developing a site which will make use of any given [variation of] subdomain name part (that is, the part prefixed before the host name and, optionally, the TLD part). I would imagine that in production this would be an easy feat: make sure the DNS for the second-level domain name part points to an IP, set up an Apache2 virtual host to listen on that (or any) IP, port 80, and just use PHP to make decisions based on the "Host" request header.

    However, currently the site is on localhost, since I am developing it on my workstation, so first I patched /etc/hosts to include:

        127.0.0.1 mydomain

    I only used one name part (arguably a custom TLD) so as to not interfere with Internet domain names. Then I set up a VirtualHost directive for Apache 2.2 like:

        <VirtualHost *:80>
            ServerName mydomain

    But now I can see that e.g. example.mydomain does not point to localhost, meaning the /etc/hosts addition is not effective for "something.mydomain". It appears the entries are taken verbatim, and I have also checked that wildcards like *.mydomain are not allowed. Is there a solution for this?
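
    A sketch of the usual workaround: /etc/hosts has no wildcard support, but a tiny local DNS resolver does. Assuming dnsmasq is available and the system resolver points at 127.0.0.1, one line covers every subdomain, and a ServerAlias catches them all in Apache:

        # /etc/dnsmasq.conf - resolve mydomain and *.mydomain to loopback
        address=/mydomain/127.0.0.1

        # Apache 2.2 vhost - accept any subdomain of the fake TLD
        <VirtualHost *:80>
            ServerName mydomain
            ServerAlias *.mydomain
        </VirtualHost>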


  • Using gitlab behind Apache proxy all generated urls are wrong

    - by Hippyjim
    I've set up GitLab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/ {edit to clarify} As I had Apache installed already, I set up Apache to proxy to the nginx server the package installed, which runs on localhost:8888 (or so I thought).

    The problem is, all images (such as avatars) are now served from http://localhost:8888, and all the checkout URLs GitLab gives are also localhost, instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that URL, then GitLab stops working and gives a 503. Any ideas how I can tell GitLab what URL to present to the world, even though it's really running on localhost?

    /etc/gitlab/gitlab.rb looks like:

        # Change the external_url to the address your users will type in their browser
        external_url 'http://my.local.domain'
        redis['port'] = 6379
        postgresql['port'] = 2345
        unicorn['port'] = 3456

    and /opt/gitlab/embedded/conf/nginx.conf looks like:

        server {
            listen localhost:8888;
            server_name my.local.domain;

    [Update] It looks like nginx is still listening on the wrong port if I don't specify localhost:8888 as the external_url. I found this in /var/log/gitlab/nginx/error.log:

        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: still could not bind()

    The Apache setup looks like:

        <VirtualHost *:80>
            ServerName my.local.domain
            ServerSignature Off
            ProxyPreserveHost On
            AllowEncodedSlashes NoDecode
            <Location />
                ProxyPass http://localhost:8888/
                ProxyPassReverse http://127.0.0.1:8888
                ProxyPassReverse http://my.local.domain
            </Location>
        </VirtualHost>

    which seems to proxy everything back OK if GitLab listens on localhost:8888 - I just need GitLab to start displaying the right URL, instead of localhost:8888.
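
    A sketch of the fix, assuming an omnibus build recent enough to support the nginx listen overrides (these keys have changed names across omnibus releases, so treat them as an assumption to verify): keep external_url set to the public name, and separately pin the bundled nginx to the loopback port Apache proxies to, then reconfigure.

        # /etc/gitlab/gitlab.rb
        external_url 'http://my.local.domain'        # the URL users should see
        nginx['listen_addresses'] = ['127.0.0.1']    # keep bundled nginx off port 80
        nginx['listen_port'] = 8888

        # then apply:
        #   sudo gitlab-ctl reconfigure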


  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn to serve my Django project. All GET requests hang for ~1 min. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data.

    Here's my nginx config:

        # allow for up to 3 connections per second.
        limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/example/;

            # serve directly - analogous for static/staticfiles
            location /media/ {
                # this changes depending on your python version
                root /home/example/;
            }
            location /static/ {
                # if asset versioning is used
                if ($query_string) {
                    expires max;
                }
                root /var/www/example.com;
            }
            location / {
                # Allow for a burst of 50.
                limit_req zone=one burst=50 nodelay;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 10;
                proxy_pass http://localhost:8001/;
            }

            # what to serve if upstream is not available or crashes
            error_page 500 502 503 504 /media/50x.html;
        }

    My Gunicorn config:

        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"

    Is there anything obvious that would be causing the requests to stay open for so long?
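
    One way to narrow this down (a diagnostic sketch, not a known fix): a response that arrives quickly but leaves the browser spinning often lacks a Content-Length or Transfer-Encoding header, so the client keeps waiting for EOF on a kept-alive connection. Comparing the headers through nginx and straight from Gunicorn should show where the framing goes missing:

        # headers as the browser sees them, through nginx
        curl -sv -o /dev/null http://example.com/some-page
        # bypass nginx and ask gunicorn directly
        curl -sv -o /dev/null http://127.0.0.1:8001/some-page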


  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache 2 server, with Tomcat 5 on port 8080 and Apache Solr.)

    I get "The connection has timed out" on requesting domain.com:8080 or www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might be seems not to be the case:

    - Plesk thinks Tomcat is running fine and lists it as an active service.
    - The firewall currently has an accept-all rule on port 8080.
    - There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5) - just some stuff from the last time Tomcat was started. There's no record at all of the requests that fail.
    - netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):

        tcp    0    0 0.0.0.0:8080    0.0.0.0:*    LISTEN    4018/java

    This covers every cause of this timeout that I can find - so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way which would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
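
    One more debugging step worth taking (a sketch; the interface name is an assumption): watch the port at packet level while a remote request times out. If no SYN ever shows up, something upstream of the box (a provider filter, an intermediate firewall) is eating the traffic before Tomcat can see it; if SYNs arrive unanswered, the problem is local after all.

        sudo tcpdump -ni eth0 'tcp port 8080'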


  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server. I haven't found anything that matches my needs. They are either too complicated, too bare-bones or simply buggy as hell. This is (all) I need:

    - It should run on Linux (and also OS X, preferably).
    - It should listen on localhost:8888.
    - When my app (say wget, or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other URL - both http and https), I want it to fetch the page not from Google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd.

    Nothing more! I don't need caching, or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot
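
    For what it's worth, the connection-level half of this is small enough to sketch in Python (an illustration, not a polished server). One caveat the requirement glosses over: SOCKS sits below HTTP, so a SOCKS server can redirect the connection to localhost:4444, but rewriting the URL into /cached?uri=... has to happen inside the local server itself - and is impossible for https, whose payload is encrypted end to end. Sketch, Python 3, no error handling:

        #!/usr/bin/env python3
        # Minimal SOCKS5 sketch: accepts CONNECT requests on localhost:8888
        # and tunnels every connection to a fixed local backend, ignoring
        # the requested destination. Assumption: the backend at
        # 127.0.0.1:4444 decides what to fetch by inspecting the request
        # line / Host header itself.
        import socket
        import socketserver
        import struct
        import threading

        BACKEND = ("127.0.0.1", 4444)  # hypothetical local cache server

        class SocksHandler(socketserver.BaseRequestHandler):
            def handle(self):
                c = self.request
                # greeting: VER, NMETHODS, METHODS... -> "no authentication"
                _ver, nmethods = c.recv(2)
                c.recv(nmethods)
                c.sendall(b"\x05\x00")
                # request: VER CMD RSV ATYP DST.ADDR DST.PORT - read, discard
                _ver, _cmd, _rsv, atyp = c.recv(4)
                if atyp == 1:                 # IPv4 address
                    c.recv(4)
                elif atyp == 3:               # domain name, length-prefixed
                    c.recv(c.recv(1)[0])
                elif atyp == 4:               # IPv6 address
                    c.recv(16)
                c.recv(2)                     # destination port (ignored too)
                remote = socket.create_connection(BACKEND)
                # success reply with a dummy bound address
                c.sendall(b"\x05\x00\x00\x01" + socket.inet_aton("0.0.0.0")
                          + struct.pack(">H", 0))
                # naive bidirectional relay
                threading.Thread(target=self.pipe, args=(remote, c),
                                 daemon=True).start()
                self.pipe(c, remote)

            @staticmethod
            def pipe(src, dst):
                while True:
                    data = src.recv(4096)
                    if not data:
                        return
                    dst.sendall(data)

        class Server(socketserver.ThreadingTCPServer):
            allow_reuse_address = True

        if __name__ == "__main__":
            Server(("127.0.0.1", 8888), SocksHandler).serve_forever()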


  • setting up phpmyadmin with nginx within ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, PHP 5, and phpMyAdmin installs. I'm being directed to use the block below by the blog guide I'm following, but I'm not sure what to change to get it to point how I'm wanting it to.

        server {
            listen 80;
            server_name php.example.com; # <- I know I need to edit this, but not sure to what.
            access_log /var/log/nginx/localhost.access.log;
            root /usr/share/phpmyadmin;
            index index.php;

            location / {
                try_files $uri $uri/ @phpmyadmin;
            }
            location @phpmyadmin {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_NAME /index.php;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include fastcgi_params;
            }
        }
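
    Since the aim is a path on localhost rather than a separate hostname, one sketch (assuming the stock /usr/share/phpmyadmin location and php5-fpm on 127.0.0.1:9000) serves it under /phpmyadmin of the default server instead of in its own server block:

        server {
            listen 80;
            server_name localhost;

            # map /phpmyadmin/* onto the phpMyAdmin install
            location /phpmyadmin {
                alias /usr/share/phpmyadmin;
                index index.php;
            }

            # run its PHP files through FPM, capturing the script name
            location ~ ^/phpmyadmin/(.+\.php)$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/$1;
                fastcgi_pass 127.0.0.1:9000;
            }
        }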


  • Dovecot authentication not working

    - by user1488723
    I run an Ubuntu 10.04 VPS with Postfix and Dovecot installed. For a while I had problems with the mail server itself (Postfix), but now it runs OK. I can telnet into it from localhost (telnet localhost 25 while logged in) and I'm blocked if I try to do it from the outside (telnet mail.example.org 25). This is as it should be according to my main.cf.

    However, when I try to log in using Dovecot (openssl s_client -connect mail.example.com:993) I'm allowed in, but denied when trying to identify myself as a user. Excerpt from the Dovecot login:

        Key-Arg   : None
        Start Time: 1341074622
        Timeout   : 300 (sec)
        Verify return code: 18 (self signed certificate)
        OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.

    When I continue and try to log in to a specific user with the command:

        A001 login user password

    I get:

        A001 NO [AUTHENTICATIONFAILED] Authentication failed.

    I've reset the password to ensure it is correct, and I know the user (user) exists on the system. When I do /etc/init.d/dovecot reload I get:

        /etc/init.d/dovecot: 29: maildir:~/Maildir: not found
        * Reloading IMAP/POP3 mail server dovecot    [ OK ]

    Could it be that the mailboxes aren't found? Postfix main.cf:

        home_mailbox = Maildir/
        mailbox_command =
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_loglevel = 1
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain = $mydomain

    Dovecot.conf:

        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/Maildir
        auth_verbose = yes
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz0123456789
        protocol imap {
            imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
            mechanisms = plain login
            passdb pam {
            }
            userdb passwd {
            }
            socket listen {
                client {
                    path = /var/spool/postfix/private/auth
                    user = postfix
                    group = postfix
                    mode = 0660
                }
            }
        }
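
    Two things worth checking (suggestions, not a confirmed diagnosis). The init-script complaint about "maildir:~/Maildir: not found" usually means a config line is being parsed by the shell rather than by Dovecot - often a malformed line in /etc/default/dovecot or a missing "=" somewhere - so the running daemon may not even be using the file shown above. And for the login failure itself, Dovecot 1.x can log the authentication conversation in detail:

        # add to dovecot.conf, then reload and retry the IMAP login
        # while watching /var/log/mail.log
        auth_debug = yes
        auth_debug_passwords = yes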


  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now... we are getting new air conditioning installed. This takes 36 hours, and for that time almost all servers get shut down; only 2 servers keep running for the most important tasks (i.e. accepting incoming email, delivering some important websites, login server). Everybody was informed that if they need data from their home directories, they should fetch it before the shutdown.

    Long story short: someone realized that he needs to run a certain program on one of the servers. No problem: he can log in remotely to our login server and run the program there without a home directory (the binaries are local, and the necessary information can be copied to /tmp). That works like a charm, until... the user needs to run a GUI program.

    I find no easy way to make it run. Usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, all X servers listen only locally, not on TCP ports, so no remote X connection is possible, and the SSH config is waterproof - i.e. no way to set environment variables.

    My problem is that the proxy MIT cookie generated by ssh gets lost because .Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into a .Xauthority in /tmp. The only other option (besides changing the config) which came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure the TCP / Unix-domain-socket stuff works as expected). Any good suggestions (for the future - now our servers are already up)?
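
    A sketch for next time (paths and username are hypothetical): give affected accounts a temporary local home on the login server, so sshd's xauth step has a writable place for the cookie even while the file server is down.

        # on the login server, before the maintenance window
        sudo mkdir -p /local/home/honk
        sudo chown honk: /local/home/honk
        sudo usermod -d /local/home/honk honk   # revert after maintenance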


  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include and include_once do not function - or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }
            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    The problem I'm finding, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy              (file doesn't exist)  -> index.php?name=$1
        domain.com/admin/anyfile.php  (files DO exist)      -> admin/anyfile.php?$args
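
    A sketch of the usual front-controller arrangement (keeping the parameter name from the question): let try_files make the exists/doesn't-exist decision, and keep the PHP location free of rewrites so real scripts in sub-folders go straight through:

        location / {
            # existing files and directories are served as-is; anything
            # else becomes a query against index.php
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            try_files $uri =404;            # real .php files only
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }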


  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80
        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is, every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny-all. This seems wrong. Shouldn't the second block match www.domain.tld?

    I've been able to resolve this, but I still don't understand why. In my original configs I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
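
    When vhost matching behaves surprisingly like this, Apache can report its own view of the mapping, which usually makes address/name mismatches (e.g. NameVirtualHost on * while a vhost is bound to the raw IP) obvious:

        # dump the parsed vhost configuration: which address:port sets are
        # name-based and which ServerName/ServerAlias each vhost carries
        apachectl -S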


  • cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    2 AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in VPC; both have an Elastic IP; both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node. The instance in the VPC is running munin-cron.

    I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node I added an allow line with the IP address (regular expression) of the munin server to /etc/munin/munin-node.conf. I bind munin-node to any interface using host *. Then I did sudo service munin-node restart, and ran netstat:

        $ sudo netstat -at | grep munin
        tcp    0    0 *:munin    *:*    LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open  http
        4949/tcp open  munin

    So from the outside, the http port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
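
    One thing the checks above never rule out is a host-level firewall: a security group silently drops packets ("filtered" in nmap terms), whereas "closed" means something actively answered - usually a RST from the host itself - which points at the instance rather than at AWS. A quick sketch to check:

        # a REJECT rule here (especially --reject-with tcp-reset) makes
        # the port look "closed" even though the daemon is listening
        sudo iptables -L INPUT -n -v --line-numbers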


  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have 1 application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, thus I used the following configuration in httpd.conf:

        <Directory />
            #Options FollowSymLinks
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

        Listen 57173
        LoadModule proxy_module modules/mod_proxy.so

        <VirtualHost *:9999>
            ProxyPreserveHost On
            ServerName project.play
            ProxyPass / http://127.0.0.1:9000/Login
            ProxyPassReverse / http://127.0.0.1:9000/Login
            LogLevel debug
        </VirtualHost>

        ServerName localhost:57173

    I changed /private/etc/hosts to:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I tried accessing http://project.play:9999, Chrome returned "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much.

    P.S. Accessing localhost:9999 returns "The server made a boo boo."
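
    Two concrete problems stand out (offered as strong suspicions rather than a tested fix). First, hosts-file entries map names to addresses only; a port component makes the whole line invalid, which is exactly why the name never resolves. Second, a <VirtualHost *:9999> needs a matching Listen 9999 before Apache will accept connections on that port:

        # /etc/hosts - no port on the address
        127.0.0.1   project.play

        # httpd.conf - listen on both ports
        Listen 57173
        Listen 9999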


  • nginx - Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL. I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/. I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

        server {
            listen 80;
            server_name domain.com mirror.domain.com;
            root /rails_apps/master/public;
            passenger_enabled on;

            # Redirect from www to non-www
            if ($host = 'domain.com') {
                rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
            }

            location /assets/ {
                expires 1y;
                rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
            }

            # / -> index.html
            if (-f $document_root/cache/$host$uri/index.html) {
                rewrite (.*) /cache/$host$1/index.html break;
            }

            # /about -> /about.html
            if (-f $document_root/cache/$host$uri.html) {
                rewrite (.*) /cache/$host$1.html break;
            }

            # other files
            if (-f $document_root/cache/$host$uri) {
                rewrite (.*) /cache/$host$1 break;
            }
        }

    How would I modify this to add the trailing slash? I would assume there has to be a check for the slash, so that you don't end up with domain.com/some-random-article//
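
    A commonly used sketch for this: one rewrite near the top of the server block that only fires when the URI contains no dot (so asset files are left alone) and doesn't already end in a slash - which also answers the double-slash worry, since a URI ending in "/" simply never matches:

        # no "." anywhere, last character not "/": append the slash
        rewrite ^([^.]*[^/])$ $1/ permanent;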


  • Node.js server not responding outside localhost on CentOS

    - by David Martinez
    I'm running a basic Express server on CentOS, but for some reason it is not responding outside of localhost. I have tried everything I have found on Google, but nothing works so far. This is my Express server:

        app.listen(3000, "0.0.0.0");

    If I do curl http://localhost:3000/ on the server, it works fine. If I curl to the IP of the server, it doesn't work. I already changed my iptables:

        num  target  prot opt source     destination
        1    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        2    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        3    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:3000

    There is currently an Apache server running on port 80 with no problems. I also tried setting a VirtualHost on Apache, but it didn't work either:

        <VirtualHost *:80>
            ServerName SubDOmain.MyDomain.com
            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:3000/
            ProxyPassReverse / http://localhost:3000/
            ProxyPreserveHost on
        </VirtualHost>

    There is another virtual host working fine that redirects to another DocumentRoot. I'm running Node as root for testing purposes, but the Node application owner is another user. All folders have 705 and files 664.

    Edit: I stopped Apache and ran my Node app on port 80, and it worked fine; I could access the Node app from my IP and domain.
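
    The numbered listing above doesn't show where rule 3 sits relative to any catch-all REJECT line, and an ACCEPT appended after a REJECT never matches - which would square with port 80 (rule 1) working while 3000 doesn't. A sketch of the usual remedy:

        # insert ahead of everything, then persist across restarts
        sudo iptables -I INPUT 1 -p tcp --dport 3000 -j ACCEPT
        sudo service iptables save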


  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed, but it has been several months, so it may be time to revisit it (see the earlier discussion re Splunk alternatives).

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter. The Splunk cost is simply too high (by a multiple).

    Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!


  • Ubuntu 12.04 open port 80 inside WLAN

    - by Eduard
    I have an nginx server running on Ubuntu 12.04 that serves HTTP on port 80 and HTTPS on port 443. Everything works fine if I access it from the same computer via localhost, 127.0.0.1 or the local IP 192.168.0.11. If I try to access the server from another computer in the same VLAN, it does not work for HTTP; it works for HTTPS.

    I have changed my nginx configuration to also listen on port 8000 for HTTP; I can then access HTTP from the other computer in the same VLAN via http://192.168.0.11:8000. I also have a web server running on port 80 on a Windows machine and can access it from another device in the same VLAN, therefore the router is not blocking incoming HTTP traffic. The nginx process is run by root. I have used tcpdump, and I see that packets are arriving at the Ubuntu box:

        192.168.0.16.49735 > 192.168.0.11.80

    and that some response is being given:

        192.168.0.11.80 > 192.168.0.16.49735

    (I do not know what the response is, though.) There is no request arriving at the nginx web server (I have checked the access log). I have iptables empty. I have unsuccessfully tried to find a solution for a long time; it has now become a matter of happiness or bitterness :).
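
    Since something on port 80 is demonstrably answering but nginx never logs the request, the next step is to see what is actually talking - both the payload on the wire and which process owns the socket (a diagnostic sketch):

        # print the response bytes in ASCII to see what is answering
        sudo tcpdump -A -ni any 'tcp port 80'
        # list the process bound to :80 - it may not be nginx at all
        sudo netstat -tlnp | grep ':80 '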


  • Nginx - Serve blank page on "Bad Gateway" error

    - by TheLittleCheeseburger
    Hello all. I want to use nginx as a simple reverse proxy, but if the server behind nginx is down, I just want to display a blank page. For some reason this configuration isn't displaying a blank page on a 502 error, and I can't figure out why. Thanks for your help!

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            # multi_accept on;
        }

        http {
            keepalive_timeout 65;
            proxy_read_timeout 200;

            upstream tornado {
                server 127.0.0.1:8001;
            }

            server {
                listen 80;
                server_name www.something.com;

                location / {
                    error_page 502 = @blank;
                    proxy_pass http://tornado;
                }

                location @blank {
                    index index.html;
                    root /web/blank;
                }
            }
        }
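
    Two details in that config are worth suspecting (a sketch under those assumptions, not a verified fix): the index directive works via an internal redirect, so @blank handling "/" likely redirects to /index.html, re-matches location /, and gets proxied - and 502s - all over again; and if the backend is up but itself returns a 502, nginx passes it through unless proxy_intercept_errors is on. Pointing the error page at an explicit file sidesteps both:

        location / {
            proxy_pass http://tornado;
            proxy_intercept_errors on;        # also catch 502s the backend emits
            error_page 502 =200 /blank.html;  # serve the page with a 200 status
        }

        location = /blank.html {
            root /web/blank;                  # expects /web/blank/blank.html
        }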


  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server set up for handling long-polling, WebSockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple Gunicorn 'hello world' application with (async) gevent workers. In front of Gunicorn, I put nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, where nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;

            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }

            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;

                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running Gunicorn over a Unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw Gunicorn's performance.
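
    Some overhead from the extra hop is unavoidable, but one tunable stands out: with the config above, nginx opens a fresh upstream connection per request (it speaks HTTP/1.0 to the backend by default), so Gunicorn's --keep-alive 60 never applies on that leg. A sketch of connection reuse toward the upstream (requires nginx 1.1.4 or later):

        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
            keepalive 32;                     # pool of idle upstream connections
        }

        location / {
            proxy_http_version 1.1;           # keep-alive needs HTTP/1.1
            proxy_set_header Connection "";   # strip the default "close"
            proxy_pass http://app_server;
        }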


  • nginx phpmyadmin 404

    - by borannb
    I am trying to install phpMyAdmin on my nginx web server. I installed phpMyAdmin without a problem and created a subdomain for it. For security reasons I didn't call my subdomain "phpmyadmin"; I used a different name. Then I used this config for my subdomain:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            access_log off;
            error_log /srv/www/myphpmyadminsubdomain/error.log;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                fastcgi_pass php;
            }

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }
            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }
            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }
        }

    Then I enabled it like this:

        /etc/nginx/sites-available/myphpmyadminsubdomain
        /etc/nginx/sites-enabled/myphpmyadminsubdomain

    I have restarted nginx and gone to myphpmyadminsubdomain.domain.com, and it is giving me an nginx 404 Not Found error. What am I doing wrong?
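
    A likely culprit (offered as a strong suspicion, not a tested fix): root is declared only inside location /, so when nginx internally redirects "/" to /index.php, the PHP location inherits the server-level document root - which was never set - and its try_files $uri =404 fires. Hoisting root to the server level is the usual cure:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            root /usr/share/phpmyadmin;   # visible to every location, incl. ~ \.php$
            index index.php;

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass php;         # assumes an "upstream php { ... }" exists
            }
        }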


  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using nginx as a proxy in front of 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example: http://www.webpagetest.org/result/101020_8JXS/1/details/

    Here is my nginx conf:

        user www-data;
        worker_processes 4;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_proxied any;
            server_names_hash_bucket_size 128;
        }

        upstream abc {
            server 1.1.1.1 weight=1;
            server 1.1.1.2 weight=1;
            server 1.1.1.3 weight=1;
        }

        server {
            listen 443;
            server_name blah;
            keepalive_timeout 5;

            ssl on;
            ssl_certificate /blah.crt;
            ssl_certificate_key /blah.key;
            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            ssl_prefer_server_ciphers on;

            location / {
                proxy_pass http://abc;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
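
    A full handshake costs at least two extra round trips, so part of the 600 ms may simply be network latency; to separate network time from server-side crypto time, it helps to measure handshakes in isolation and inspect the certificate chain (a diagnostic sketch - the hostname is a placeholder):

        # repeated full handshakes per second (-new disables session reuse)
        openssl s_time -connect blah:443 -new -time 10
        # dump the chain; an incomplete or very large chain adds round trips
        openssl s_client -connect blah:443 -showcerts < /dev/null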


  • Server doesn't produce SYN-ACK

    - by steve
    I have a small program that takes packets from the nfqueue, changes ip.dst to my server's address (and the TTL), recalculates the checksum, and returns the packet to the nfqueue. The server and the client are both Linux; an Apache web server runs on the server and listens on port 80.

    I open telnet on the client to a fake IP on port 80. The packet is changed by my program and sent to the server, but the target server (the new dst IP) gets the SYN and doesn't generate a SYN-ACK (the server also belongs to me, so I can see that it gets the SYN with a correct checksum but doesn't generate a SYN-ACK).

    If I do the same but with the real server IP as the destination, the TCP handshake completes correctly (in this case I just change the TTL and checksum; the change I make to the TTL is just a test to confirm that my checksum calculation is OK).

    I compared the SYNs but didn't find any difference. Any idea?

    P.S. I saw this topic: "Server not sending a SYN/ACK packet in response to a SYN packet", and I set all the flags the same, but that didn't help.
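
    One detail that bites almost everyone doing this kind of rewriting: the TCP checksum covers a pseudo-header that includes the source and destination IP addresses, so changing ip.dst invalidates the TCP checksum even when the IP header checksum is recalculated correctly - and the TTL-only test never exercises this, since TTL is outside the pseudo-header. The receiving kernel silently drops segments with a bad TCP checksum (no RST, no log), which matches the symptom. A quick check on the server:

        # -vv makes tcpdump verify and print both checksums;
        # look for "cksum ... (incorrect)" on the rewritten SYN
        sudo tcpdump -ni eth0 -vv 'tcp port 80'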


  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for use with Internet and IP TV (on different VLANs, respectively, so my ISP tells me). To use a port you need to register your device's MAC address through an online interface; your device is then paired with a static IP.

    I am using one port actively, and I have registered another device (a router). The router is configured to listen for an active DHCP server on the network. When my router gets a lease, I get a private IP, 192.168.2.2 (not the one bound to my MAC), which is odd! I disconnected my router from the gateway and connected my laptop directly. The same thing happened - I was given a private address.

    I did a port scan on the gateway, found port 80 open, and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (HMMM!!!!) <-- by the way, not my gear.

    At this point I called the ISP to let them know of my issue/findings, only to be told, "Well, we can't see any rogue DHCP servers." (Thinking to myself: well, I can.) I then decided it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to set up an observer (wireshark, promiscuous mode, etc.). I am able to see my router trying to discover a DHCP server, but I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern. Hello... Hello... Hello :)

    So I called them again, only to get the same response: "We don't see any rogue DHCPs... we can't see the host you are talking to (the MAC address of the Belkin router)... you are definitely connected through wireless?!?" (No I'm not - no such thing as a wireless wire, I thought to myself.)

    My questions are: What is going on (besides what I'm reporting here)? What am I seeing that they don't? What can I tell them in order for them to resolve mine/their issue?
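
    To give the ISP something they cannot wave away, it may help to capture hard evidence of the rogue DHCP server - its MAC address, the offered range and gateway - rather than describing it. A sketch (assumes nmap is installed on the laptop connected to the gateway port):

        # broadcast a DHCP discover and print every server that answers,
        # including the server identifier and offered lease details
        sudo nmap --script broadcast-dhcp-discover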

