Search Results

Search found 19788 results on 792 pages for 'remote host'.

Page 396/792

  • Wordpress domain mapping error but only from my connection

    - by Michael Shimmins
    Recently, when trying to access a WordPress blog from my network, I get this error: "Warning! Domain mapping upgrade for this domain not found. Please log in and go to the Domains Upgrades page of your blog to use this domain." If I request the same URL from another connection (i.e. remote desktop into another network and try from there), it works perfectly. Any suggestions on why the site errors from my network but loads from others?
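
    Since the page works from other connections, comparing DNS answers is a cheap first check. A minimal sketch, assuming dig is available and using a hypothetical blog hostname:

      dig +short blog.example.com            # answer from your network's resolver
      dig +short blog.example.com @8.8.8.8   # answer from a public resolver
      # differing results point at local DNS (or a stale hosts-file entry)
      # rather than the WordPress domain-mapping plugin itself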


  • How to fix SCRIPT_NAME with PHP-FPM and Apache's mod_fastcgi?

    - by Kyle MacFarlane
    I have the following in my Apache conf to get PHP-FPM working:

      FastCgiExternalServer /srv/www/fast-cgi-fake-handler -host 127.0.0.1:9000
      AddHandler php-fastcgi .php
      AddType text/html .php
      Action php-fastcgi /var/www/cgi-bin
      Alias /var/www/cgi-bin /srv/www/fast-cgi-fake-handler
      DirectoryIndex index.php

    This works fine except that SCRIPT_NAME is always /var/www/cgi-bin, and some scripts use SCRIPT_NAME to work out the location of the current script (vBulletin). Google has plenty of solutions for Nginx but not a word for Apache.
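
    mod_fastcgi has no equivalent of Nginx's fastcgi_param override, so one workaround is to rebuild SCRIPT_NAME inside PHP before any script runs. A sketch, assuming PHP-FPM reads /etc/php5/fpm/php.ini and that REQUEST_URI still carries the original path (both assumptions):

      # write a tiny prepend script that restores SCRIPT_NAME from the request path
      printf '%s\n' '<?php' \
        '$_SERVER["SCRIPT_NAME"] = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);' \
        > /srv/www/fix-script-name.php
      # load it before every script (the php.ini path is an assumption)
      echo 'auto_prepend_file = /srv/www/fix-script-name.php' >> /etc/php5/fpm/php.ini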


  • Windows 7: The audio service is not running?

    - by oscilatingcretin
    Why isn't my audio service running? It's so random: sometimes I am able to change my PC volume, other times I am not. I've restarted the following services: Remote Procedure Call (RPC), Multimedia Class Scheduler, and Windows Audio Endpoint Builder. I think I started noticing this after I went mucking around in msconfig looking for ways to improve PC performance. I know I shut off some services, but that was over six months ago, and I am finally getting tired of it happening.


  • Configuring an entirely virtual network in VMware Workstation

    - by MRB
    Hello everyone! Could you help me configure an entirely virtual network in VMware Workstation 6.5, with a Windows Server 2003 server and Windows XP workstations, all virtual? Another thing: through the Windows Server 2003 machine I want to provide Internet access to the Windows XP workstations. I'd like to simulate a company network at home, but with everything virtual. My host is Windows XP. Thanks!


  • Serving an upload- and streaming-intensive video and audio site

    - by Pollux Khafra
    I'm about to launch a new site that lets users upload and stream both audio and video, and I don't know anything about the server side of things. My original plan was to just use a dedicated server through Hostgator, but from what I'm reading, cloud hosting or a load-balanced cluster is the best way to go for what I'm trying to do. All the articles seem to have an agenda to sell you on an affiliate web host, so how do I really need to do this?


  • What does the ldapsearch response mean?

    - by Martijn Burger
    I created an LDAP directory with a number of users and groups. When I query this directory from a remote server with

      ldapsearch -H ldap://ldap.myserver.net/ -x -vvvvvvv -b dc=myserver,dc=net -D cn=admin,dc=myserver,dc=net -W

    I get all objects in the directory returned. The result finishes with the following:

      # search result
      search: 2
      result: 0 Success

      # numResponses: 85
      # numEntries: 84

    What do these numbers mean exactly?
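
    As a rule of thumb (based on typical OpenLDAP behaviour, worth verifying against RFC 4511): numEntries counts the SearchResultEntry messages, i.e. the 84 objects returned, while numResponses counts every message received including the final SearchResultDone, hence 85. Capping the result size shows the relationship directly:

      # ask for at most 5 entries; expect numEntries: 5 and numResponses: 6
      ldapsearch -H ldap://ldap.myserver.net/ -x -b dc=myserver,dc=net \
        -D cn=admin,dc=myserver,dc=net -W -z 5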


  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my internal network without having to use the option "Use default gateway on remote network" in the VPN's TCP/IPv4 advanced settings. This is important to me, since they can then use their own internet connection while accessing my network through the VPN (the security implications of this are acceptable in my scenario). My internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the internal network's DHCP to the VPN clients, so they get IPs from the same range as the internal network.

    This setup initially works: the VPN clients use their own internet and can access anything on the internal network. However, after a while, HTTP proxy traffic from the internal network starts getting routed to the IP of the RRAS dial-in interface instead of the IP of the internal network's gateway. When this happens, the HTTP proxy traffic starts getting denied, for obvious reasons.

    My first question: does this happen because Forefront TMG wasn't designed to handle the scenario described above, and it "loses itself"? My second question: is there any way to solve this problem, either through configuration or firewall policies? My third question: if there's no way to make this scenario work, is there another scenario that will solve my problem and do what I'd like it to do properly?

    Below are my network routes:

      1 => Local Host Access => Route => Local Host => All Networks
      2 => VPN Clients to Internal Network => Route => VPN Clients => Internal
      3 => Internet Access => NAT => Internal, Perimeter, VPN Clients => External
      4 => Internal to Perimeter => Route => Internal, VPN Clients => Perimeter

    Thanks!


  • Multiple subdomains resolving to one domain?

    - by Matt
    I'm trying to host an application but allow each customer to use their own subdomain so that it does not confuse their customers. For example, my application runs on app.mydomain.com, but my customer would access it via their own subdomain (app.somedomain.com and/or app.anotherdomain.com). Is this even possible? (Possibly relevant information: my application is hosted on Media Temple and uses Apache.) Thanks!
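
    It is possible with name-based virtual hosting. A minimal sketch, assuming the customer can create a CNAME at their DNS provider and that you control the Apache vhost (hostnames are examples):

      # at the customer's DNS provider:
      #   app.somedomain.com.    CNAME   app.mydomain.com.
      # in your Apache configuration, answer for the extra names:
      <VirtualHost *:80>
          ServerName app.mydomain.com
          ServerAlias app.somedomain.com app.anotherdomain.com
          DocumentRoot /srv/www/app
      </VirtualHost>

    The application also needs to treat the Host header as the tenant key rather than assuming its own domain.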


  • VMWare Server with multiple processors

    - by user43046
    I have a new Linux machine with two Core Duo CPUs. However, VMware Server only recognizes one. In the host summary it shows:

      Processors: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz
      1 CPU x 2 Cores

    On another machine, it shows:

      Intel(R) Core(TM)2 Quad CPU Q6700 @ 2.66GHz
      1 CPU x 4 Cores

    That machine also has two CPUs. How come VMware is not seeing all the CPUs?


  • Allow SFTP to a single folder not in home directory

    - by Brandon
    I have a web server that I use to host my websites, all in folders under /srv/www. There is a WordPress site for which I want to give another developer SFTP access. How would I go about doing this so that they only have access to /srv/www/thesite.com and not any other directories? Running Ubuntu 9.04.
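
    OpenSSH's built-in chroot support covers this. A minimal sketch, assuming OpenSSH 4.9 or newer and using example user/group names:

      # create a dedicated group and user (names are examples)
      sudo groupadd sftponly
      sudo useradd -g sftponly -s /usr/sbin/nologin devuser
      # in /etc/ssh/sshd_config:
      #   Subsystem sftp internal-sftp
      #   Match Group sftponly
      #       ChrootDirectory /srv/www/thesite.com
      #       ForceCommand internal-sftp
      #       AllowTcpForwarding no
      sudo /etc/init.d/ssh restart

    Note that OpenSSH requires the chroot directory itself to be root-owned and not group- or world-writable, so uploads have to go into a writable subdirectory beneath it.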


  • DNS works with the IP but not with the NS name (CentOS + Bind9)

    - by Borislav Yordanov
    I am having a headache with DNS. Let's say my public IP is 1.2.3.4, my local IP is 192.168.0.10 and my domain is example.com. I am running CentOS on a virtual machine (Parallels Desktop for Mac) with a LAN card reserved for it, so it gets an IP directly from the router. I have ports 80, 443 and 53 forwarded to 192.168.0.10. Both the Mac OS and CentOS firewalls are off. The strange thing is that when I type dig @1.2.3.4 example.com from my other PC I get:

      ; <<>> DiG 9.8.3-P1 <<>> @1.2.3.4 example.com
      ; (1 server found)
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16941
      ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
      ;; WARNING: recursion requested but not available

      ;; QUESTION SECTION:
      ;example.com.                IN      A

      ;; ANSWER SECTION:
      example.com.         86400   IN      A       1.2.3.4

      ;; AUTHORITY SECTION:
      example.com.         86400   IN      NS      ns2.example.com.
      example.com.         86400   IN      NS      ns1.example.com.

      ;; ADDITIONAL SECTION:
      ns1.example.com.     86400   IN      A       1.2.3.4
      ns2.example.com.     86400   IN      A       1.2.3.4

      ;; Query time: 8 msec
      ;; SERVER: 1.2.3.4#53(1.2.3.4)
      ;; WHEN: Sat Nov  2 09:37:36 2013
      ;; MSG SIZE  rcvd: 109

    but when I type dig @ns1.example.com example.com it waits a few seconds and returns:

      dig: couldn't get address for 'ns1.example.com': not found

    This is my config file, /etc/named.conf:

      options {
          listen-on-v6 { none; };
          directory "/var/named";
          dump-file "/var/named/data/cache_dump.db";
          statistics-file "/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/data/named_mem_stats.txt";
          allow-query { localhost; 192.168.0.0/24; };
          allow-transfer { localhost; 192.168.0.0/24; };
          recursion yes;
          dnssec-enable yes;
          dnssec-validation yes;
          dnssec-lookaside auto;
          bindkeys-file "/etc/named.iscdlv.key";
          managed-keys-directory "/var/named/dynamic";
      };

      logging {
          channel default_debug {
              file "data/named.run";
              severity dynamic;
          };
      };

      # change all from here
      view "internal" {
          match-clients { localhost; 192.168.0.0/24; };
          zone "." IN {
              type hint;
              file "named.ca";
          };
          zone "example.com" IN {
              type master;
              file "example.com.zone";
              allow-update { none; };
          };
          zone "0.168.192.in-addr.arpa" IN {
              type master;
              file "0.168.192.in-addr.arpa";
              allow-update { none; };
          };
          include "/etc/named.rfc1912.zones";
          include "/etc/named.root.key";
      };

      view "external" {
          match-clients { any; };
          allow-query { any; };
          recursion no;
          zone "example.com" IN {
              type master;
              file "example.com.zone";
              allow-update { none; };
          };
          zone "4.3.2.1.in-addr.arpa" IN {
              type master;
              file "4.3.2.1.in-addr.arpa";
              allow-update { none; };
          };
      };

    /var/named/example.com.zone:

      $TTL 86400
      @   IN  SOA ns1.example.com. host.example.com. (
                  2013042201  ; Serial
                  3600        ; Refresh
                  1800        ; Retry
                  604800      ; Expire
                  86400 )     ; Minimum TTL
      ; Specify our two nameservers
          IN  NS  ns1.example.com.
          IN  NS  ns2.example.com.
      ; Resolve nameserver hostnames to IP, replace with your two droplet IP addresses.
      ns1     IN  A   1.2.3.4
      ns2     IN  A   1.2.3.4
      ; Define hostname -> IP pairs which you wish to resolve
      @       IN  A   1.2.3.4
              IN  A   1.2.3.4
      www     IN  A   1.2.3.4
      server2 IN  A   192.168.0.2
      *       IN  A   1.2.3.4

    /var/named/4.3.2.1.in-addr.arpa:

      $TTL 2d  ; 172800 seconds
      $ORIGIN 4.3.2.1.IN-ADDR.ARPA.
      @   IN  SOA ns1.example.com. host.example.com. (
                  2013010304  ; serial number
                  3h          ; refresh
                  15m         ; update retry
                  3w          ; expiry
                  3h )        ; nx = nxdomain ttl
          IN  NS  ns1.example.com.
          IN  NS  ns2.example.com.
          IN  PTR example.com.
      ; etc

    /var/named/0.168.192.in-addr.arpa:

      $TTL 2d  ; 172800 seconds
      $ORIGIN 0.168.192.IN-ADDR.ARPA.
      @   IN  SOA ns1.example.com. host.example.com. (
                  2013010304  ; serial number
                  3h          ; refresh
                  15m         ; update retry
                  3w          ; expiry
                  3h )        ; nx = nxdomain ttl
          IN  NS  ns1.example.com.
          IN  NS  ns2.example.com.
      10  IN  PTR example.com.
      2   IN  PTR server2.example.com
      ; etc

    I will be very glad if someone can help me. Thank you in advance.
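
    A quick way to check whether the failure is delegation rather than BIND itself (a hedged troubleshooting sketch, run from the outside PC; names are the anonymized examples above):

      dig +trace example.com NS
      # if the trace never hands out A records (glue) for ns1/ns2, the registrar's
      # host records are missing, and "couldn't get address for ns1.example.com"
      # is expected regardless of the server's own config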


  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS and DHCP server, assigned a static IP on the network. This server is connected directly to the internet via a BT Business Hub router, and also to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server. Everything was plain sailing until the other day, when the server was restarted. For some reason, only portions of the network are now reachable, depending on which ethernet device was last restarted. Restarting net.eth0 allows the server to cURL, ping, etc., but stops all networked PCs from accessing the internet. Restarting net.eth1 restores internet to the network but stops the server from cURLing, pinging, etc. again. However, even when the server can't ping or cURL, I can still SSH and make remote MySQL connections from the server command line to other external servers that we own.

    Here's my route map (the router is 192.168.1.254):

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
      192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
      169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
      127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
      0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

      iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
      iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

      Chain INPUT (policy ACCEPT)
      target  prot opt source           destination
      DROP    tcp  --  ##.##.##.##      anywhere      tcp dpt:ssh
      ACCEPT  all  --  anywhere         anywhere      state RELATED,ESTABLISHED
      ACCEPT  all  --  anywhere         anywhere
      ACCEPT  tcp  --  anywhere         anywhere      tcp dpt:2199
      ACCEPT  tcp  --  anywhere         anywhere      tcp dpt:3199
      ACCEPT  tcp  --  ##.###.###.##    anywhere      tcp dpt:http
      ACCEPT  tcp  --  ###.###.##.##    anywhere      tcp dpt:2199
      ACCEPT  tcp  --  ##.###.###.###   anywhere      tcp dpt:http
      ACCEPT  tcp  --  ##.###.##.##     anywhere      tcp dpt:http
      ACCEPT  tcp  --  ##.###.###.###   anywhere      tcp dpt:3128
      ACCEPT  udp  --  ##.###.###.###   anywhere      udp dpt:3128
      ACCEPT  tcp  --  ##.###.###.###   anywhere      tcp dpt:http
      ACCEPT  tcp  --  ##.###.###.###   anywhere      tcp dpt:https

      Chain FORWARD (policy ACCEPT)
      target  prot opt source           destination
      ACCEPT  all  --  anywhere         ##.###.###.##
      DROP    all  --  anywhere         ##.###.###.##
      ACCEPT  all  --  anywhere         anywhere      state NEW,ESTABLISHED

      Chain OUTPUT (policy ACCEPT)
      target  prot opt source           destination
      ACCEPT  udp  --  anywhere         anywhere      udp spt:2199
      ACCEPT  udp  --  anywhere         anywhere      udp spt:4817
      ACCEPT  udp  --  anywhere         anywhere      udp spt:4819
      ACCEPT  udp  --  anywhere         anywhere      udp spt:3199

    Help gratefully appreciated.
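
    One detail worth noting in that routing table: both eth0 and eth1 carry a route for 192.168.1.0/24, so whichever interface's route sorts first wins. A quick, non-destructive way to see which device the kernel actually picks (iproute2 assumed installed):

      # ask the kernel which route/interface it would use for an outside address
      ip route get 8.8.8.8
      # and for a LAN address
      ip route get 192.168.1.254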


  • Does Chrome use a different DNS resolver than Firefox and IE?

    - by Jian Lin
    Is it a common setup for Chrome to use a different DNS resolver than Firefox and IE? My Chrome (including the one on the virtual PC) will sometimes show "Resolving host" and wait there for 20-30 seconds, while Firefox and IE won't (so after 20-30 seconds with a blank page, the page finally fails to load). Is there something Chrome is doing differently that makes the difference?


  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about under 10 GB. At the main office we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network; we simply have access to a network share over a Novell LAN.

    Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.)

    Up to now, branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12) the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use GMail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far, and I don't have any good ideas about how to achieve the network/local mirroring either, as we can't, for example, install the rsync daemon on the server.

    I do not want to use Google Drive locally on each workstation, as this will inconvenience users and, more importantly, move files off the backed-up, well-maintained (UPS, RAID etc.) network share at head office. Our budget is only in the £100s. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.
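
    On the mirroring step: an rsync daemon on the server isn't actually needed when the share is already mounted on the designated workstation, since the rsync client can copy between two local paths. A sketch, with hypothetical mount points:

      # mirror the mounted head-office share into a local folder that
      # Google Drive (or any sync client) can then push to the cloud
      rsync -av --delete /mnt/headoffice-share/ /home/sync/mirror/
      # run from cron, e.g. every 15 minutes:
      #   */15 * * * * rsync -a --delete /mnt/headoffice-share/ /home/sync/mirror/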


  • Method to calculate downtime percentage

    - by Chris
    I need a calculation to work out the downtime percentage of a server. I am making a script that runs via cron every minute to check the uptime of a remote server. The two values I have to play with are the number of checks run and the number of times the checks failed (outages). Is this a plausible way of calculating it? I am thinking it must be, but can't be too sure, as my maths skills are slipping away from me with age!
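
    Those two counters are enough: downtime% = 100 x failed / total checks, and uptime% = 100 - downtime%. A worked sketch (the counts are made-up example values):

      checks=43200    # e.g. one check per minute for 30 days
      failed=37
      awk -v c="$checks" -v f="$failed" \
          'BEGIN { printf "downtime: %.3f%%  uptime: %.3f%%\n", 100*f/c, 100*(c-f)/c }'
      # -> downtime: 0.086%  uptime: 99.914%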


  • Node.js Production Server and Ubuntu Users

    - by baffonero
    I'm setting up a production server on Ubuntu 10.04 using this technology stack:

      - Node.js
      - Nginx to serve static content
      - Mongo
      - Redis
      - Upstart for running applications as services
      - Monit for monitoring the node applications and the nginx server

    The server will host only five applications of this type, nothing else. How would you set up the Ubuntu users? Is it a good idea to create a user per application? Would you install the software (node, mongo, ...) as root or as user(s)? Thanks in advance.
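
    A common pattern is system-wide installs done as root (via the package manager), with each application owned by and running as its own unprivileged user. A minimal sketch with example names; note that Upstart on 10.04 predates the setuid stanza, hence start-stop-daemon:

      # one unprivileged system user per application
      sudo adduser --system --group --home /srv/app1 app1
      # /etc/init/app1.conf might then contain (a sketch, not a complete job):
      #   start on filesystem and net-device-up IFACE=eth0
      #   respawn
      #   exec start-stop-daemon --start --chuid app1 \
      #       --exec /usr/local/bin/node -- /srv/app1/server.js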


  • Looking for a new, free firewall (Sunbelt has a huge hole)

    - by Jason
    I've been using Sunbelt Personal Firewall v4.5 (previously Kerio). I've discovered that blocking Firefox connections in the configuration doesn't stop EXISTING Firefox connections (see my post here yesterday: http://superuser.com/questions/132625/sunbelt-firewall-4-5-wont-block-firefox). The "stop all traffic" option may work on existing connections, but I'm done testing, as I need to be able to be selective at any time. I was using the free version, so the "web filtering" option quit working after some time (mostly blocking ads and popups), but I didn't use that anyway. I used the last free version of Kerio before finally having to go to Sunbelt, because Kerio had an unfixed bug where you'd eventually get the BSOD and have to reset Kerio's configuration and start over (configure everything again).

    So I'm looking for a new firewall. I don't like ZoneAlarm at all (no offense to its users that may be here; personal taste). I need the following (Sunbelt has all these, except the items marked *):

      1. Be able to block in/out to localhost (trusted)/internet selectively for each application with a click (so there are four click boxes for each application) [* that affects everything immediately, regardless of what's already connected]. When a new application attempts a connection, you get an allow/deny/remember window.
      2. Be able to easily set up filter rules for 'individual application'/'all applications', by protocol, port/address (range), local, remote, in, out. [* Adding a filter rule also doesn't block existing connections in Sunbelt. That needs to work too.]
      3. Have an easy-to-get-to way to "stop all traffic" (like a right-click option on the running icon in the task bar).
      4. Be able to set trusted/internet in/out block/allowed (four things per item) for each of IGMP, ping, DNS, DHCP, VPN, and broadcasts.
      5. Define localhost as trusted/untrusted; define adapter connections as trusted/untrusted.
      6. Block incoming connections during boot-up and shutdown.
      7. Show existing connections, including local and remote IP/port, protocol, current speed, total bytes transferred, and local ports opened for listening.
      8. An intrusion prevention system which blocks (optionally select each one) known intrusions (long list).
      9. Block/allow applications from starting other applications (deny/allow/remember window).

    Wish list: a way of knowing what svchost.exe is doing, i.e. who is actually using or calling it. I allowed it for localhost, and selectively allowed it for internet each time the allow/deny window came up.

    Thanks for any help/suggestions. (I'm using Windows XP SP3.)


  • Are you supposed to be able to "ping" specific pages of websites, or just the domain name?

    - by Bec
    (Sorry, I think my jargon is a bit off there; not sure.) I'm trying to work out what's going on with my podcasts not downloading properly. To see whether it was my pod-catching software or the connection, I tried doing a ping on the podcast URL, e.g. www.abc.net.au/rn/podcast/feeds/ockham.xml, and it failed (I got "could not find host"). It works for the first part of it, though: www.abc.net.au. I can get to the XML page in a web browser, though, and ping doesn't work on the podcasts which have been downloading correctly either.
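
    Ping only takes a hostname (or IP address); the /rn/podcast/... part is an HTTP path, which is why ping reports "could not find host" when handed the full URL. To exercise the actual page, an HTTP client is the closer test:

      # hostname only: this is all ping can resolve
      ping www.abc.net.au
      # fetching a specific page needs an HTTP client (curl assumed installed)
      curl -I http://www.abc.net.au/rn/podcast/feeds/ockham.xml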


  • Connect Cisco SBCS with Nortel BCM (H.323)

    - by D4
    Hi, I am wondering if it's possible to connect a Cisco Unified Communications 500 (at a branch office) to our main site's Nortel BCM as a remote gateway for VoIP communication. I've had a little experience with branch office telephony, but only with Nortel technology, and I don't know if the Cisco SBCS will play along with the other kids. :P Thanks in advance; any thought is more than welcome.


  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. If disaster struck site A's building, I want a second Storage Daemon on another site, site B. The filesets, Director, etc. would be the same, except the jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?
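
    One way Bacula supports this is a second Storage resource in the Director config plus Copy jobs that duplicate finished backups to it. A rough sketch of the Director side (resource names, addresses, pool and schedule names are examples, not a tested config):

      Storage {
        Name       = site-b-sd
        Address    = sd.site-b.example.com   # must be reachable from the Director
        Password   = "xxx"
        Device     = FileStorage
        Media Type = File
      }
      Job {
        Name = "CopyToSiteB"
        Type = Copy
        Selection Type = PoolUncopiedJobs    # copy everything not yet copied
        Pool = SiteAPool                     # source pool; its Next Pool points at site B
        Schedule = "NightlyAfterBackups"
      }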


  • Any tools for webvpn

    - by Stan
    OS: Windows XP. I need to use a web VPN to RDP to a remote host, but I don't want to use the default Java RDP window. So I'm just wondering: is there any tool that can do web VPN and provide an RDP function? Thanks.


  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code), then it's arguably Stack Overflow.

    I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the javascript and css files all fail to load: looking in the Network tab in Chrome web tools, I can see that it is trying to load them via an https url. E.g., one of the non-working file urls is:

      https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216

    I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So the redirect seems to be working in some sense, but not when they're loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max.

    Here's my nginx config (sorry it's a bit lengthy!). I think the error is likely to be in the first (ssl) server block:

      server {
          listen 443 ssl;
          keepalive_timeout 70;
          ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
          ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
          ssl_session_cache shared:SSL:10m;
          ssl_session_timeout 10m;
          ssl_protocols SSLv3 TLSv1;
          ssl_ciphers RC4:HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;
          server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
          root /home/max/work/charanga/elearn_container/elearn;

          # ensure that we serve css, js, other statics when requested as SSL,
          # but if the files don't exist (i.e. any non /basket controller)
          # then redirect to the non-https version
          location / {
              try_files $uri @non-ssl-redirect;
          }

          # securely serve everything under /basket (/basket/checkout etc)
          # we need general too, because of the email/username checking
          location ~ ^/(basket|general|cmw/account/check_username_availability) {
              # make sure cached copies are revalidated once they're stale
              add_header Cache-Control "public, must-revalidate, proxy-revalidate";
              # this serves Rails static files that exist without running
              # other rewrite tests
              try_files $uri @rails-ssl;
              expires 1h;
          }

          location @non-ssl-redirect {
              return 301 http://$host$request_uri;
          }

          location @rails-ssl {
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_read_timeout 180;
              proxy_next_upstream off;
              proxy_pass http://127.0.0.1:3000;
              expires 0d;
          }
      }

      #upstream elrs {
      #    server 127.0.0.1:3000;
      #}

      server {
          listen 80;
          server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
          root /home/max/work/charanga/elearn_container/elearn;
          access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
          error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug;
          client_max_body_size 50M;
          index index.html index.htm;

          # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6
          # (i.e. those *without* SV1 in their user-agent string)
          gzip on;
          gzip_http_version 1.1;
          gzip_vary on;
          gzip_comp_level 6;
          gzip_proxied any;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;  #text/html

          # make sure gzip does not lose large gzipped js or css files
          # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
          gzip_buffers 16 8k;

          # Disable gzip for certain browsers.
          #gzip_disable "MSIE [1-6].(?!.*SV1)";
          gzip_disable "MSIE [1-6]";

          # blank gif like it's 1995
          location = /images/blank.gif {
              empty_gif;
          }

          # don't serve files beginning with dots
          location ~ /\. {
              access_log off;
              log_not_found off;
              deny all;
          }

          # we don't care if these are missing
          location = /robots.txt   { log_not_found off; }
          location = /favicon.ico  { log_not_found off; }
          location ~ affiliate.xml { log_not_found off; }
          location ~ copyright.xml { log_not_found off; }

          # convert urls with multiple slashes to a single /
          if ($request ~ /+ ) {
              rewrite ^(/)+(.*) /$2 break;
          }

          # X-Accel-Redirect
          # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
          location /zips {
              internal;
              root /var/www/apps/e_learning_resource/shared/assets;
          }
          location /tmp {
              internal;
              root /;
          }
          location /mnt {
              root /;
          }

          # resource library thumbnails should be served as usual
          location ~ ^/resource_library/.*/*thumbnail.jpg$ {
              if (!-f $request_filename) {
                  rewrite ^(.*)$ /images/no-thumb.png break;
              }
              expires 1m;
          }

          # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
          location ~ "lesson viewer.dcr" {
              rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
          }

          # we need this rule so we don't serve the older lessonviewer when the rule below is matched
          location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
              rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
          }
          location ~ v6lessonViewer.swf {
              rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
          }
          location ~ lessonViewer.swf {
              rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
          }
          location ~ lgn111.dat {
              empty_gif;
          }

          # try to get autocomplete school names from memcache first, then
          # fallback to rails when we can't
          location /schools/autocomplete {
              set $memcached_key $uri?q=$arg_q;
              memcached_pass 127.0.0.1:11211;
              default_type text/html;
              error_page 404 =200 @rails;  # 404 not really! Hand off to rails
          }

          location / {
              # make sure cached copies are revalidated once they're stale
              add_header Cache-Control "public, must-revalidate, proxy-revalidate";
              # this serves Rails static files that exist without running other rewrite tests
              try_files $uri @rails;
              expires 1h;
          }

          location @rails {
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_read_timeout 180;
              proxy_next_upstream off;
              proxy_pass http://127.0.0.1:3000;
              expires 0d;
          }
      }
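
    One hedged observation on the ssl server block (an idea to test, not a confirmed diagnosis): once a page is served over https, browsers treat css and js fetched over plain http as mixed content and may refuse them even though the 301 itself works, so serving those assets directly over SSL sidesteps the redirect entirely, e.g.:

      # inside the ssl server block: serve asset files over https instead of redirecting
      location ~ ^/(stylesheets|javascripts|images)/ {
          try_files $uri @rails-ssl;
          expires 1h;
      }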


  • How to access Jenkins remotely on Ubuntu 12.04 server?

    - by quincyglenn
    I have installed Jenkins and opened port 8080 on an Ubuntu 12.04 server, but I still can't access Jenkins remotely. Below are the steps I took.

      # Install Jenkins, enable UFW and open port 8080
      sudo apt-get install jenkins
      sudo ufw enable
      sudo ufw allow 8080
      sudo ufw reload

      # Check the status
      sudo ufw status
      8080    ALLOW    Anywhere
      8080    ALLOW    Anywhere (v6)

      # Locally
      curl -I localhost:8080
      HTTP/1.1 200 OK
      Server: Winstone Servlet Engine v0.9.10
      ...

      # On an external machine
      curl -I [ip]:8080
      couldn't connect to host
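
    One thing worth checking (a suggestion, not a confirmed fix): the Debian/Ubuntu package reads /etc/default/jenkins, and if JENKINS_ARGS pins --httpListenAddress to 127.0.0.1 there, Jenkins only answers locally no matter what the firewall allows:

      grep -i httplistenaddress /etc/default/jenkins
      # listening on all interfaces would look like this (a sketch of the arg, not your full line):
      #   JENKINS_ARGS="... --httpPort=8080 --httpListenAddress=0.0.0.0"
      sudo service jenkins restart
      # then confirm what the port is actually bound to:
      sudo netstat -tlnp | grep 8080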

